Spring Research Day 2023

Graphic including an aerial view of UMN, the Spring Research Day theme, and sponsoring departments.

The Center for Applied and Translational Sensory Science (CATSS) and the Center for Cognitive Sciences present Spring Research Day 2023! This is an annual, university-wide symposium that showcases outstanding student research. This year's theme is:

 

This event is open to all undergraduate and graduate students, staff, and faculty whose interests align with the science of sensory disability.

Diagram with a nose, hand, eye, mouth, and ear icon each in a colored circle connected to an icon of a brain in a circle.

Attendees will be able to:

  • Showcase their research with short talks and poster presentations
  • Participate in interactive workshops that discuss research methodologies and strategies for accessible dissemination of research 
  • Network with interdisciplinary researchers across all experience levels

 

Congratulations to our presentation winners!

Judge's Choice Talk:

"Visual snow is affected by contrast adaptation"; Samantha Montoya

People's Choice Talk:

“Physically active lifestyle may protect against age-related decline in ankle position sense acuity”; Jacquelyn Sertic

People's Choice Poster:

"Truth in the Pragmatic Turn"; Brian Tebbitt

 


Please see below for additional details.

Questions: Contact Jacquelyn Sertic at [email protected] or Shelby Ziccardi at [email protected]


 

Event Details


Schedule

Thursday April 13th

10:30am Sign in

10:45am - 11:45am Workshop on Measuring the Brain

         10:45am EEG

         11:05am MRI

         11:25am fNIRS 

12:00pm - 1:00pm Workshop on Measuring the Body

         12:00pm Motion capture

         12:30pm Robotic exoskeletons with eye tracking

3:00pm - 4:00pm Panel on Ethical Science within Disabled Communities

4:15pm - 5:15pm Workshop on Accessible Science Communication

         4:15pm Accessible data visualization

         4:45pm Accessible presentation

----------------------------------------

Friday April 14th

9:00am - 9:40am Check in and morning refreshments

9:40am - 9:45am Welcome and introduction to keynote speaker

9:45am - 10:55am Keynote speaker presentation with Dr. Laurie King, PhD, PT, MCR

11:00am - 12:00pm Student poster session

12:00pm - 12:45pm Lunch 

12:45pm - 1:45pm Student oral presentation session 1

1:55pm - 2:55pm Student oral presentation session 2

2:55pm - 3:00pm Closing remarks

3:00pm Casual reception

Keynote Speaker

Dr. Laurie King, PhD, PT, MCR

 

Portrait of Dr. Laurie King, a woman with grey-ish brown shoulder length hair and a navy blue top smiling directly at the camera.

Laurie King, PhD, PT, MCR is currently a Professor in the Department of Neurology at Oregon Health & Science University (OHSU) and co-director of the Balance Disorders Laboratory. She received her Doctor of Philosophy degree from the Medical College of Virginia in Richmond, Virginia, in Anatomy and Neurobiology. Prior to that, she graduated from the Mayo School of Health Sciences in Rochester, Minnesota, with a Master's in Physical Therapy. She has over 15 years of clinical experience treating neurologically impaired patients. She also holds a Master's in Clinical Research from OHSU and has over 70 peer-reviewed publications. She is currently funded by the NIH and the Department of Defense to study balance, gait, and rehabilitation in people with neurologic disorders. Her current research interests include the study of gait and balance deficits in people with neurologic disorders, including traumatic brain injury and Parkinson's disease. Specifically, she studies emerging technologies such as wearable sensors to detect deficits. She is also interested in rehabilitation techniques and improving best practices for rehabilitation in people with neurologic disorders. For more information visit https://www.ohsu.edu/people/laurie-king-phd-pt-mcr or https://www.ohsu.edu/school-of-medicine/neurology/balance-disorders-laboratory.


Talk title: Sensorimotor Rehabilitation; Finding Meaning in Research

Talk abstract: In this talk, I will discuss sensorimotor rehabilitation in terms of new trends and interesting directions from my perspective. I have been in the field both as a physical therapist treating patients with sensorimotor disabilities and, now, as a professor in the Department of Neurology and co-director of the Balance Disorders Laboratory at Oregon Health & Science University. I will discuss how we are using wearable sensors to bring what we have learned in the laboratory out into the community, both for assessment and treatment of sensorimotor deficits. I will also discuss how we are using a model for measuring sensorimotor function and what we are finding in people after concussion. During this talk, I will discuss my journey in the field, both as a woman and as someone who took several twists and turns in my career.

Interactive Workshop Hosts

Workshop on Measuring the Brain: 10:45am - 11:45am

Topics & Hosts

  • MRI - Trevor Day - Doctoral student in the Institute of Child Development (UMN)
  • EEG - Justin Fleming, PhD - Post-Doctoral Associate in Speech Language Hearing Sciences (SLHS) (UMN)
  • fNIRS - Kristi Oeding, AuD, PhD, CCC-A - Assistant Professor/Audiologist in the Department of Speech, Hearing, & Rehabilitation Services at Minnesota State University, Mankato

Content: This workshop will contain an overview of various methods and technologies for measuring neural activity and how they can be integrated into different research projects. Discussion will be encouraged between attendees and hosts!


 

Workshop on Measuring the Body: 12:00pm - 1:00pm

Topics & Hosts:

  • Gaze Tracking with Robotic Exoskeletons - Rachel Hawe, PT, DPT, PhD - Assistant Professor in the School of Kinesiology (UMN) and Ally Richardson - Master's student in the School of Kinesiology (UMN)
  • Motion Capture - Aaron Hellem, PT, DPT - PhD Candidate in the Division of Physical Therapy (UMN)

Content: This workshop will contain an overview of various methods and technologies for measuring body movements and how they can be integrated into different research projects. Discussion will be encouraged between attendees and hosts!


 

Workshop on Conducting Ethical Science within Disabled Communities: 3:00pm - 4:00pm

Panel Members

  • Erin O'Neill, PhD - Visiting Scientist at the Center for Applied and Translational Sensory Science (CATSS) (UMN)
  • Samantha Montoya - Doctoral Student in the Graduate Program in Neuroscience (UMN)
  • Katherine Teece, AuD - Research Audiologist in Speech Language Hearing Sciences (SLHS) (UMN)

Content: This workshop will be an interdisciplinary panel discussion about participatory study design, culturally sensitive research, mixed methodology, and more. Q&A will be encouraged between attendees and hosts!


 

Workshop on Accessible Scientific Communication: 4:15pm - 5:15pm

Topics & Hosts:

  • Accessible Data Visualization - Neha Bansal - Senior Director of Application Development in the Office of Information Technology (UMN)
  • Accessible Research Presentations - Kellie Greaves - OIT User Support, Accessible U (UMN)

Content: This workshop will be an overview of methods and best practices for communicating research findings in an accessible way. Discussion will be encouraged between attendees and hosts!

Oral Talk Abstracts

Contextual modulation of laminar BOLD profiles in V1

Authors: Joseph Emerson, Karen Navarro, and Cheryl Olman

Session 1

Abstract:

In primary visual cortex (V1), both long-range lateral connectivity and feedback from higher order visual areas contribute to shaping neural responses based on spatial context. However, it is unclear exactly how and to what extent lateral and feedback connectivity individually contribute to contextual modulation of neural responses in V1. Developments in ultra-high-field functional magnetic resonance imaging (fMRI) have enabled non-invasive imaging of cortical lamina in the human brain, which can be exploited to examine the cortical origins of neural signals underlying blood-oxygenation-level-dependent (BOLD) contrast. We acquired data from six participants using 7T fMRI at 0.6 mm isotropic resolution to measure the influence of visual context on BOLD response profiles across cortical depth in V1. Participants viewed small sine-wave grating disks embedded in large surround gratings with matched spatial frequency and contrast. Segmentation cues were provided by either an offset in relative orientation or an offset in relative phase between center and surround gratings for a total of three context conditions plus a surround-only condition to measure the effects of cortical feedback in the absence of feedforward input. The context conditions allowed us to isolate the effects of orientation-tuned surround suppression (OTSS), a canonical example of contextual modulation in V1, from non-orientation dependent figure-ground modulation (FGM). We found significant modulation of BOLD signal in center-selective voxels in the absence of feedforward input, suggesting that feedback and recurrent connections can drive strong BOLD responses in V1. Surprisingly, we found only weak signatures of OTSS that were primarily localized to superficial layers. We conclude that a large fraction of the BOLD signal measured in V1 cannot be attributed to feedforward mechanisms and that feedback appears to modulate the BOLD response broadly across cortical depth.


 

Physically active lifestyle may protect against age-related decline in ankle position sense acuity

Authors: Jacquelyn Sertic and Jürgen Konczak

Session 1

Abstract:

Ankle proprioception is essential for balance control. However, ankle proprioception can decline in older adulthood, and this decline has been linked to a higher incidence of falls. This study examined whether physically active older adults are spared from such proprioceptive decline. Using the Ankle Proprioceptive Acuity System (APAS) and applying an adaptive psychophysical testing paradigm, ankle position sense acuity was assessed in 57 neurotypical middle-aged and older adults (50-80 years) and 14 young adults (18-30 years). A participant's unloaded foot was passively rotated from a neutral joint position to a reference (15 or 25 deg plantarflexion) and a comparison position (< reference). Participants verbally indicated which position was further from neutral. Appropriate stimulus-response functions were fitted, and Just-Noticeable-Difference (JND) thresholds and Uncertainty Areas (UA) were derived. The JND threshold is a measure of perceptual bias, while UA is a measure of precision. The main finding of the study was that no significant differences in JND threshold or UA were found between the middle-aged and older adult groups (50-60, 60-70, 70-80 years). These data indicate that active older adults may be spared from age-related decline in ankle position sense. These findings encourage older adults to become or remain active during aging.
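For readers less familiar with adaptive psychophysics, the sketch below illustrates in Python how a JND-style threshold and an uncertainty-area-style spread can be read off a fitted stimulus-response function. It is a minimal illustration under assumed definitions (cumulative-Gaussian fit, 75% criterion, interquartile spread) and hypothetical data, not the authors' APAS analysis.

```python
# Illustrative sketch only (not the authors' code): fit a cumulative-Gaussian
# psychometric function to position-sense judgments and derive a JND-style
# threshold and an uncertainty-area-style spread. Definitions are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: difference between reference and comparison positions (deg)
# and proportion of trials on which the reference was judged further from neutral.
diff_deg = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
p_correct = np.array([0.52, 0.58, 0.66, 0.74, 0.85, 0.93, 0.97])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian scaled from 0.5 (chance) to 1.0
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, diff_deg, p_correct, p0=[2.0, 1.0])

# Illustrative readouts: with this parameterization, the 75%-correct point is mu;
# the spread is the interquartile range of the underlying Gaussian.
jnd_75 = mu
ua = norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)
print(f"JND ~= {jnd_75:.2f} deg, uncertainty area ~= {ua:.2f} deg")
```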



 

Dyad motor learning in a wrist-robotic environment: Learning together is better than learning alone

Authors: Leoni V. Winter, Stefan Panzer, and Juergen Konczak

Session 2

Abstract:

Background: Robot-assisted therapy has become an established neurorehabilitation tool. However, its use is limited by high therapy costs and limited access to robotic devices. Dyad learning is a learning paradigm in which participants alternate between observation and physical practice. Evidence indicates that dyad learning leads to better motor outcomes and reduced practice time compared to physical practice alone. Implementing dyad learning in robot-assisted rehabilitation has the potential to improve patient outcomes and learning speed. This study aims to determine the effects of dyad learning on motor performance in a controlled wrist-robotic environment to evaluate its potential use in rehabilitation settings.
Methods: Forty-two participants were randomized into three groups (n = 14 each): dyad learning, physical practice, and control. Participants practiced a 2 degree-of-freedom gamified motor task for 20 trials using a custom-made wrist-robotic device. Motor performance was measured at baseline, at the end of training, and at 24-hour retention.
Results: Motor performance did not differ between groups at baseline, and all groups improved their performance compared to baseline (p<0.05). However, the dyad group outperformed the other groups at the end of training (p=0.001; Cohen’s d=0.954) and at retention (p=0.012; d=0.617).
Conclusion: Compared to physical practice alone, practicing collaboratively by alternating between physical and observational practice leads to superior motor outcomes after practicing a robot-aided gamified motor task. The results demonstrate that implementing dyad learning in robot-assisted motor learning can reduce required practice time and improve motor performance beyond physical practice alone. We conclude that applying dyad learning in robot-assisted rehabilitation regimens can help improve patient outcomes and reduce therapy costs.


 

Barriers/Facilitators to Over-the-Counter Hearing Aid Use in People With Cognitive Impairment

Authors: Dana Urbanski, Peggy Nelson, Rajean Moone, Tetyana Shippee, and Joseph Gaugler

Session 2

Abstract:

Objectives: Evidence suggests that older adults with intact cognition can successfully self-select, program, and manage over-the-counter (OTC) hearing aids. However, to date, it is unknown whether individuals with cognitive impairment can understand and use OTC hearing aids. The present study is an exploratory qualitative examination of stakeholder-perceived barriers to and facilitators of OTC hearing aid use in people with cognitive impairment.
Design: Semi-structured interviews are conducted with three groups: 1) community-dwelling older adults with cognitive impairment and hearing loss; 2) care partners of older adults with cognitive impairment and hearing loss; and 3) direct care professionals. Interviews are transcribed and analyzed following established steps for thematic analysis. From the resulting themes, we identify key barriers and facilitators.
Results: Results reflect interviews with 10 care professionals, 12 care partners, and five individuals with cognitive impairment. Identified barriers include care partner stress and burden; reliance on additional technologies such as Bluetooth and smartphone applications; low confidence in the effectiveness of self-programmed hearing aid settings; reduced ability for people with cognitive impairment to problem-solve and/or troubleshoot technology issues; and difficulty obtaining reliable self-report of sensory symptoms from people with cognitive impairment. Facilitators include readily available technical support; concise written instructions with pictures; high-quality instructional videos; and high levels of mutual trust between the person with cognitive impairment and their care partner.
Conclusions: Older adults with cognitive impairment face a complex set of challenges when using OTC hearing aids. To benefit from OTC devices, these individuals may need specialized instructional modes as well as delivery models and fitting paradigms specifically designed to facilitate effective care partner involvement.


 

Genotype-phenotype correlation in a cohort of patients with retinitis pigmentosa (RP) and Leber congenital amaurosis (LCA) by next-generation sequencing

Authors: Richard Sather III, Michael Simmons, Tahsin Khundkar, Jacqueline Ihinger, Glenn Lobo, and Sandra Montezuma

Session 2

Abstract:

Purpose: This study identifies pathogenic variants in patients diagnosed with syndromic / non-syndromic RP and LCA. These relationships are important because future therapies may cater to the specific genetic variants underlying these conditions and the degree of their physical expression.
Methods: We performed a retrospective analysis of all patients with RP and LCA who presented to our institution between May 1, 2015, and August 4, 2022. A database was created to record history and examination, diagnostic imaging, and the results of genetic testing.
Results: 151 patients with non-syndromic RP, 48 with syndromic RP, and 31 patients with LCA were included. For the RP cohort, presenting symptoms included nyctalopia (85.4%), photosensitivity/hemeralopia (60.5%), and decreased color vision (55.8%). On OCT, 73.6% had an ellipsoid zone band width of less than 1500 μm. 99.0% had fundus autofluorescence (AF) findings of a hypo- or hyper-fluorescent ring within the macula and/or peripheral hypo-AF. 54.3% had a diagnostic pathogenic gene variant identified. The top identifiable pathogenic variants were USH2A (14.3%), RPGR (7.5%), and MYO7A (6.8%). For the LCA cohort, presenting symptoms included nyctalopia (85.7%), photosensitivity/hemeralopia (57.9%), and decreased color vision (83.3%). 87.5% had an ellipsoid zone band width on OCT scan of less than 1500 μm, and 88.9% had a hypo- or hyper-AF ring within the macula and/or peripheral hypo-AF. 72.4% had a diagnostic pathogenic gene variant identified. The top identifiable diagnostic pathogenic variants included RPE65 (19.0%), CEP290 (19.0%), and GUCY2D (19.0%).
Conclusions: Patients with RP and LCA often present with advanced disease independent of the genetic result. In our population, the RP phenotype is more genetically heterogeneous than LCA. In addition, patients with LCA have worse vision than those with RP. Finally, LCA and RP patients share many of the same presenting symptoms and structural findings.

Poster Session Abstracts

Spatial Configuration of Touch Actions in a Mouse Bandit Decision Making Task

Authors: Dana Mueller, Erin Giglio, Cathy Chen, and Nicola Grissom

Abstract

In bandit decision making tasks, the challenge of sampling between options versus settling on a currently best option is known as the explore-exploit tradeoff. Across species, there is substantial evidence that explore and exploit can be defined as neurobehavioral states using a hidden Markov model (HMM) approach. Using a restless bandit task, in which the reward probability of each choice changes randomly and independently across trials, we see that animals enter self-initiated periods of exploration and exit these to begin exploiting an option. In mice, the use of touchscreen chambers allows us to record precise locations for each mouse touch, allowing us to consider detailed information about how decisions translate into physical motion. Transitioning between explore and exploit states could be considered an online change in cognitive flexibility, which may be reflected in motor and behavioral flexibility. We took advantage of the data on touch locations to test whether individual trials labeled as exploit by our HMM are accompanied by more stereotyped motor behaviors in choice selection than the same choices during explore states. Thirty-two 129/b6j F1 mice (16 male and 16 female) were tested on restless bandit schedules. We find that successive touches to the same choice are further apart while an animal is in an explore state than in an exploit state, suggesting greater motor stereotypy when exploiting an option. Male mice tended to have a wider range of nosepoke coordinates than female mice across states, suggesting different levels of coordination between motor and cognitive systems across sexes. This novel analysis has the potential to allow all labs using touchscreens to investigate how stereotyped motor behaviors may be captured in response data and reflect hidden contributions to decision flexibility.
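As a rough illustration of the touch-location analysis described above, the sketch below computes how far apart successive touches to the same choice land, split by HMM-labeled state. The column names, coordinates, and state labels are hypothetical placeholders, not the authors' data or pipeline.

```python
# Illustrative sketch only: given per-trial HMM state labels ("explore"/"exploit"),
# the chosen option, and touchscreen coordinates, compare how far apart successive
# touches to the same choice land in each state. All values below are placeholders.
import numpy as np
import pandas as pd

trials = pd.DataFrame({
    "choice": ["L", "L", "R", "R", "R", "L"],
    "state":  ["explore", "explore", "exploit", "exploit", "exploit", "explore"],
    "x": [102.0, 110.5, 240.1, 241.0, 239.5, 98.0],
    "y": [310.0, 305.2, 308.8, 309.5, 310.1, 300.0],
})

def successive_same_choice_distances(df):
    out = []
    for _, grp in df.groupby("choice"):
        dx = grp["x"].diff()
        dy = grp["y"].diff()
        dist = np.sqrt(dx**2 + dy**2)          # distance between consecutive touches to this choice
        out.append(pd.DataFrame({"state": grp["state"], "dist": dist}))
    return pd.concat(out).dropna()

dists = successive_same_choice_distances(trials)
print(dists.groupby("state")["dist"].mean())    # explore vs. exploit touch spread
```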


 

Cochlear Implant Listeners' Perception of Temporal and Spectral Voicing Cues

Authors: Lexi Olson and Matthew Winn

Abstract

Objectives: Perception of voice onset time (VOT) is often used as an index of auditory temporal processing. However, some VOT stimuli are less appropriate for this goal because of the subsequent vowel. For example, there are complementary frequency cues for the low vowel /ɑ/, but not for the high vowel /i/, meaning vowel context affects studies on timing perception. Cochlear implant listeners who have aidable hearing can perceive those frequency cues. This could be a key benefit, but also pose a problem for researchers who are looking to determine the ability to understand only timing cues. We hypothesize that VOT perception will be better when the following vowel is /ɑ/, because it contains both temporal and spectral cues for voicing, whereas /i/ provides only temporal cues.
Design: Listeners with cochlear implants or with normal hearing completed a phonemic identification task where they categorized sounds varying by VOT between /d/ and /t/. Vowel formant transitions naturally covaried with VOT duration. The vowel was either /ɑ/, which allowed perception of spectral contours, or /i/, which did not. The onset consonant of each word was categorized as voiced or voiceless and modeled using a binomial statistical model that included VOT, vowel, and hearing as fixed main and interacting effects.
Results: Both listener groups reliably categorized voicing contrasts. Performance was better in the /ɑ/ context, suggesting use of spectral cues. This is likely due to differences in the availability of formant transitions.
Conclusions: Spectral cues in vowels can play a role in perceiving voice onset time. Vowels that contain formant contours can create a confounding variable when studying VOT perception. Perception of formant transitions can interfere with measuring auditory temporal perception.
Learner outcomes: The audience will be able to identify the confounding variables that vowels can have when conducting experiments related to VOT and perception of timing cues.
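The binomial model described in the Design above could be sketched roughly as follows. This is a hedged illustration with simulated placeholder data and fixed effects only; the authors' actual model and variable names may differ (for instance, it likely also accounted for listener-level variation).

```python
# Hedged sketch (not the authors' analysis code): a binomial/logistic model of
# voicing judgments with VOT, vowel, and hearing group as interacting fixed effects.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "vot_ms": rng.uniform(0, 60, n),            # voice onset time continuum (ms), placeholder
    "vowel": rng.choice(["a", "i"], n),         # following vowel, placeholder coding
    "group": rng.choice(["CI", "NH"], n),       # hearing group, placeholder coding
})
# Placeholder responses: probability of a "voiceless" judgment grows with VOT
p = 1 / (1 + np.exp(-(df["vot_ms"] - 30) / 8))
df["voiceless"] = rng.binomial(1, p.to_numpy())

model = smf.glm("voiceless ~ vot_ms * vowel * group",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())
```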


 

Central Limitations on FM Sensitivity

Authors: Kelly Whiteford and Penelope Corbett

Abstract:

Frequency modulation (FM) is ubiquitous in natural sounds such as speech and music, making FM necessary for the efficient processing of information. Humans are particularly sensitive to FM at low carrier frequencies (< ~5 kHz) with slow modulation rates (< 5-10 Hz). This sensitivity is thought to be afforded by precise neural phase locking to temporal fine structure (TFS) in the auditory nerve. At faster modulation rates and/or higher carrier frequencies, FM is instead thought to be coded by a coarser frequency-to-place-mapping code, where FM is converted to amplitude modulation via cochlear filtering. Research showing that F0 perception for steady (unmodulated) tones is possible even for spectrally resolved harmonics above the putative limits of phase locking challenges assumptions about the necessity of TFS cues for eliciting pitch, and could have implications for FM coding. For instance, it is possible that poorer FM detection at high carrier frequencies and fast rates could be explained by a weaker pitch percept rather than the absence of TFS cues. To investigate which one is the greater limiting factor for FM detection, we presented listeners with an adaptive FM task using complex tones that varied in a) harmonicity, b) modulation rate, and c) whether their spectra included harmonics within or above the putative limits of phase locking. A repeated-measures ANOVA revealed significant interactions between harmonicity and modulation rate as well as modulation rate and harmonic condition. As predicted, an overall slow-rate benefit was observed for all conditions, even those for which TFS was not accessible, but this benefit was significantly greater for harmonic than inharmonic complexes. Even when the necessary requirements for optimal TFS coding were in place, a weaker perception of the F0 impaired FM perception, thus providing more evidence in favor of the unitary place code theory for pitch and FM perception.


 

Atypical Activation of Laryngeal Somatosensory-Motor Cortex during Vocalization in People with Unexplained Chronic Cough

Authors: Jiapeng Xu, Stephanie Misono, Jason Kang, Jinsok Oh, and Jürgen Konczak

Abstract

Importance: Chronic cough (CC) affects up to 10% of the general population, yet its etiology is not well understood. Enhancing our understanding of how peripheral and central neural processes contribute to CC is essential for treatment design.
Objective: Determine whether people with CC exhibit signs of abnormal sensory and motor neural processing over laryngeal sensorimotor cortex during voluntary laryngeal motor activity such as vocalization.
Design: The study followed a cross-sectional design. In a single visit, electroencephalographic signals were recorded from people with CC and healthy controls during voice production.
Participants: A convenience sample of 13 individuals with chronic cough and 10 healthy age-matched controls participated.
Outcome Measures: 1) Event-related spectral perturbation over the laryngeal area of somatosensory-motor cortex between 0-30 Hz. 2) Event-related coherence as a measure of synchronous activity between somatosensory and motor cortical regions.
Results: In the CC group, the typical movement-related desynchronization over somatosensory-motor cortex during vocalization was significantly reduced across theta, alpha and beta frequency bands when compared to the control group.
Conclusions and Relevance: The typical movement-related suppression of brain oscillatory activity during vocalization is weak or absent in people with chronic cough. Thus, chronic cough affects sensorimotor cortical activity during the non-symptomatic, voluntary activation of laryngeal muscles.
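As a simplified illustration of the spectral-perturbation outcome measure, the sketch below computes percent power change from a pre-event baseline for a single EEG channel. The sampling rate, windows, frequency band, and data are placeholder assumptions, not the study's actual pipeline.

```python
# Hedged illustration only: movement-related (de)synchronization expressed as
# percent power change from a pre-vocalization baseline, for one EEG channel.
import numpy as np
from scipy.signal import spectrogram

fs = 500                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * fs)          # placeholder 10-s single-channel epoch

f, t, Sxx = spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)

baseline = Sxx[:, t < 2.0].mean(axis=1, keepdims=True)   # assumed 0-2 s baseline window
ersp = 100 * (Sxx - baseline) / baseline                  # % change from baseline

alpha = (f >= 8) & (f <= 12)                              # alpha band, one example band
print(ersp[alpha].mean(axis=0))             # alpha-band ERD/ERS time course (% change)
```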


 

Repeated moments of listening effort build to listening fatigue: A pilot study

Authors: Michael Smith and Matthew Winn

Abstract

Objective: People with hearing impairment who wear a cochlear implant (CI) report listening effort as a major barrier to successful social communication. Effort appears to lead to listening-related fatigue, but the mechanistic connection between effort and fatigue is unknown. One situation that is effortful is mentally repairing missing words. We hypothesize that repeatedly needing to mentally repair words will lead to greater levels of fatigue, as indicated by increases in reaction time (RT) and decreases in sentence recognition.
Design: CI listeners completed three tasks: 1) pre-test RT; 2) a listening task; and 3) post-test RT. RTs were measured for an inhibition task, followed by a fatigue questionnaire. During the listening task, they heard blocks of sentences that were either normal speech or had a single word missing but recoverable from context. Pupil dilation was measured during the listening task to assess momentary effort. After listening, they completed a sentence recognition task, where a sentence was shown as text and they were asked whether they had heard it previously. Afterward, listeners completed the RT task and questionnaire to measure how effortful listening affected fatigue. This three-phase testing sequence was completed four times in total.
Results: Preliminary pupillometry results show larger increases in pupil dilation and slower decay of pupil size over time when needing to mentally repair a missing word compared to intact sentences. Participants also reported higher levels of fatigue at the end of the experiment relative to baseline. RTs and recognition memory showed no evidence of fatigue.
Conclusions: Preliminary results indicate the potential impact of repeated moments of elevated listening effort on accumulated fatigue. Future results will have implications for the development of rehabilitation strategies to address patient concerns of effort and mental fatigue.


 

Masking Effects of Amplitude Modulation on Frequency-Modulated Tones

Authors: Kelly Whiteford, Neha Rajappa, PuiYii Goh, and Andrew Oxenham

Abstract:

Human sensitivity to frequency modulation (FM) is best for low carrier frequencies (fc < ~4-5 kHz) and slow modulation rates (fm < 5-10 Hz) that are most relevant for speech and music. This high sensitivity is thought to be afforded by neural phase locking to temporal fine structure (TFS), providing precise temporal information about the stimulus periodicity. At faster rates, and at higher carriers regardless of rate, TFS cues may no longer be available, with sensitivity to FM instead relying on amplitude modulation (AM) of the temporal envelope, produced by FM-to-AM conversion via cochlear filtering. The AM produced by FM-to-AM conversion differs from traditional AM in that the sweeping of FM through the tonotopic axis results in AM cues that are out of phase between tonotopic locations with characteristic frequencies above and below the carrier frequency. Imposing AM on an FM carrier has been proposed as a way to disrupt envelope but not TFS coding. This study tested AM masking of FM: FM detection was measured for carrier frequencies of 1 and 6 kHz and modulation rates of 2 and 20 Hz with and without AM masking. Preliminary results suggest that FM and simulated FM sensitivity is more affected by AM at fast rates than at slow rates, at both low and high carrier frequencies. Overall, the results do not provide support for the idea that AM interference can be used to distinguish between TFS- and envelope-based codes for FM.
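For intuition about the stimuli, the sketch below synthesizes a sinusoidally frequency-modulated tone and imposes an additional sinusoidal amplitude modulation, in the spirit of the AM-masking manipulation described above. All parameter values (sampling rate, excursion, depth, rates) are illustrative assumptions, not the study's actual settings.

```python
# Illustrative stimulus sketch (assumed parameter values, not the study's stimuli):
# a sinusoidal FM tone with an imposed sinusoidal AM intended to interfere with
# envelope cues.
import numpy as np

fs = 48000                        # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)     # 1-s tone
fc, fm, df = 1000, 2, 10          # carrier Hz, FM rate Hz, peak frequency excursion Hz

# FM tone: the instantaneous phase integrates the modulated frequency
fm_tone = np.sin(2 * np.pi * fc * t + (df / fm) * np.sin(2 * np.pi * fm * t))

# Impose AM masking (rate and depth are placeholder values)
am_rate, am_depth = 2, 0.5
masked = fm_tone * (1 + am_depth * np.sin(2 * np.pi * am_rate * t))
```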


 

Hearing Safety of Transcranial Ultrasound Parameters for a Novel Hearing Aid Technology

Authors: John Basile, Gerardo Rodriguez, and Hubert Lim

Abstract:

Ultrasound (US) research has grown rapidly in the past decade, showing exciting potential therapeutic applications for noninvasively modulating brain regions and creating transient openings in the blood-brain barrier for drug delivery. While investigating the abilities of US neuromodulation, our lab discovered that US applied to the head readily activates the auditory system (Guo et al., Neuron, 2018). Due to the potential applications of US-induced auditory activation, our group is interested in characterizing safe parameter settings of US for the hearing system. To characterize safe parameter settings, we collected auditory brainstem responses (ABRs) and electrocochleography (ECochG) in response to air-conducted broadband noise and pure tones (2, 4, 8, 12, 20, and 30 kHz) at varying levels (10-70 dB SPL) before and after US stimulation in anesthetized guinea pigs. Control data were collected to characterize the stability of the recording protocol and to provide a standard noise-induced hearing loss (NIHL) comparison. We assessed thresholds and amplitudes over time to identify changes that are associated with hearing damage or the experimental setup. Some tested US parameters showed neurophysiological changes associated with hearing loss, especially parameters using unmodulated, unramped US. Adding ramps to the US stimuli reduced the extent of observed changes. Parameter settings used to effectively encode complex information (i.e., guinea pig vocalizations) to the auditory system did not show those neurophysiological changes associated with hearing loss at lower pressures and more closely resembled the stability control data. Changes that occurred included threshold shifts that were most prevalent in the high frequencies and a reduction in ABR wave amplitudes. The patterns of hearing loss resembled those of standard NIHL. Future studies will include characterization of the safe and effective US parameters in large animal models that better mimic the head size of humans.


 

Characterizing and decoding visual percepts of real objects in a blind individual using fMRI

Authors: Jesse Breedlove, Logan Dowdle, Thomas Naselaris, and Cheryl Olman

Abstract

The brain can generate vivid and concrete visual perceptions in the absence of retinal input (e.g., hallucinations and dreams), which are associated with visual cortex activation. However, these percepts are, by definition, disconnected from reality. Can the visual cortex support accurate visual representations of one's external environment using only non-visual sensory modalities? In other words, can the brain see without the eyes? We explore this question through a case study of patient NS, a woman who lost her sight to retinal degeneration and now “sees” the objects she infers are around her through touch, proprioception, and sound. Unlike imagery, these representations are determinate, involuntary, and persist as long as she infers the object to be within her line-of-sight. We triggered NS's non-optic visual perceptions while recording fMRI BOLD activity by having NS touch and place 3D objects on a plexiglass tray that held them suspended in her field-of-view. A GLM analysis found significant patterns of activation in her visual cortex that resembled the patterns of activation in a sighted control who viewed the same objects through typical retinal vision. This contrasted with the very little significant activity found when both participants imagined seeing the same objects, substantiating NS’s reports that her non-optic sight is phenomenologically more similar to the retinal vision that she lost than to mental imagery. Furthermore, activity patterns within the visual cortex could be used to accurately decode the objects that NS was “seeing” but no longer touching, and changes in object size resulted in a corresponding change in the topological extent of the increased brain activity. This demonstrates that the activity associated with non-optic sight is both specific to and representative of the individual stimuli. These findings suggest that the visual cortex can support concrete visual experiences that accurately interpret non-retinal sensory input.
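The decoding result described above is conceptually similar to standard cross-validated pattern classification. The sketch below shows a generic version with simulated data and a linear classifier; it is not the authors' analysis, and the array shapes and labels are placeholders.

```python
# Hedged sketch only: cross-validated decoding of object identity from voxel
# activity patterns, in the spirit of the decoding analysis described above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))           # 60 trials x 500 visual-cortex voxels (placeholder)
y = np.repeat(np.arange(6), 10)          # 6 objects, 10 trials each (placeholder labels)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = {1/6:.2f})")
```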


 

Perception of Prosodic Cues to Correct Mistakes by Listeners with Normal Hearing and Cochlear Implants

Authors: Harley Wheeler, Tereza Krogseng, and Matthew Winn

Abstract

Prosody is used to mark important information in speech, yet it is not an integral part of speech recognition testing among people with hearing difficulty. While a listener may correctly perceive the words spoken, they may not perceive meaningful emphasis on a certain word, which could be critically important in a variety of everyday situations. This study introduces a new paradigm for assessing perception of prosodic cues in listeners with normal hearing and with cochlear implants, who have notorious difficulty perceiving pitch. Stimuli consisted of spoken sentences in which one word (in various sentence positions) was emphasized in a manner that indicated that a prior statement was incorrect in a specific way. Participants used a visual analog scale to mark the timing and degree of emphasis aligned with the target words. Here the perceptual data are linked with acoustic measures of voice pitch contour, intensity, duration, and vocal quality to characterize how contrastive stress cues are recovered by listeners with and without hearing impairment.


 

Perspectives on Interaction: The Disability Community and Law Enforcement

Authors: Ryan Schwantes, Grace Song, and Luke Drummer

Abstract

Informed, accommodative policing is vital to the safety and well-being of disabled people and communities. Police interact with people with disabilities at a disproportionately higher rate than with non-disabled people, and these interactions are more likely to result in violence and wrongful arrests. One of the biggest challenges in the policing of disabled people is how varied and nuanced disabilities can be, both between and within different disabled communities. In recent years, efforts have been made to better accommodate people with disabilities, yet the effectiveness of these efforts is unclear. Here we seek to elucidate the current state of affairs and the effectiveness of the current methods of accommodation. We created a survey for both people with disabilities and law enforcement officers in order to analyze their interactions holistically and determine where improvements may be made. By surveying both the civilian and the law enforcement side of these interactions, we were able to see where problems are perceived and whether the other side is aware of those problems. From this, we created a series of recommendations which aim to ameliorate these issues and begin steps toward a safer future.


 

This event is generously sponsored by:

  • National Science Foundation Research Traineeship Program in Sensory Science: Optimizing the Information Available for Mind and Brain (Grant No. DGE-1734815)
  • University of Minnesota Student Unions & Activities Grants Initiative (Grant No. 9371)
  • Coca-Cola's Academic Grant Program