In the LeAP lab we're trying to figure out how people make sense of the world based on noisy, ambiguous, and variable sensory signals. We're particularly interested in how experience with the world shapes perceptual systems, and in the ways that perception is entangled with learning, memory, and adaptation.
A particular challenge of perception is that the meaning of any given sensory signal (an image, a sound, etc.) often depends on the context in which it occurs. This means that effective perception requires learning which context you're in, remembering previous experiences with the same or other contexts, and adapting your perceptual system to the demands of the current context. A context can be many things. The sights, sounds, and smells of a busy city street are very different from those of a quiet forest. For speech perception, it might be a particular person's way of speaking or their idiosyncratic accent, which affects how words get turned into sound waves. Different tasks can also correspond to different contexts: crossing a busy street and hailing a taxi require you to extract very different information from the same sensory signals.
To better understand how perception functions in a variable, multi-task world, we use a wide range of methods, from behavioral experiments with humans, to simulations of computational models of perception, to measurements of brain activity.