Human listeners are remarkably adept at recognizing words and extracting sentence meaning despite considerable variability in the speech signal. This ability relies on dynamic neural processes that decode acoustic signals into meaningful linguistic units. To explore the processes underlying speech understanding, our lab employs electroencephalography (EEG) alongside traditional behavioral methods, allowing us to examine how perceptual and cognitive processes unfold in real time at multiple levels (auditory, phonetic, lexical, and semantic). We are particularly interested in how listeners use linguistic information and allocate cognitive resources during and after speech presentation, and how these processes differ across individuals (e.g., with differences in cognitive skills and age).