Lecture on ‘How a brief exposure to music influences our judgment and decision making’ by Prof. Joydeep Bhattacharya

November 18, 2020

The talk was conducted on 18 November 2020 by Prof. Joydeep Bhattacharya of Goldsmiths, University of London. He covered the following in his talk:

Abstract: Decision making is an integral part of our lives; we make judgments and decisions at every step, ranging from mundane perceptual decisions to complex cognitive ones. Traditionally, it is assumed that our decision making is purely cognitive and rational, devoid of emotional influence. But recent evidence suggests that emotion and cognition are intricately related. In this talk, I will discuss and present experimental findings on how music could influence various types of judgment and decision making. Specifically, I will show how listening to brief musical excerpts could influence how we perceive faces, judge complex pictures, process words, and even judge brightness. I will argue that music, even in short excerpts, can indeed influence a wide range of decision-making processes, and that such cross-modal transfer of musical emotions is largely implicit, i.e. occurring below our level of conscious awareness.

Lecture on ‘Limits to adaptability in SOV languages’ by Dr. Samar Husain

November 11, 2020

The talk was conducted on 11 November 2020. Dr. Samar Husain from IIT Delhi covered the following in his talk:

Abstract: “Processing of Subject-Object-Verb (SOV) languages has been argued to involve robust clause-final verbal prediction. Robust verbal prediction and its maintenance have been shown to facilitate sentence comprehension and have been attributed to the parser’s adaptability to certain typological features (e.g., word order) in such languages. In this talk I will argue that the parser’s adaptability for robust prediction in SOV languages is limited. To this effect, I will provide converging evidence from corpus-based studies, behavioural experiments, and computational modelling. In particular, I will show that as the preverbal linguistic context becomes more complex, comprehension suffers in these languages. This suggests the overarching role of working-memory constraints during sentence comprehension.”

Lecture on ‘fMRI and Machine Learning’ by Dr. Ayan Sengupta

October 23, 2020

The talk was conducted on 23 October 2020. Dr. Sengupta is a Research Affiliate at Cambridge University and an MRI Research Fellow at Royal Holloway, University of London. His research is centred on neuroimaging, specifically functional Magnetic Resonance Imaging (fMRI). His main interests are in ultra-high-field human fMRI and the application of machine learning and other computational modelling techniques to understanding how visual and tactile information is represented in different parts of the brain.

The talk covered the basics of functional MRI analysis and applications of machine learning for decoding the mind.
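The "decoding" approach mentioned above is often done via multi-voxel pattern analysis (MVPA): a classifier is trained to predict which stimulus condition produced a given pattern of voxel activity. The following is a minimal illustrative sketch, not the method used in the talk; the data are synthetic (random patterns standing in for preprocessed fMRI activity maps), and a simple nearest-centroid classifier is assumed in place of the models a real analysis would use.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50
n_trials = 40  # trials per stimulus condition

# Synthetic data: each condition has a distinct mean activity pattern,
# and each trial is that pattern plus Gaussian noise.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
trials_a = pattern_a + rng.normal(0, 1.0, (n_trials, n_voxels))
trials_b = pattern_b + rng.normal(0, 1.0, (n_trials, n_voxels))

# Split each condition's trials into train and test halves.
train_a, test_a = trials_a[:20], trials_a[20:]
train_b, test_b = trials_b[:20], trials_b[20:]

# Nearest-centroid decoder: assign a test pattern to whichever
# condition's mean training pattern it is closer to.
centroid_a = train_a.mean(axis=0)
centroid_b = train_b.mean(axis=0)

def decode(x):
    dist_a = np.linalg.norm(x - centroid_a)
    dist_b = np.linalg.norm(x - centroid_b)
    return "A" if dist_a < dist_b else "B"

correct = sum(decode(x) == "A" for x in test_a) + \
          sum(decode(x) == "B" for x in test_b)
accuracy = correct / (len(test_a) + len(test_b))
print(f"decoding accuracy: {accuracy:.2f}")
```

Above-chance accuracy on held-out trials is taken as evidence that the voxel patterns carry information about the stimulus; real analyses add preprocessing, cross-validation, and permutation tests for significance.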