Human syntactic processing is generally remarkably robust and accurate. In this talk, I will survey recent psycholinguistic research on sentence processing that offers a glimpse into how human parsing works. The talk will focus on some crucial properties of human processing (such as incrementality and prediction), but will also highlight some informative cases where humans struggle to analyse a sentence correctly. I will also briefly describe current psycholinguistic frameworks for modelling human processing, which attempt to account for these cases.
Until recently, a large fraction of constituency parsing research consisted of finding clever ways of "augmenting" a base treebank grammar with extra information to work around the limitations of dynamic-programming-based parsing algorithms. Nowadays, the art of grammar engineering for statistical parsing is slipping away, as neural network models can achieve state-of-the-art performance with essentially no grammar engineering. What's going on? In this talk, I'll explore this trend and reflect on the role of grammar in the modern era. Along the way, I'll also touch on some related issues affecting parsing (both syntactic and semantic) that I've encountered during my time in industry, and discuss a few lessons learned.