FnS: Analysis of Multilingual NLP Models

If you have a question about this talk, please contact Mirco Giacobbe.

Transformer-based models like BERT and GPT-3 currently dominate the field of natural language processing (NLP), achieving state-of-the-art results on downstream tasks such as named-entity recognition, question answering, and sentiment analysis. Given the size and complexity of these models, it is difficult to understand how a particular input produces a given output. This is the concern of interpretability, which aims to understand how an algorithm arrives at a prediction. For a model to be interpretable, it must provide explanations that are both understandable and faithful to the reasoning behind the model's decision. Our current efforts investigate what linguistic knowledge these models encode through the use of probes. This talk describes efforts to understand how such models handle mixed-language text, using auxiliary classifiers to identify which linguistic phenomena the models encode.
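The probing technique the abstract mentions can be sketched in a few lines: take frozen representations from a model and train a small auxiliary (linear) classifier on top of them; if the classifier succeeds, the property is linearly decodable from the representations. The sketch below is illustrative only, with synthetic vectors standing in for real BERT embeddings and a synthetic binary property standing in for a linguistic label; all names and data here are hypothetical, not from the talk.

```python
import math
import random

random.seed(0)

# Stand-in for frozen contextual embeddings: 100 vectors of dimension 4.
# (In a real probe these would come from a pretrained model's hidden states.)
DIM, N = 4, 100
X = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N)]
# Synthetic linguistic property: here it depends only on the first dimension,
# so a linear probe should recover it easily.
y = [1.0 if x[0] > 0 else 0.0 for x in X]

# Linear probe: logistic regression trained by batch gradient descent.
w = [0.0] * DIM
b = 0.0
lr = 0.5
for _ in range(500):
    grad_w = [0.0] * DIM
    grad_b = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
        for j in range(DIM):
            grad_w[j] += (p - yi) * xi[j]   # log-loss gradient w.r.t. weights
        grad_b += p - yi
    for j in range(DIM):
        w[j] -= lr * grad_w[j] / N
    b -= lr * grad_b / N

def predict(xi):
    """Probe decision: positive score -> property present."""
    return 1.0 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0.0

acc = sum(predict(xi) == yi for xi, yi in zip(X, y)) / N
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy is taken as evidence that the representations encode the property; comparing probe accuracy across layers or across monolingual and mixed-language inputs is the kind of analysis the talk describes.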

This talk is part of the Facts and Snacks series.


Talks@bham, University of Birmingham.
talks@bham is based on talks.cam from the University of Cambridge.