FnS: Analysis of Multilingual NLP Models
If you have a question about this talk, please contact Mirco Giacobbe.

Transformer-based models like BERT and GPT-3 currently dominate the field of natural language processing (NLP), achieving state-of-the-art results on a number of downstream tasks, such as named-entity recognition, question answering, and sentiment analysis. Given the size and complexity of these models, it is difficult to understand how a particular input leads to a particular output. This is the concern of interpretability, which aims to explain how a model arrives at a given prediction. For a model to be interpretable, it must provide explanations that are not only understandable but also faithful to the reasoning behind the model's decision. Our current efforts investigate what linguistic knowledge these models encode through the use of probes. This talk describes efforts to understand how these models handle mixed-language text, using auxiliary classifiers to identify which linguistic phenomena the models encode.

This talk is part of the Facts and Snacks series.
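To make the probing idea concrete, the sketch below trains a small auxiliary classifier (a linear probe) on frozen token embeddings to test whether a linguistic property — here, token-level language identity, as would matter for code-switched text — is linearly recoverable. This is an illustrative assumption-laden sketch, not the speaker's actual setup: the synthetic Gaussian vectors stand in for real contextual embeddings from a model like BERT, and the English/Spanish labels are hypothetical.

```python
# Sketch of a diagnostic probe: an auxiliary linear classifier trained on
# frozen embeddings. High probe accuracy suggests the embeddings encode
# the probed property (here, a hypothetical token-level language ID).
# Synthetic vectors stand in for real transformer token embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 32            # embedding dimensionality (toy; BERT-base uses 768)
n_per_class = 200   # tokens per language in the toy dataset

# Pretend the encoder separates the two languages along one direction.
mu_en = rng.normal(0, 1, dim)
mu_es = -mu_en
X = np.vstack([rng.normal(mu_en, 1.0, (n_per_class, dim)),
               rng.normal(mu_es, 1.0, (n_per_class, dim))])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = "en", 1 = "es"

# The probe itself: a simple logistic-regression classifier.
probe = LogisticRegression(max_iter=1000).fit(X, y)
acc = probe.score(X, y)
print(f"probe accuracy: {acc:.2f}")
```

In practice one would extract per-token embeddings from a frozen pretrained model, keep the probe deliberately simple (so that good accuracy reflects information in the embeddings rather than the probe's own capacity), and evaluate on held-out tokens.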