Fwd: LTI Colloquium, September 25th



George Duncan
Emeritus Professor of Statistics, Carnegie Mellon University
georgeduncanart.com
See posts on Facebook, Twitter, and Instagram
Land: (505) 983-6895  
Mobile: (505) 469-4671
 
My art theme: Dynamic exposition of the tension between matrix order and luminous chaos.

"Attempt what is not certain. Certainty may or may not come later. It may then be a valuable delusion."

From "Notes to myself on beginning a painting" by Richard Diebenkorn. 

"It's that knife-edge of uncertainty where we come alive to our truest power." Joanna Macy.




---------- Forwarded message ---------
From: John Friday <[hidden email]>
Date: Mon, Sep 21, 2020 at 12:44 PM
Subject: LTI Colloquium, September 25th
To: <[hidden email]>


Hi Everyone,

This week we have a double feature at the LTI Colloquium: both Shruti Rijhwani and Zirui Wang will be presenting talks on Friday, September 25th, from 1:30 to 2:50 PM EDT. The talks will be held on Zoom (passcode: 883155).

Here's the information on this week's speakers and their topics.

Shruti Rijhwani is a PhD student at the Language Technologies Institute at Carnegie Mellon University. Her primary research interest is multilingual natural language processing, with a focus on low-resource and endangered languages. Her research is supported by a Bloomberg Data Science Ph.D. Fellowship. Much of her published work focuses on improving named entity recognition and entity linking for low-resource languages and domains.

Title: Zero-shot Neural Transfer for Cross-lingual Entity Linking


Abstract: Cross-lingual entity linking maps a named entity in a source language to its corresponding entry in a structured knowledge base that is in a different (target) language. While previous work relies heavily on bilingual lexical resources to bridge the gap between the source and the target languages, these resources are scarce or unavailable for many low-resource languages. To address this problem, we investigate zero-shot cross-lingual entity linking, in which we assume no bilingual lexical resources are available in the source low-resource language. Specifically, we propose pivot-based entity linking, which leverages information from a high-resource "pivot" language to train character-level neural entity linking models that are transferred to the source low-resource language in a zero-shot manner. With experiments on nine low-resource languages and transfer through a total of 54 languages, we show that our proposed pivot-based framework improves entity linking accuracy by 17% (absolute) on average over the baseline systems in the zero-shot scenario. Further, we also investigate the use of language-universal phonological representations, which improves average accuracy by 36% (absolute) when transferring between languages that use different scripts.
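
For a concrete picture of the pivot-based idea, here is a minimal Python (PyTorch) sketch, not the speaker's actual code: a character-level encoder is trained to score (mention, KB entry) pairs in a high-resource pivot language and then applied unchanged to mentions in the low-resource source language. The names CharEncoder and link_score are illustrative assumptions.

import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Encode a string, character by character, into a fixed-size vector."""
    def __init__(self, vocab_size, char_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, char_ids):                 # (batch, seq_len)
        emb = self.embed(char_ids)               # (batch, seq_len, char_dim)
        _, (h, _) = self.lstm(emb)               # h: (2, batch, hidden_dim)
        return torch.cat([h[0], h[1]], dim=-1)   # (batch, 2 * hidden_dim)

def link_score(encoder, mention_ids, entity_ids):
    """Cosine similarity between a mention and a KB entry name."""
    return nn.functional.cosine_similarity(encoder(mention_ids),
                                           encoder(entity_ids))

# Toy usage: batches of padded character-id tensors stand in for strings.
enc = CharEncoder(vocab_size=100)
mentions = torch.randint(1, 100, (2, 12))
entities = torch.randint(1, 100, (2, 12))
print(link_score(enc, mentions, entities).shape)   # torch.Size([2])

Because the encoder only ever sees characters, training pairs can come entirely from the pivot language; at test time the same encoder scores source-language mentions against knowledge-base entries with no source-language supervision, which is what makes the transfer zero-shot.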


Zirui Wang is currently a PhD student at the Language Technologies Institute (LTI). He works on transfer learning, meta-learning, and multilingual models. He is advised by Jaime Carbonell, Yulia Tsvetkov, and Emma Strubell.


Title: Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework


Abstract: Learning multilingual representations of text has proven a successful method for many cross-lingual transfer learning tasks. There are two main paradigms for learning such representations: (1) alignment, which maps different independently trained monolingual representations into a shared space, and (2) joint training, which directly learns unified multilingual representations using monolingual and cross-lingual objectives jointly. In this work, we first conduct direct comparisons of representations learned using both of these methods across diverse cross-lingual tasks. Our empirical results reveal a set of pros and cons for both methods and show that the relative performance of alignment versus joint training is task-dependent. Stemming from this analysis, we propose a simple and novel framework that combines these two previously mutually exclusive approaches. We show that our proposed framework alleviates the limitations of both approaches and can generalize to contextualized representations such as Multilingual BERT.
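
For readers unfamiliar with the alignment paradigm mentioned in the abstract, one standard instantiation (not necessarily the specific method compared in the talk) maps two independently trained monolingual embedding matrices into a shared space with the closed-form orthogonal Procrustes solution over a seed bilingual dictionary. Below is a minimal NumPy sketch; all variable names are illustrative.

import numpy as np

def procrustes_align(X_src, Y_tgt):
    """Orthogonal W minimizing ||X_src @ W - Y_tgt||_F.

    X_src, Y_tgt: (n_pairs, dim) arrays of embeddings for a seed
    dictionary, where row i of X_src translates to row i of Y_tgt.
    """
    # SVD of the cross-covariance matrix gives the closed-form solution.
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt                      # (dim, dim), orthogonal

# Toy check: make the "target" space an exact rotation of the "source"
# space, so alignment should recover that rotation.
rng = np.random.default_rng(0)
dim, n_pairs = 50, 2000
X = rng.standard_normal((n_pairs, dim))
true_W, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
Y = X @ true_W
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))   # True

Joint training, by contrast, learns a single set of representations over the combined corpora with shared objectives, so no post-hoc mapping step is needed; the trade-offs between these two paradigms are exactly what the talk's comparative study examines.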


Please reach out to me if you have any questions.


Best wishes,

John Friday




   




Attachment: Zirui & Shruti_poster.pdf (355K)