Description
NLP research has long fluctuated between two extreme models of computation: finite computers and universal computers. A practical solution often combines the two extremes, because formally powerful models are simulated by physical machines that only approximate them. This is especially true of recurrent neural networks (RNNs), whose activation vectors are the key to a deeper understanding of their emergent finite-state behavior. However, we currently have only a very loose characterization of the finite-state property in neural networks.

To construct a hypothesis about a possible bottom-up organization of the state space of RNN activation vectors, I compare neural networks with bounded Turing machines and finite-state machines, and cite recent results on finite-state models for semantic graphs. These models enjoy the attractive closure properties of weighted finite-state machines.
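As a purely illustrative sketch of what a bottom-up analysis of that state space might look like (this is an assumption-laden toy, not the method proposed in the talk), the example below runs a small random Elman-style RNN over binary strings and clusters the resulting activation vectors with plain k-means, treating each cluster as a candidate discrete state. Every weight, size, and parameter here is invented for the illustration.

```python
# Hypothetical sketch: cluster RNN hidden states into k candidate
# "states" of an emergent finite-state machine. All parameters invented.
import numpy as np

rng = np.random.default_rng(0)
H, V, K = 8, 2, 4                        # hidden size, alphabet size, clusters
Wx = rng.normal(scale=0.5, size=(H, V))  # input-to-hidden weights
Wh = rng.normal(scale=0.5, size=(H, H))  # hidden-to-hidden weights

def run_rnn(symbols):
    """Return the sequence of hidden states for a string over {0, 1}."""
    h, states = np.zeros(H), []
    for sym in symbols:
        x = np.eye(V)[sym]               # one-hot encoding of the symbol
        h = np.tanh(Wx @ x + Wh @ h)     # simple Elman-style update
        states.append(h.copy())
    return states

# Collect activation vectors over many random input strings.
data = np.array([h for _ in range(200)
                   for h in run_rnn(rng.integers(0, V, size=10))])

# Plain k-means: each centroid stands in for one discrete candidate state.
centroids = data[rng.choice(len(data), K, replace=False)]
for _ in range(50):
    labels = np.argmin(((data[:, None] - centroids) ** 2).sum(axis=2), axis=1)
    centroids = np.array([data[labels == k].mean(axis=0) if (labels == k).any()
                          else centroids[k] for k in range(K)])

print("points per candidate state:", np.bincount(labels, minlength=K))
```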
At the end of the talk, I sketch my vision of neural networks that perform finite-state graph transductions in real time. Such transductions would have a wide variety of applications in machine translation and in semantic information retrieval over big data.
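For readers unfamiliar with finite-state transduction, here is a hedged miniature of the machinery involved: a weighted string transducer (the simplest instance; the talk targets graph transductions) whose weights add along each path, tropical-semiring style. The alphabet, arcs, and weights are assumptions made up for this example.

```python
# Invented weighted finite-state transducer: state -> list of arcs
# (input symbol, output symbol, weight, next state). Weights add along a path.
ARCS = {
    0: [("a", "x", 1.0, 0), ("b", "y", 0.5, 1)],
    1: [],
}
INIT, FINALS = 0, {1}

def transduce(s):
    """Enumerate (output string, path weight) over all accepting paths."""
    results = []
    def walk(state, i, out, w):
        if i == len(s):
            if state in FINALS:              # accept only in a final state
                results.append(("".join(out), w))
            return
        for in_sym, out_sym, arc_w, nxt in ARCS.get(state, []):
            if in_sym == s[i]:               # follow arcs matching the input
                walk(nxt, i + 1, out + [out_sym], w + arc_w)
    walk(INIT, 0, [], 0.0)
    return results

print(transduce("aab"))  # -> [('xxy', 2.5)]
```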
| Period | 10 Nov 2017 |
|---|---|
| Held at | Information Sciences Institute, University of Southern California, Los Angeles, United States |
| Degree of recognition | Local |
Related content
Activities

- Henrik Björklund
  Activity: Types of hosting a visitor › Hosted academic visit at the University of Helsinki
Impacts

- Graph Encoding Schemes for Syntactic and Semantic Labeling in Public Information Retrieval Infrastructures
  Impact: Other impacts, Public services and societal functioning