PAPIs 2018 has ended
Wednesday, October 17 • 12:00pm - 12:20pm
Creating Robust Interpretable NLP Systems with Attention


To build robust NLP models that can reason well from language, architectures should function more like the human brain than like pure pattern recognition. Attention is an interpretable type of neural network layer, loosely inspired by attention in humans, and it has recently enabled a powerful alternative to RNNs. Attention-based models have produced new techniques and state-of-the-art performance on many language-modeling tasks. This presentation introduces attention layers and explains why and how they have been used to revolutionize NLP.
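The interpretability the abstract mentions comes from the attention weights themselves: each output token carries an explicit probability distribution over the inputs it attended to. As a minimal sketch (not from the talk, assuming NumPy and the standard scaled dot-product formulation used in Transformer models):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Return attended outputs and the attention-weight matrix.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    The weight matrix is what makes the layer interpretable:
    row i shows how much query i attends to each input position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # similarity of each query to each key
    weights = softmax(scores, axis=-1)     # each row is a probability distribution
    return weights @ V, weights

# Toy example: 3 query tokens attending over 4 key/value tokens.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)        # (3, 8)
print(w.sum(axis=-1))   # each row sums to 1
```

Inspecting `w` for a trained model is the usual way attention is visualized (e.g. as a heatmap over input tokens), which is the "interpretable" property contrasted with opaque RNN hidden states.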

Speakers

Alexander Wolf

Data Scientist, Dataiku
Alex is a Data Scientist at Dataiku, working with clients around the world to organize their data infrastructures and deploy data-driven products into production. Prior to that, he worked on software and business development in the tech industry and studied Computer Science and Statistics...


Wednesday October 17, 2018 12:00pm - 12:20pm
Horace Mann
