PAPIs 2018 has ended
New ML techniques
Wednesday, October 17
 

11:00am EDT

Monitoring AI with AI
Environment misconfiguration or upstream data pipeline inconsistencies can silently kill model performance.
Common production incidents include:
- Data drifts, new data, wrong features
- Vulnerability issues, adversarial attacks
- Concept drifts, new concepts, expected model degradation
- Dramatic unexpected drifts
- Biased training sets / training issues
- Performance issues
In this talk we'll discuss a solution, tooling, and architecture that allow machine learning engineers to be involved in the delivery phase and take ownership of the deployment and monitoring of machine learning pipelines.
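As a rough illustration of the kind of data-drift check discussed in this talk (not Hydrosphere.io's actual tooling, which is not shown here), a minimal sketch that compares a production feature's distribution against the training-time distribution with a two-sample Kolmogorov-Smirnov test:

    # Minimal data-drift check: compare a live feature's distribution to the
    # training distribution with a two-sample KS test (illustrative only).
    import numpy as np
    from scipy.stats import ks_2samp

    def drift_alert(train_values, live_values, alpha=0.01):
        """Return (drifted, KS statistic) for one feature."""
        stat, p_value = ks_2samp(train_values, live_values)
        return p_value < alpha, stat

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=10_000)   # feature seen at training time
    live = rng.normal(0.5, 1.0, size=1_000)     # same feature in production, shifted
    drifted, stat = drift_alert(train, live)
    print(f"drift detected: {drifted}, KS statistic: {stat:.3f}")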

Speakers

Iskandar Sitdikov

ML/Software engineer, Hydrosphere.io
Iskandar Sitdikov is an ML engineer at Hydrosphere.io with a rich practical background in both Machine Learning and Big Data. His latest work focuses on researching and prototyping data anomaly and concept drift detection methods for ML in production.


Wednesday October 17, 2018 11:00am - 11:20am EDT
Horace Mann

11:30am EDT

Reasoning About Uncertainty at Scale
Freebird models US domestic flights in a way that captures uncertainty at every step. We present a case study of using Bayesian modelling and inference to directly model behavior of aircraft arrivals and departures, focusing on the uncertainty in those predictions. Along the way we will discuss theoretical considerations, highlighting what can go wrong, while emphasizing practical implications around scaling to large data sets.
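Freebird's actual models are not reproduced here; as a toy illustration of keeping a full posterior instead of a point estimate, here is a conjugate Beta-Binomial sketch for the disruption rate on a single hypothetical route (all numbers made up):

    # Toy Bayesian treatment of a disruption rate: report a posterior and a
    # credible interval rather than a single number. Not Freebird's model.
    from scipy.stats import beta

    prior_a, prior_b = 1.0, 9.0        # weak prior: disruptions are fairly rare
    disrupted, on_time = 12, 388       # hypothetical observations for one route

    post = beta(prior_a + disrupted, prior_b + on_time)
    lo, hi = post.ppf([0.05, 0.95])    # 90% credible interval
    print(f"posterior mean disruption rate: {post.mean():.3f}")
    print(f"90% credible interval: ({lo:.3f}, {hi:.3f})")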

Speakers

Max Livingston

Data Scientist, Freebird
Max Livingston is a data scientist at Freebird, where he uses Bayesian machine learning techniques to model flight disruptions and last-minute prices. He graduated from Wesleyan University with high honors in Economics and worked in the Research group of the New York Fed before making...


Wednesday October 17, 2018 11:30am - 11:50am EDT
Horace Mann

12:00pm EDT

Creating Robust Interpretable NLP Systems with Attention
To build robust NLP models that reason well from language, architectures should work more like the human brain than like pure pattern recognition. Attention is an interpretable type of neural network layer, loosely based on attention in humans, that has recently enabled a powerful alternative to RNNs. Attention-based models have produced new techniques and state-of-the-art performance on many language modeling tasks. This presentation introduces attention layers and explains why and how they have been used to revolutionize NLP.
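As a minimal sketch of the mechanism this talk introduces (scaled dot-product attention in plain NumPy; dimensions are arbitrary), note that the softmax weights are exactly the interpretable part, showing where each query "looked":

    # Scaled dot-product attention: each query attends over all keys; the
    # softmax weights form an interpretable attention map.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # (n_queries, n_keys)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
        return weights @ V, weights                      # output and attention map

    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 64))    # e.g. 4 decoder positions
    K = rng.normal(size=(10, 64))   # e.g. 10 encoder positions
    V = rng.normal(size=(10, 64))
    out, attn = scaled_dot_product_attention(Q, K, V)
    print(out.shape, attn.shape)    # (4, 64) (4, 10)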

Speakers

Alexander Wolf

Data Scientist, Dataiku
Alex is a Data Scientist at Dataiku, working with clients around the world to organize their data infrastructures and deploy data-driven products into production. Prior to that, he worked on software and business development in the tech industry and studied Computer Science and Statistics...


Wednesday October 17, 2018 12:00pm - 12:20pm EDT
Horace Mann

2:00pm EDT

Architectures for big scale 2D imagery
I will present research that I conducted during my Ph.D. at University College London and in collaboration with Google. My primary interest lies in the development of neural architectures for large-scale 2D imagery problems. I will present a recently published analysis of different upsampling methods in the decoder part of visual architectures, together with a more recent, ongoing extension to GANs. I will also discuss attention mechanisms for text recognition and review the kinds of applications where they are useful (e.g. automatically updating Google Maps from Google Street View imagery).
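The published analysis itself is not reproduced here; as a hedged sketch of two decoder upsampling choices such an analysis typically compares, here are a transposed convolution and a bilinear-resize-then-convolve block in PyTorch (channel counts and sizes are arbitrary):

    # Two common ways to upsample feature maps in a decoder (illustrative only;
    # layer sizes are arbitrary and do not reproduce the published experiments).
    import torch
    import torch.nn as nn

    x = torch.randn(1, 64, 16, 16)   # (batch, channels, height, width)

    # Option 1: learned transposed convolution (can introduce checkerboard artifacts)
    deconv = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
    y1 = deconv(x)                   # -> (1, 32, 32, 32)

    # Option 2: fixed bilinear resize followed by a regular convolution
    resize_conv = nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(64, 32, kernel_size=3, padding=1),
    )
    y2 = resize_conv(x)              # -> (1, 32, 32, 32)

    print(y1.shape, y2.shape)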

Speakers

Zbigniew Wojna

Founder, Tensorflight
Zbigniew Wojna is a deep learning researcher and founder of TensorFlight Inc., a company providing instant remote commercial property inspection (assessing risk factors for reinsurance enterprises) based on satellite and street-view imagery. Zbigniew is currently in the final stage of his...


Wednesday October 17, 2018 2:00pm - 2:20pm EDT
Horace Mann

2:30pm EDT

Would you have clicked on what we would have recommended?
In this talk, we describe recent work on the offline estimation of recommender system A/B tests using counterfactual reasoning techniques. We can determine whether our customers would have clicked on what we would have recommended by adding stochasticity to our recommendations. This ensures a non-zero probability of having shown our new recommendations at some point in the past, which we can leverage using a technique known as Pareto-smoothed importance sampling. This allows us to create a low-bias, low-variance estimator of how our recommender systems would have performed had they been deployed.
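The talk relies on Pareto-smoothed importance sampling; as a simplified illustration of the underlying counterfactual estimator (plain clipped, self-normalized importance weights rather than PSIS, with entirely hypothetical logged data):

    # Simplified offline estimate of a new recommender's click-through rate from
    # logged data. The talk uses Pareto-smoothed importance sampling; this sketch
    # uses plain clipped importance weights only to show the idea.
    import numpy as np

    def offline_ctr(clicks, logging_propensity, new_policy_propensity, clip=10.0):
        """Estimate the CTR the new policy would have achieved on logged impressions.

        clicks: 0/1 outcomes actually observed.
        logging_propensity: probability the logged (stochastic) policy showed that item.
        new_policy_propensity: probability the new policy would have shown it.
        """
        w = np.clip(new_policy_propensity / logging_propensity, 0.0, clip)
        return np.sum(w * clicks) / np.sum(w)

    rng = np.random.default_rng(0)
    n = 10_000
    p_log = rng.uniform(0.05, 0.5, size=n)   # stochastic logging policy
    p_new = rng.uniform(0.05, 0.5, size=n)   # candidate policy
    clicks = rng.binomial(1, 0.1, size=n)    # hypothetical outcomes
    print(f"estimated CTR under new policy: {offline_ctr(clicks, p_log, p_new):.3f}")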

Speakers

Peter B. Golbus

Senior Data Scientist, Wayfair
Peter B. Golbus is a Senior Data Scientist at Wayfair. Peter joined Wayfair directly from his Ph.D. program at Northeastern University where he studied the offline evaluation of search engines with Javed A. Aslam. Four years later, he is still at Wayfair, and is now studying the offline...


Wednesday October 17, 2018 2:30pm - 2:50pm EDT
Horace Mann
 


