PAPIs 2018


Applying ML in the real world
Tuesday, October 16
 

11:00am

AI for Software Testing with Deep Learning: Is it possible?
Using AI to test software is an emerging field in software engineering, but how to do it effectively remains an open problem. In this presentation, we will describe how Convolutional Neural Networks (CNNs) and Deep Reinforcement Learning can be used for this new endeavor, outlining the challenges, mistakes, and workarounds we encountered while building AI models and systems that can really learn from the software they are testing. We will discuss lessons learned from using pre-trained CNN models, image detection APIs, and CNNs built from scratch for this purpose.
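
As a rough illustration of the pre-trained CNN idea mentioned in the abstract (a sketch only, not the speaker's actual system), the snippet below uses a pre-trained ResNet-18 from torchvision as a feature extractor to flag visual regressions between UI screenshots; the file names and distance threshold are hypothetical.

```python
# Hypothetical sketch: a pre-trained CNN as a feature extractor for comparing
# UI screenshots during automated testing. Names and thresholds are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained ResNet-18 with the classification head removed, used as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return a feature vector for a UI screenshot."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

def screens_differ(before_png, after_png, threshold=0.15):
    """Flag a possible visual regression if two screenshots are far apart in feature space."""
    a, b = embed(before_png), embed(after_png)
    distance = 1 - torch.nn.functional.cosine_similarity(a, b, dim=0).item()
    return distance > threshold

# Example (hypothetical files): compare the screen captured before and after a UI action.
# if screens_differ("login_expected.png", "login_actual.png"):
#     print("Possible UI regression detected")
```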

Speakers

Emerson Bertolo

Data Scientist, Stefanini
Data Scientist at Stefanini Rafael Security & Defense Company, with in-depth expertise in data modeling and data extraction across a variety of business scenarios and constraints. For the last two years, I have dived deep into Machine Learning and Deep Learning, building AI models using a...


Tuesday October 16, 2018 11:00am - 11:20am
Horace Mann

11:30am

Genetic Programming in the Real World: A Short Overview
ML is now a commercial and industrial technology, and while many successful algorithms exist, there are still areas that require new developments. For instance, one issue with many ML methods is that they generate black-box models. Genetic Programming (GP) generates symbolic models and expressions that can be used in many different domains. However, GP is not yet widely used by ML practitioners; it is still mostly an academic tool, but this is changing. This talk will present a short overview of how GP can be used to solve ML tasks, intended as a starting point for applied researchers and developers.
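
To make the idea of symbolic models concrete, here is a minimal, self-contained genetic programming sketch (illustrative only, not the speaker's implementation): it evolves expression trees by truncation selection and subtree mutation to fit the toy target y = x^2 + x.

```python
# Minimal genetic programming sketch for symbolic regression; toy example only.
import random, operator

OPS = {"add": (operator.add, 2), "sub": (operator.sub, 2), "mul": (operator.mul, 2)}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree of operators and terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return [op] + [random_tree(depth - 1) for _ in range(OPS[op][1])]

def evaluate(tree, x):
    """Recursively evaluate an expression tree at input x."""
    if not isinstance(tree, list):
        return x if tree == "x" else tree
    fn, _ = OPS[tree[0]]
    return fn(*(evaluate(child, x) for child in tree[1:]))

def fitness(tree, xs, ys):
    """Mean squared error of the evolved expression against the data (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(2)
    i = random.randrange(1, len(tree))
    return tree[:i] + [mutate(tree[i])] + tree[i + 1:]

# Toy data for the target function y = x**2 + x
xs = [i / 10 for i in range(-20, 21)]
ys = [x * x + x for x in xs]

population = [random_tree() for _ in range(200)]
for gen in range(30):
    population.sort(key=lambda t: fitness(t, xs, ys))
    survivors = population[:50]  # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=lambda t: fitness(t, xs, ys))
print("best expression:", best, "MSE:", round(fitness(best, xs, ys), 4))
```

The evolved result is a readable expression tree rather than an opaque set of weights, which is the interpretability advantage the abstract alludes to.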

Speakers

Leonardo Trujillo

Research Professor, Instituto Tecnologico de Tijuana
Received a degree in Electronic Engineering (2002) and a Master's in Computer Science (2004) from the Instituto Tecnológico de Tijuana, Mexico. He also received a doctorate in Computer Science from the CICESE research center in Ensenada, Mexico (2008), developing Genetic Programming...


Tuesday October 16, 2018 11:30am - 11:50am
Horace Mann

12:00pm

Machine Learning Interpretability in the GDPR Era
Despite breakthroughs in statistical performance, the widespread adoption of algorithmic decision making has also led to a rise in “black box” machine learning with unintended negative consequences. In response, attention towards methods that improve the interpretability and understanding of machine learning has also increased. These methods are useful not only for explaining how decisions are made, but also for improving models and ultimately gaining trust in adopting machine learning systems. While using interpretable machine learning methods has innate advantages, the explicit requirement of interpretability has only recently been formalized as part of the General Data Protection Regulation (GDPR), implemented by the EU as of 25 May 2018. This regulation, which protects the privacy and usage of EU citizens' data, specifically outlines a “right to explanation” with regard to algorithmic decision making. This talk explores the definition of interpretability in machine learning, the trade-offs with complexity and performance, and surveys the major methods used to interpret and explain machine learning models in the context of GDPR compliance.
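
As one concrete example of the kind of model-agnostic interpretation method such a survey typically covers (not necessarily the ones discussed in this talk), here is a hedged sketch of permutation feature importance with scikit-learn; the dataset and model are placeholders.

```python
# Sketch of permutation feature importance: measure how much held-out accuracy
# drops when each feature is randomly shuffled, breaking its link to the target.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An otherwise "black box" ensemble model...
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# ...explained by permuting one feature at a time and recording the score drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

feature_names = load_breast_cancer().feature_names
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```

The per-feature importances provide a simple, model-agnostic starting point for the kind of explanation a "right to explanation" might require, though they do not by themselves explain individual decisions.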

Speakers

Gregory Antell

Product Manager & Machine Learning Scientist, BigML


Tuesday October 16, 2018 12:00pm - 12:20pm
Horace Mann

2:00pm

Facial Recognition Adversarial Attacks, Policy and Choice
What are the policy and societal implications of the unprecedented capability for automated, real-time identification and tracking of individuals? What tools exist, or could exist, for registering and enforcing user choice? What should public policy be around government and private use of biometric data? We demonstrate the technical feasibility of facial recognition adversarial attacks, describe our FOIA request about federal use of facial recognition at airports and borders, and invite community discussion and technical contributions to our open-source prototype.
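
For a sense of what an adversarial attack on an image classifier looks like in code, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the model, image file, and epsilon are illustrative placeholders and not the speakers' prototype.

```python
# FGSM sketch: perturb an image so the classifier's prediction changes while the
# change stays nearly invisible to a human observer. Placeholders throughout.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def fgsm_attack(image_path, epsilon=0.03):
    """Return the model's label before and after an FGSM perturbation."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    original_label = logits.argmax(dim=1)

    # Step in the direction of the sign of the input gradient to maximize
    # the loss of the current prediction.
    loss = torch.nn.functional.cross_entropy(logits, original_label)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    adversarial_label = model(x_adv).argmax(dim=1)
    return original_label.item(), adversarial_label.item()

# Example (hypothetical file): before, after = fgsm_attack("face.jpg")
```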

Speakers

Gretchen Greene

Greene Strategy and Analytics / MIT Media Lab
Gretchen Greene, founder and CEO of Greene Strategy and Analytics, is a computer vision scientist, machine learning engineer and lawyer advising governments and private companies on AI use, strategy and policy. Greene has been interviewed by Forbes China, the Economist and the BBC...


Tuesday October 16, 2018 2:00pm - 2:20pm
Horace Mann

2:30pm

Developing and Deploying ML Algorithms in a Clinical Setting
The development and deployment of machine learning models is fraught with complexity exceeding that typically found in traditional software. Operating in a clinical environment introduces further difficulties that must be resolved to produce a successful product. In this talk, we will discuss the challenges of applying machine learning to medical imaging and highlight potential solutions to these problems.

Speakers

Neil Tenenholtz

Director of Machine Learning, MGH & BWH Center for Clinical Data Science
Neil Tenenholtz is the Director of Machine Learning at the MGH & BWH Center for Clinical Data Science, where his responsibilities include the training of novel deep learning models for clinical diagnosis and the development of robust infrastructure for their deployment in the clinical...


Tuesday October 16, 2018 2:30pm - 2:50pm
Horace Mann

3:00pm

The Right Amount of Trust for AI
The key to building systems that are integrated into people's lives is trust. If you don't have the right amount of trust, you open the system up to disuse and misuse. We will discuss the building blocks of AI from a product/design perspective, what trust is, how it is gained and, perhaps more importantly, lost, and techniques you can use day-to-day to build trusted AI products. We will reference real examples from academia, industry, and my work at Philosophie.

Find the presentation here:

https://goo.gl/B2TQt6

Speakers

Chris Butler

Chief Product Architect, IPsoft
Chris Butler is IPsoft's Chief Product Architect. He has over 19 years of product and business development experience at companies like Microsoft, KAYAK, and Waze. He was first introduced to AI through graph theory and genetic algorithms during his Computer Systems Engineering...


Tuesday October 16, 2018 3:00pm - 3:20pm
Horace Mann