Predictive Analytics World London
etc.venues, 200 Aldersgate, 17-18 October, 2018
(the sessions are not displayed in order – the exact timing will be available once the full agenda is published)
Data science, if judged as a separate science, exceeds its sisters in truth, breadth, and utility. DS finds truth better than any other science; the crisis in replicability of results in the sciences today is largely due to bad data analysis, performed by amateurs. As for breadth, a data scientist can contribute mightily to a new field with only minor cooperation from a domain expert, whereas the reverse is not so easy. And for utility, data science can fit empirical behavior to provide a useful model where good theory doesn’t yet exist. That is, it can predict “what” is likely even when “why” is beyond reach. But only if we do it right! The most vital data scientist skill is recognizing analytic hazards. With that, we become indispensable.
Over the past 15 years or so, technology companies like Google and Facebook have applied predictive analytics to digital marketing to build targeting and optimisation solutions. Today, with the growth of scaled cloud platforms and machine learning technologies, it has become possible for brands to build their own solutions to these and other marketing challenges. In this session we will explore how machine learning methods such as Markov chains and boosted and ensemble classifiers can be used to predict and optimise the marketing mix, and some of the key technical challenges involved. Gabriel will further show how such predictive models can be used to identify and target customer prospects and, with additional modelling, automate digital buying. The presentation will also cover organisational challenges, such as the need to foster collaboration between data scientists and marketers in order to achieve actionability, and the practical steps you need to take to apply these technologies in your own organisation.
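The Markov-chain idea mentioned above can be sketched in miniature. The journeys, channel names, and probabilities below are invented for illustration; a real attribution pipeline would fit this on millions of customer paths. The "removal effect" measures how much total conversion drops when a channel is deleted from the chain.

```python
from collections import defaultdict

# Hypothetical customer journeys: ordered channel touchpoints ending in
# "conv" (conversion) or "null" (no conversion).
journeys = [
    ["search", "display", "conv"],
    ["display", "search", "conv"],
    ["search", "null"],
    ["display", "null"],
    ["search", "display", "search", "conv"],
]

# Estimate first-order Markov transition probabilities from the journeys
counts = defaultdict(lambda: defaultdict(int))
for path in journeys:
    for a, b in zip(["start"] + path, path):
        counts[a][b] += 1
probs = {s: {t: c / sum(d.values()) for t, c in d.items()}
         for s, d in counts.items()}

def conversion_prob(probs, drop=None, iters=200):
    """P(reaching 'conv' from 'start'); transitions through a dropped
    channel are treated as lost traffic (the 'removal effect')."""
    p = {s: 0.0 for s in probs}
    p.update({"conv": 1.0, "null": 0.0})
    for _ in range(iters):
        for s in probs:
            p[s] = sum(w * p[t] for t, w in probs[s].items() if t != drop)
    return p["start"]

base = conversion_prob(probs)
removal = {ch: 1 - conversion_prob(probs, drop=ch) / base
           for ch in ("search", "display")}
print(base, removal)  # base conversion rate and per-channel removal effect
```

On this toy data 3 of 5 journeys convert, so the baseline is 0.6; the removal effects then give a principled way to credit each channel, which is the input to budget optimisation.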
Expectations of customer experience are being reset every day by digital-native companies. Google, Facebook and Uber leverage real-time customer data to provide a relevant, personalised experience at every touchpoint. When it comes to customer experience, we can no longer benchmark ourselves only against peers in our own industry, as digital has expanded the peer set by blurring industry boundaries. Financial services is no different. The customer's financial journey is becoming increasingly complex, with a multitude of financial services entering the market every day. It has become even more critical to provide a relevant and personalised customer experience in this space. The case studies in this talk will highlight some possibilities (and recent examples) for making customer touchpoints more personalised and engaging through predictive analytics that leverages transaction, credit bureau and digital data.
Bots are one of the AI buzzwords du jour. But what is behind the hype? In this presentation, the major components of a bot will be identified, then a practical example will be built using public data and open-source software. A highlight will be the use of machine learning combined with various forms of active learning, not only to create an initial ontology but also to help make the bot automatically "smarter" over time. The example is built so that it can be shared and extended to a variety of topics.
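The active-learning loop mentioned above can be sketched in its simplest form, uncertainty sampling: repeatedly ask a human to label the pooled example the current model is least sure about. Everything below (the 1-D task, the threshold model, the simulated oracle) is invented for illustration.

```python
import random

random.seed(2)

def train(labeled):
    """Fit a 1-D threshold classifier: midpoint between the closest
    positive and negative labelled points."""
    pos = [x for x, y in labeled if y]
    neg = [x for x, y in labeled if not y]
    return (min(pos) + max(neg)) / 2

pool = [random.random() for _ in range(200)]   # unlabelled examples
oracle = lambda x: x > 0.6                     # simulated human labeller
labeled = [(0.05, False), (0.95, True)]        # two seed labels

for _ in range(15):
    theta = train(labeled)
    # query the unlabelled point closest to the decision boundary --
    # the one the model is most uncertain about
    x = min(pool, key=lambda x: abs(x - theta))
    pool.remove(x)
    labeled.append((x, oracle(x)))

theta = train(labeled)
print(theta)  # homes in on the true boundary 0.6 with only ~17 labels
```

The point of the technique is label efficiency: 17 targeted labels locate the boundary about as well as labelling the whole pool of 200 would.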
At The Washington Post, the comment community is important to the newsroom. The aim is to stimulate meaningful conversations while maintaining a civil and thoughtful comment section. Historically, comment moderation was handled by humans. However, with the rapidly growing volume of online comments, this manual process consumes a large amount of resources. To scale comment moderation, The Washington Post developed ModBot. ModBot contains a set of predictive models trained on tens of thousands of comments with human-moderated labels, framing comment moderation as a classification task. Sam will explain how he and his team built the models, refined them against their moderation criteria, and then deployed ModBot to the production environment for more efficient and economical comment moderation. He will also discuss the challenges they encountered and the solutions for addressing them.
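Framing moderation as text classification can be illustrated with a deliberately tiny sketch. ModBot's real features, models, and training data are not described here; the comments below are invented, and multinomial naive Bayes stands in for whatever models the team actually uses.

```python
import math
from collections import Counter

# Toy human-labelled comments (invented) -- "approve" vs "delete"
train = [
    ("great reporting thank you", "approve"),
    ("thoughtful piece well argued", "approve"),
    ("you are an idiot", "delete"),
    ("idiot take total garbage", "delete"),
]

def fit(examples):
    """Multinomial naive Bayes with add-one smoothing."""
    word_counts = {"approve": Counter(), "delete": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the class with the highest log-posterior for the text."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict(model, "what an idiot"))  # -> "delete" on this toy data
```

The production concerns the talk covers — tens of thousands of labels, multiple models, refinement against editorial criteria, deployment — all sit on top of exactly this classify-then-act framing.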
Transfer learning is a deep learning technique that uses pre-trained networks as starting points for training domain-specific classifiers. This allows powerful baseline deep learning models to be built virtually out of the box for almost any domain, from medical images like X-rays to industrial optical images or satellite imagery. It can be further generalised to non-image datasets such as IoT sensor data by treating multichannel 1-D signals as images. George and Miguel will show how, at Microsoft, they use GPU-enabled Deep Learning Virtual Machines and open-source deep learning frameworks like Keras to build end-to-end intelligent signal classification solutions.
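The core mechanic — freeze a pretrained feature extractor, train only a small head on the new domain — can be shown without any deep learning framework. This is a hedged toy stand-in: in the real workflow the frozen extractor would be, say, a Keras ImageNet base, and the data below is invented.

```python
import math
import random

random.seed(0)

def frozen_features(x):
    """Stand-in for a pretrained network: fixed nonlinear features
    that are never updated during training."""
    return [math.tanh(x[0] + x[1]), math.tanh(x[0] - x[1]), 1.0]  # incl. bias

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Invented domain-specific dataset: label 1 iff x0 + x1 > 0
points = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(200)]
data = [(x, 1 if x[0] + x[1] > 0 else 0) for x in points]

# Train only the head: logistic regression on the frozen features
w = [0.0, 0.0, 0.0]
for _ in range(100):
    for x, y in data:
        f = frozen_features(x)
        p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)))
        w = [wi + 0.5 * (y - p) * fi for wi, fi in zip(w, f)]

accuracy = sum(
    (sigmoid(sum(wi * fi for wi, fi in zip(w, frozen_features(x)))) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(accuracy)
```

Because the extractor's features already separate the classes, the cheap-to-train head reaches high accuracy with little data — the same reason a frozen ImageNet base plus a small dense head makes a strong baseline for X-rays or satellite imagery.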
In this presentation, Hector will cover some recent advances in applying machine learning to healthcare. He will give a brief overview of deep learning and its applications in healthcare, such as diagnostics, care management, decision support and personalised medicine, followed by deeper dives into specific topics such as machine learning on electronic health records and analysing EEGs.
In software development, one of the most often underestimated challenges is technical debt. In machine learning systems, further concerns add to it. In this talk you will learn about the most common pitfalls, with real-life examples and explanations, such as:
– handling concept drift
– being aware of feedback loops
– living with correction cascades
and other challenges.
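Taking the first pitfall as an example, a concept-drift safeguard can be as simple as watching model accuracy over a sliding window. This is a minimal sketch under invented parameters; production systems typically use dedicated detectors such as DDM or ADWIN.

```python
import random
from collections import deque

random.seed(1)

class DriftMonitor:
    """Flag concept drift when recent accuracy over a sliding window
    falls well below a frozen reference level."""

    def __init__(self, window=50, tolerance=0.2):
        self.window = deque(maxlen=window)
        self.reference = None
        self.tolerance = tolerance

    def update(self, correct):
        """Feed one prediction outcome; return True when drift is flagged."""
        self.window.append(1.0 if correct else 0.0)
        if len(self.window) < self.window.maxlen:
            return False
        acc = sum(self.window) / len(self.window)
        if self.reference is None:
            self.reference = acc  # baseline frozen on the first full window
            return False
        return acc < self.reference - self.tolerance

# Simulated model: 90% accurate until t=150, then the concept drifts
# and accuracy collapses to 50%
monitor = DriftMonitor()
drift_at = None
for t in range(300):
    correct = random.random() < (0.9 if t < 150 else 0.5)
    if monitor.update(correct) and drift_at is None:
        drift_at = t
print(drift_at)  # fires some time after the change point at t=150
```

The flag would then trigger retraining or a human review; the window size and tolerance trade detection speed against false alarms.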
Graph-based machine learning is becoming an important trend in Artificial Intelligence, cutting across many other techniques. The world's largest companies are promoting it: for instance, Google's Expander platform combines semi-supervised machine learning with large-scale graph-based learning. Using graphs as the basic representation of data for machine learning has several advantages: (i) the data is already modelled for further analysis, explicitly representing connections and relationships between things and concepts; (ii) graphs can easily combine multiple sources into a single representation and learn over them, creating Knowledge Graphs; (iii) many machine learning algorithms exploit graphs to improve computational performance and result quality. In this Deep Dive, Alessandro will demonstrate these advantages, presenting applications such as recommendation engines and natural language processing that use machine learning over graphs. Concrete scenarios, models and end-to-end infrastructure will be discussed.
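The semi-supervised graph-learning idea behind platforms like Expander can be sketched with label propagation on an invented toy graph: unlabelled nodes repeatedly adopt the average score of their neighbours, while labelled seed nodes stay clamped.

```python
def label_propagation(adj, seeds, iters=100):
    """Propagate seed label scores through the graph until stable."""
    scores = {n: seeds.get(n, 0.5) for n in adj}
    for _ in range(iters):
        for n in adj:
            if n in seeds:
                continue  # seeds keep their known labels
            scores[n] = sum(scores[m] for m in adj[n]) / len(adj[n])
    return scores

# Invented graph: two triangles joined by the bridge c-d;
# node 'a' is labelled 1, node 'f' is labelled 0
adj = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}
scores = label_propagation(adj, {"a": 1.0, "f": 0.0})
labels = {n: int(s > 0.5) for n, s in scores.items()}
print(labels)  # b and c side with 'a'; d and e side with 'f'
```

With just two labels, the graph structure alone classifies the other four nodes — the essence of why connecting multiple sources into one graph pays off for learning.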
Reinforcement learning is gaining traction in the AI field as it extends predictive analytics to fully autonomous, optimised control of complex systems. It has many applications in robotics, autonomous driving, industrial control and even gaming. This talk presents the main theoretical concepts, applicable algorithms and simulation environments for agent development. There will also be a discussion of the remaining challenges and limitations of this approach.
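As a flavour of the algorithms in this family, here is tabular Q-learning on an invented five-state corridor: the agent starts in state 0 and receives reward 1 only on reaching state 4. The environment and hyperparameters are made up for illustration.

```python
import random

random.seed(0)

N, GOAL = 5, 4
ACTIONS = (-1, +1)           # move left / move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # walls clamp the position
        r = 1.0 if s2 == GOAL else 0.0
        future = 0.0 if s2 == GOAL else max(Q[(s2, act)] for act in ACTIONS)
        # the Q-learning update: nudge Q(s, a) toward r + gamma * max Q(s')
        Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
print(policy)  # each non-goal state should prefer +1 (move right)
```

The same update rule, scaled up with function approximation and richer simulators, underlies the robotics, driving, and gaming applications the talk surveys — as do its open challenges, such as sample inefficiency and exploration.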