Exceptional digital experiences are vital for success in the competitive online travel landscape, with highly engaged customers being 4 times more likely to refer brands to their friends, family and connections. With over 80% of holiday bookings happening online, improving digital customer experience is a critical objective for this industry. This session details the journey taken by TUI, which resulted in a model that enables TUI’s digital team to improve customer experience and predict revenue growth.
For companies that want to learn and iterate quickly, there is no substitute for scalable experimentation practices. As machine learning techniques become increasingly democratized, the burden of rigorous work has shifted, and should continue to shift, to how effectively models are tested and validated. In the pure online publisher space this problem is relatively easy: A/B testing practices are common enough that it has largely been “solved”. But what about when A/B testing doesn’t work? Sometimes real-world release mechanics or marketplace effects make traditional online experimentation impractical or, worse, can lead teams to the wrong conclusions.
This session will cover the importance of good experimentation practices, and some of the tools and techniques that Uber uses to run experimentation at scale.
Johannes will share insights into recent advances in setting up operational analytics products at adidas, along with lessons learned along the way.
Most organisations recognise that data is useful, but they fail to measure and manage it appropriately across its lifecycle. This often stems from a lack of visibility into the quantity, condition, storage and usage of data. Anmut has been working alongside Highways England on a groundbreaking new methodology for valuing their data assets, bringing with it the ability to inventory data, manage it and make informed investment decisions. Data valuation has been discussed for over 20 years; now, with the help of machine learning and predictive analytics, a valuation method has been cemented that is both rigorous and robust. In this talk, the importance of valuation will be discussed, along with how machine learning made the impossible possible.
The first generation of Automated Machine Learning tools (from DataRobot, H2O, Auger.AI and others) enables data scientists and business analysts to train with thousands of algorithm and hyperparameter combinations to generate the best possible predictive models. After uploading the data as a spreadsheet and waiting for training to finish, the user selects the best model from the leaderboard and is ready to make predictions.
Recently, several new AutoML products (from Google Cloud AutoML Tables, Microsoft Azure AutoML and Auger.AI’s open source A2ML API) have introduced the ability to automate the full AutoML process. Their APIs each support several phases in a pipeline: Importing Data, Training Against Algorithms and Hyperparameters, Evaluating Models, Deploying Models, Predicting Against New Data, and finally Reviewing the Performance of Models. These products all emphasize automated use by developers, not analysts uploading spreadsheets and viewing leaderboards.
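The pipeline phases named above can be sketched as a fluent API. This is a minimal illustrative sketch only: the class and method names here are hypothetical and do not reproduce any of these vendors’ actual APIs; candidate models are stubbed with precomputed scores in place of a real training search.

```python
# Hypothetical sketch of the six AutoML pipeline phases named above:
# import, train, evaluate, deploy, predict, review. Not a real vendor API.

class AutoMLPipeline:
    def __init__(self):
        self.data = []
        self.models = []
        self.deployed = None

    def import_data(self, rows):
        self.data = rows
        return self

    def train(self, candidates):
        # each "candidate" stands in for an algorithm/hyperparameter combo;
        # sorting by score produces the leaderboard
        self.models = sorted(candidates, key=lambda m: m["score"], reverse=True)
        return self

    def evaluate(self):
        # the best model tops the leaderboard
        return self.models[0]

    def deploy(self, model):
        self.deployed = model
        return self

    def predict(self, row):
        return self.deployed["predict"](row)

    def review(self):
        # placeholder for monitoring deployed-model performance over time
        return {"deployed": self.deployed["name"]}


pipe = AutoMLPipeline()
candidates = [
    {"name": "gbm", "score": 0.91, "predict": lambda row: row["x"] > 0},
    {"name": "linear", "score": 0.84, "predict": lambda row: row["x"] > 1},
]
best = pipe.import_data([{"x": 1}, {"x": -2}]).train(candidates).evaluate()
pipe.deploy(best)
print(best["name"], pipe.predict({"x": 2}))
```

The point of the chained calls is that every phase is addressable from code, so the whole loop can run unattended rather than through a spreadsheet-and-leaderboard UI.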
Now that the full A2ML process can be automated, new frontiers in exploiting AutoML’s capabilities open up. Business logic in applications can be replaced by predictive models automatically generated from any data the developer has access to. Painstakingly coded sorting of results and lists of objects (accounts to manage, contacts to call, devices to maintain, trucks to route) can be replaced by a predictive model’s ranking. Complex cascades of if-then-else and switch statements (also known as business rules), derived from the judgment of some “subject matter expert” or business person, can be replaced by the insights of a predictive model.
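The substitution described above can be shown side by side. In this hedged sketch, the rule cascade, the account fields and the scoring function are all invented for illustration; the lambda stands in for a trained model’s scoring call (e.g. something like `model.predict_proba`).

```python
# Hypothetical contrast between hand-coded business rules and a
# model-driven ranking; all field names and weights are illustrative.

def priority_by_rules(account):
    # cascade of hand-maintained if/else "business rules"
    if account["overdue_days"] > 30:
        return 3
    elif account["value"] > 100_000:
        return 2
    elif account["last_contact_days"] > 14:
        return 1
    return 0

accounts = [
    {"id": "A", "overdue_days": 40, "value": 5_000, "last_contact_days": 2},
    {"id": "B", "overdue_days": 0, "value": 150_000, "last_contact_days": 20},
]

# stand-in for a trained model's score: one learned function replaces
# the whole rule cascade
model_score = lambda a: 0.01 * a["overdue_days"] + 1e-6 * a["value"]

by_rules = sorted(accounts, key=priority_by_rules, reverse=True)
by_model = sorted(accounts, key=model_score, reverse=True)
print([a["id"] for a in by_rules], [a["id"] for a in by_model])
```

The ranking code shrinks to a single `sorted(..., key=model_score)` call, and improving the ordering becomes a matter of retraining on fresh data rather than editing the rule cascade.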
This use of AutoML has a far wider audience than just data scientists. Enterprise application developers can be far more productive, and the amount of hard-coded business logic in applications will steadily shrink as predictive models take its place. Software has eaten the world; with second-generation AutoML, AI will now eat software.
This talk will describe in more detail what second-generation “Automated AutoML” entails, along with several use cases where we have put it into effect for various applications and business problems.
Many companies struggle with managing data science projects and get bogged down in unexpected difficulties. Newcomers to the data science field also face significant problems hiring specialists and organizing product teams efficiently. So although data science is a well-developed field, managing data science projects is still quite challenging.
Nikita is Chief Data Officer at S7, a Russian airline with over 100 planes in its fleet. Three years ago the company launched its first data science projects with only two data science project managers involved. It now has a team of around 40 specialists spanning product management, data engineering, data science, software development and business analysis. Over these three years the team has successfully implemented several state-of-the-art machine learning products in core areas of the business.
Nikita believes that their experience and ideas can help you and your organization launch and operate data science teams more effectively. In his talk he will cover the following areas of data science management: roles and competences, organizational structure, product management frameworks, use of the Data Lake, Data Catalogue and Data-as-a-Service, discovery of new business cases and many others.
The pilot project, Big Data Analysis for HR Efficiency Improvement, was established as part of a development-oriented strategy supporting ICTs as an enabler of data-driven public administration in Slovenia. It was run within the Ministry of Public Administration of the Republic of Slovenia in collaboration with Dell EMC as an external partner, and was launched to learn what big data tools could enable in researching the ministry’s HR data to improve efficiency. To that end, anonymised internal data sources covering time management, HR, finance and public procurement were combined with external sources, including employees’ postal codes and weather data, to identify potential improvements and possible patterns of behaviour. The results showed considerable potential for improvement in HR and for lowering public procurement costs within the ministry. In this session Karmen will present the project.
Joost and Egge will jointly share their experiences with data science in healthcare, drawing on best practices from a large academic hospital (Erasmus MC) and an impactful Dutch data science scale-up in healthcare (Pacmed). They will explain why unequal care is imminent and, in fact, desirable. The talk will cover lessons learned in both technical areas and process, focussing on the balance between analytical rigour and explainability that leads to trusted results and wider adoption of innovative approaches. Joost and Egge will also share ideas on reshaping healthcare (and healthcare education) to give it the agility needed to face future disruptions.
Group Nine reaches nearly 45M Americans each day, totalling nearly 3 billion minutes of video engagement per month. Drawing on experience across multiple content categories, Ashish and Juan Pablo will present a framework, applicable well beyond media, for identifying appropriate opportunities to deploy AI/ML/data science for video media analysis, and will discuss some of the technical choices (e.g. build vs. buy) in their decision-making process. They will cover the decisions they’ve made about what to automate and where to respect human creativity and processes, the analysis behind those decisions, and ways to promote implementation in organizations where human tasks are perceived as under threat. Finally, they will explain why they decided to stop short of full content automation.
A European shipping company was looking to gain a competitive advantage by leveraging ML techniques. The aim was to create shipping-lane-specific demand forecasting and implement it throughout its operations, in order to save time and manual labor, adjust pricing and business agreements, and allocate resources intelligently. Each percentage point of improvement in forecast accuracy is worth $1.5 million.
In order to operationalize a model effectively you need to cross three chasms: 1) establishing business relevance, 2) operationalizing the model and 3) translating predictions into business impact. In this talk, Moran will explain these three elements using this real-world case study. He will highlight common mistakes when operationalizing a machine learning model in an enterprise environment and how to avoid them.
This talk is ideal for Data Scientists, Product Managers, Development Managers and other business stakeholders that work with Data Scientists.