Using human-centered design can reduce the risk of failure associated with advanced analytics projects and help drive innovation for employees and customers
Firms are at risk of wasting billions on failed analytics and data science projects over the next 3 years:
Over the past decade the failure rates on analytics projects, particularly advanced analytics projects, have been startlingly high. Business adoption of data and analytics continues to be an issue, and firms grossly underestimate the failure rate of these projects
· The vast majority (77%) of businesses report that adoption of big data and AI initiatives is a challenge, often because they're designed in silos, tackle the wrong use cases, and business leaders don't think they'll deliver value
· Unsurprisingly, 85% of data projects end up failing and do not move past the preliminary stages, let alone transform business processes, functions, or end experiences
· Despite high failure rates, firms are looking to increase their investment in data and analytics, with 92% saying they will increase their spend over the next year
These failure rates might not be alarming if spending on these initiatives were small and not growing rapidly. However, firms spent close to $37.5 billion on AI systems in 2019, and IDC estimates that figure will exceed $97.9 billion by 2023. If the success rate on these projects does not improve, as much as $90 billion may be spent on initiatives that fail.
Human-centered design can help us design projects that overcome these barriers:
Over 3 years ago we pushed our analytics and data science teams to approach all their projects using design thinking, making human-centered design a cornerstone of how we approach every project, end to end. This has resulted in a 90% success rate on projects moving past the POC phase AND having those projects integrated into the day-to-day operations of our businesses.
We approach all projects in 3 phases:
Phase 1 — Inspiration and Empathy:
Before we begin any significant analysis or touch a single piece of data, we start with user/stakeholder research and background research. Our goal is to truly understand the needs and pain points of the space we're looking into.
If we are tackling forecasting for our finance team, our goal is to understand which KPIs to model and how the team handles the process today: what's easy, what's hard, and how they could potentially use our solution in the future. It's all about empathy: understanding the process as it exists today and thinking about how we can design solutions that improve experiences for customers and employees in the future.
We want to understand not only which models to use, but also what the solution itself looks like. Will the team need to do simulation modeling or what-if analysis? How will they use explanations from the model? And how do we showcase model results in a way that creates the most positive experience for our employees or customers?
Phase 2 — Ideation and Exploration:
This phase begins with hypotheses based on the work done in Phase 1. We then proceed with Exploratory Data Analysis to test some of those initial hypotheses, deepen our understanding of the domain, and guide our initial thinking on feature engineering.
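A first EDA pass of the kind described can be sketched in a few lines of pandas. The dataset, column names, and numbers below are purely illustrative, not from an actual project:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly finance data; all values are synthetic.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "month": pd.date_range("2022-01-01", periods=24, freq="MS"),
    "revenue": rng.normal(100, 10, 24).round(1),
    "marketing_spend": rng.normal(20, 3, 24).round(1),
})
df.loc[3, "revenue"] = np.nan  # simulate a data-quality gap

# Basic EDA: summary statistics, missingness, and a first correlation check
print(df.describe())
print(df.isna().sum())                             # where are the gaps?
print(df["revenue"].corr(df["marketing_spend"]))   # a first hypothesis check
```

The point of a pass like this is less the numbers themselves than the questions it surfaces for stakeholders: why is a month missing, and does an observed correlation match how the team believes the process works?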
We then brainstorm what a solution looks like, what the key requirements are, and how it would fit into the workflow of our employees or the journey of our customers, both as a prototype/MVP and in the future vision for our solution.
It is critical to create prototypes based on the feedback from these sessions and get those prototypes into the hands of stakeholders, subject matter experts from the business, and end users/customers. When building a recommendation engine, we started with high-fidelity wireframes and mockups of the experience; for forecasting models it is often dashboards, along with the parameters for any what-if analyses we should include; and for marketing mix models our prototypes show how the models would enable decisions about investments across all channels.
How these get built varies. Typically, we start with whiteboard sketches, move to tools like Figma or Sketch, and then build higher-fidelity versions using Python libraries or R Shiny, or in React. These don't need to be perfect and ready to industrialize. The focus should be on things that can be built quickly to show how the solution will work, and then on the minimum version needed to integrate with our models and pilot with our business units.
Finally, there's the development of the models themselves. We start broadly on model and feature selection and aim to down-select 2–5 models for extensive tuning and feature engineering. Our goal is to balance more transparent, explainable models against overall model performance (accuracy, cost to implement, etc.). Depending on the difference in performance between our models, we may not be able to use transparent models; in those cases we spend time explaining our best-performing models using techniques such as LIME or Shapley values.
There's no one-size-fits-all approach, but we typically find we'll spend as much time, if not more, ensuring our models and features are explainable as we do actually developing and selecting them.
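The down-selection step above can be sketched with scikit-learn. Everything here is illustrative: the dataset is synthetic, the candidate list is a toy example, and where the text names LIME and Shapley values, this sketch swaps in scikit-learn's simpler permutation importance as a stand-in for the explainability step:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic regression data standing in for a real business dataset.
X, y = make_regression(n_samples=300, n_features=5, noise=10, random_state=0)

# Start broad: score several candidates, from transparent to black-box.
candidates = {
    "linear (transparent)": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean R^2 = {s:.3f}")

# If a black-box model wins by a wide margin, invest in explaining it
# (here via permutation importance; LIME/SHAP would be the richer options).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
best = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(best, X_te, y_te, n_repeats=10, random_state=0)
print("feature importances:", imp.importances_mean.round(3))
```

The comparison makes the transparency-vs-performance trade-off concrete: if the transparent model scores within an acceptable margin of the black-box models, it is usually the better choice.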
Phase 3 — Implementation and Experimentation:
This is often our longest phase. The goal is to work with our business and stakeholders to introduce our prototype into the real world and evaluate it. That requires working with the business to develop the parameters of our experiment design, understand what a limited release would look like (where and among which groups), and then evaluate the prototype solution against existing benchmarks or a control group.
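One minimal way to evaluate a limited release against a control group is a two-proportion z-test on a conversion-style metric. The sketch below uses only the standard library, and the pilot numbers are entirely hypothetical:

```python
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the two group rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, computed via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical pilot: conversions with the prototype vs. the control group.
z, p = two_proportion_z(success_a=230, n_a=1000, success_b=180, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # decide whether the observed lift is significant
```

In practice the experiment design work is deciding the metric, the groups, and the sample size up front; the test itself is the easy part.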
Based on the results, we build or refine the business case for our solution against the roadmap and effort needed to industrialize and expand it. Then we either go through expedited versions of Phases 1 and 2 for the broader release, or we expand the MVP for further evaluation while building the next version of the solution.
This phase also almost always requires a rethink of business processes and incentives. A properly designed evaluation will also provide feedback on how we may need to change the way the team works, or how their current incentive framework may be at odds with how we want our solution to work.
What this looks like in practice:
The process above depicts our actual project plan for a large Advanced Analytics MVP (think building a recommender system). The process looks very much like an “agile” approach but should fit in any solution development process. Our key goals are to ensure that:
· We're delivering some form of insights deliverable or prototype for feedback every two weeks
· No project exceeds 16 weeks before going into phase 3, with an ideal goal of getting to phase 3 in 8–12 weeks, if not sooner
· We have at least 2 check-ins with our “steering committee” for feedback and assistance with any key decisions that need to be made
This ensures that we never go too long without getting broader feedback on our solution and keeps us honest on how we scope and develop our solution. Our goal is to get to something good enough to pilot with our business partners within 1 quarter.
While no process is perfect, we've found the approach above highly successful in getting our Advanced Analytics solutions adopted by our business partners and actually driving change. We hope more organizations will find success using design-led approaches to developing analytics projects, because sustained high failure rates are a detriment to all analytics professionals.
Source: NewVantage Partners, Big Data and AI Executive Survey 2019