Unfortunately, companies struggle to convert their increased investment in data science talent and projects into value. Gartner recently estimated that 85% of big data projects fail!
In fact, this struggle is not new. For example, in 2015, MIT reported that “While businesses are hiring more data scientists than ever, many companies are struggling to realize the full organizational and financial benefits from investing in data analytics” (Stein, 2015).
Similarly, a 2014 Capgemini study found that:
- “Only 27% of big data projects are regarded as successful”
- “Only 13% of organizations have achieved full-scale production for their Big Data implementations”
- “Only 8% of the big data projects are regarded as VERY successful”
Project failures stem from both technical and non-technical issues. An Infochimps survey identifies inaccurate scope, cited by 58% of respondents, as the most common reason for big data project failure. Domino Data Lab blames “gaps in process and organizational structure, and inadequate technology” as the primary culprits (Domino Data Lab, 2017).
David Becker clustered commentaries on big data project failures in a 2017 research paper. We further categorized these into technology-driven failures (shown in gray) and failures driven by project management and organizational issues (shown in red). Notably, 62% of the failures stemmed from these latter issues, not from technical ones.
A similar analysis of the Capgemini survey reveals a comparable mix of technical, project management, and organizational challenges that hinder success.
Ad Hoc and Software Engineering Project Management
Without established, clear methodologies for data science project management, organizations often resort to ad hoc project management or manage data science as if it were software engineering. Both approaches have limitations that can compromise the chances of project success.