The promise of advanced analytics is tantalizing for many companies, particularly for their marketing organizations. According to eMarketer/Forbes, nearly two-thirds of marketers embrace big data as a way to enhance the customer experience and improve marketing strategy. Predictive modeling is touted as the best way to use detailed, high-volume transaction data to dissect consumer intent signals and purchase patterns. While big data won't solve all of a business's problems, opportunities exist to enhance loyalty programs, advance targeting capabilities, and make better use of customer feedback.
However, while advanced analytics does indeed hold such promise, few are discussing big data's dirty little secret: big data is in deplorable shape. It's not stored properly, it's not protected properly, it's not timely, it's incomplete, and it may not even be accurate.
The promise of predictive analytics
While more than 90% of businesses say they are interested in deploying predictive analytics, only 25% have done so successfully, meaning they have fully implemented modeling efforts and use them to inform business functions such as inventory control, marketing communications, and human resource management (source: eMarketer). Deploying predictive analytics allows these businesses to measure and optimize their operations, particularly for customer intelligence.
There is more data available than ever before, coming from loyalty programs, customer transactions, and other interactions, and capable of informing programmatic advertising, online consumer behavior analysis, and geographic targeting. In the hands of a capable statistician, this data can be molded into insights and translated into one-to-one marketing communications, building relationships with a company's best clients that stand the test of time. It can also improve impact measurement for marketing programs, yet most marketers have not fully realized the benefits of their own data.
Key stumbling blocks
There are at least three major stumbling blocks to implementing an effective big data strategy, and all of them arise before an operational model is built or a single machine learning algorithm is run.
- Data may not be a sufficient focus of the organization. If it is seen as a byproduct instead of an asset, there is a good chance it isn’t getting the attention it needs.
- There may be no data governance policy in place. Data governance incorporates process management, security, privacy, and regulation compliance.
- Personnel may be poorly equipped to handle data, maintaining its integrity and quality from collection to storage. Frontline employees may be manually entering data in an inconsistent manner, and other departments may be siloing their data, housing it in different systems that don’t talk to each other.
The problem of data inadequacy
Maybe the data is considered “good enough.” No organization believes their data is perfect, after all. But what happens when less than adequate data is modeled?
While predictive analytics disciples would like to believe that math alone drives modeling output, the reality is that algorithms are created by humans, who bring their own biases and knowledge limits. Every model is nearly as much art as it is science. If the data is inadequate (inconsistent, incomplete, duplicated, outdated, or simply inaccurate), the model fueled by that data will be far less precise still.
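The damage from inadequate data shows up even before any model is trained. A minimal sketch, using hypothetical customer records, of how duplicated rows and inconsistently entered names skew a simple metric that a model would otherwise consume:

```python
# Hypothetical records: the same customer entered three times,
# once with different casing and once as an exact duplicate.
records = [
    {"customer": "Acme Corp", "spend": 1200},
    {"customer": "ACME CORP", "spend": 1200},  # same customer, different casing
    {"customer": "Acme Corp", "spend": 1200},  # exact duplicate row
    {"customer": "Bolt Ltd",  "spend": 300},
]

# Naive view: four "customers" and an inflated total spend.
naive_total = sum(r["spend"] for r in records)

# Cleaned view: normalize the key and collapse duplicates.
cleaned = {}
for r in records:
    key = r["customer"].strip().lower()
    cleaned[key] = r["spend"]  # keep one spend figure per real customer

clean_total = sum(cleaned.values())

print(naive_total)  # 3900 -- Acme counted three times
print(clean_total)  # 1500 -- one row per real customer
```

Any segmentation or lifetime-value model built on the naive view inherits this distortion, which is why deduplication and normalization come before modeling.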
Simple steps to improve data
Implementing an effective big data strategy requires a multi-year commitment from even the most sophisticated organizations, and the task of organizing and standardizing the data itself can take years. But there are a few things companies can do immediately to improve their chances for success in a data-driven world.
- First, make sure that top management is driving the focus on data. Without clear consensus and support from leadership, resources cannot be allocated to improving the data itself.
- To improve data governance, start with front-line employees. Spelling mistakes, typos, and skipped fields are common; constrained entry menus can prevent many of these errors, and real-time verification can catch the rest at the source.
- To ensure reliable data availability, implement a strategy for storing and accessing data in a consistent manner. This may mean implementing a large database that other internal systems feed into on a regular (weekly, daily, or real-time) basis.
- Be sure to take the time to recruit and empower the talent needed to manage this data. Strong coders with domain knowledge are critical to keeping data organized and updated.
- Finally, consider what needs to be measured and start gathering additional potentially valuable data now. Opportunities exist across the organization to improve sales efforts, talent retention, and website management. Consider data quality from the beginning with these efforts.
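The governance step above, catching errors at the point of entry, can be sketched in a few lines. This is a hypothetical validator, not any particular form library: a fixed menu for a constrained field plus simple real-time checks on free-text fields.

```python
import re

# Menu-driven field: employees pick from a list instead of typing.
STATES = {"CA", "NY", "TX"}  # hypothetical allowed values
# Lightweight email shape check (not full RFC validation).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_entry(entry):
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    if not entry.get("name", "").strip():
        problems.append("name is required")
    if entry.get("state") not in STATES:
        problems.append("state must come from the menu")
    if not EMAIL_RE.match(entry.get("email", "")):
        problems.append("email looks malformed")
    return problems

good = validate_entry({"name": "Ana", "state": "CA", "email": "ana@example.com"})
bad = validate_entry({"name": "", "state": "Calif.", "email": "ana@"})
print(good)  # [] -- accepted
print(bad)   # three problems -- rejected before it reaches storage
```

Rejecting the bad record at entry is far cheaper than finding "Calif.", "CA", and "California" mixed together in the warehouse later.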
Longer term, it will be necessary to append other information to the data, ensuring your customers can be reached by phone, email, and mailing address. Demographic data can be important for customer segmentation and for understanding product choice. Data partnerships can drive greater insight and usability.
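An append of this kind is, at its core, a join on a shared customer identifier. A minimal sketch with hypothetical records and a hypothetical partner file, showing both an enriched record and the gap an append service would fill:

```python
# In-house customer list, keyed by a shared customer ID.
customers = [
    {"id": "C001", "email": "ana@example.com"},
    {"id": "C002", "email": "ben@example.com"},
]

# Demographic attributes from a hypothetical data partner.
demographics = {
    "C001": {"age_band": "35-44", "region": "West"},
}

# Append partner attributes; mark missing matches explicitly.
for c in customers:
    c.update(demographics.get(c["id"], {"age_band": None, "region": None}))

print(customers[0]["age_band"])  # "35-44" -- enriched record
print(customers[1]["age_band"])  # None -- gap for the append process to fill
```

The explicit `None` values matter: they let segmentation work distinguish "unknown demographic" from a real attribute, instead of silently dropping unmatched customers.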
Savvy marketers with an appropriate focus on data quality should expect to see models that improve business within a year of implementing data quality initiatives, and quality inputs to business-driving mathematical models can deliver positive ROI in under two years. Ultimately, it is the commitment to quality data that drives better business results.