Ness Panel Session: Maximizing Revenue by Unleashing the Latent Value of Your Data

I recently chaired two panel sessions on a hot topic that cuts across all industries and sectors: the core role that data plays in creating the seamless digital experience consumers demand. Whether it’s analytic insights, machine learning engines that can chat with users, or a personalized online experience for consumers – it all starts with clean, usable data.

Almost every company is trying to monetize its data, whether by using it to improve the efficiency of internal processes, to increase revenue from customers, or simply by selling the data outright to third parties. No wonder we are seeing an “arms race” to acquire data, with splashy acquisitions like Microsoft buying LinkedIn and IBM acquiring The Weather Company’s digital assets, including weather.com.

The New York session featured Bruce Kratz, CTO of Sparta Systems, which produces quality and compliance management systems for the life sciences industry. The Los Angeles session featured Mark Berner, VP of Engineering for TiVo, whose main product today is not Digital Video Recorder hardware but rather software and services that power entertainment experiences. At first glance, these seem like very different universes. But, when they talk about data, their plans, impediments and advice are strikingly similar.

Some examples: For both TiVo and Sparta, data (rather than some tangible object) is one of the main products they provide to their customers. Both companies leverage aggregate data from devices or system transactions to extract insights and to continually improve the overall product experience. Both companies are extremely sensitive to concerns around data security and privacy and exercise strict requirements to anonymize personal data.

The audience in both venues came from a diverse set of industries, including banking and financial services, media, government and retail. They chimed in with a number of fascinating use cases all centered around data, from Know Your Customer (KYC) regulatory requirements to General Data Protection Regulation (GDPR) to data-driven insurance offerings.

Another strong message from all participants: Machine Learning is real, not hype. It can bring benefit to all stages of the product lifecycle, from automating the cleansing of dirty data as it is ingested, to determining the optimal order in which to run regression tests so they will fail faster, to providing chatbot interfaces to reduce the cost of communicating with customers.

When the discussion turned to impediments, the level of commonality was even more striking:

  • Many companies have a poor understanding of their corporate data assets, due to legacy systems and inorganic growth. Answering even a simple question, such as which table is the authoritative source for a data value that appears in multiple tables, can require months of sleuthing by business analysts.
  • Enterprises across the board are having trouble integrating data from different sources into a common data lake from which insights can be extracted. The source data often suffers from major quality issues like missing or invalid values. When the data is of poor quality, so are the insights derived from that data. There are often multiple systems that can define the value of the same piece of data, with no capability to detect, reconcile or distribute a single consolidated value. There is often no way to share information about clean data that has already been harvested, so the same cleansing work is done over and over by different departments.
  • Many companies suffer from a skills gap. Good Big Data architects and engineers are hard to find and keep, especially in tech centers like Silicon Valley or New York City. When it comes to Machine Learning, there are even more companies pursuing even scarcer talent.

If you are feeling discouraged by this bleak picture, cheer up: Our panelists and our audience had some excellent advice and recommendations that transcend specific industries:

  • Benchmark against your competition, to calibrate what is considered “table stakes” in your industry.
  • Make sure you have a solid use case that provides tangible value to the business, and a solid sponsor from the business side. Otherwise, your Big Data initiative will be perceived as technology in search of a problem.
  • Look for a quick win by implementing a Minimum Viable Product (MVP). Otherwise, the business side of the house will rapidly lose patience.
  • Decide your company’s core competency and focus on that. For all other “plumbing” issues like Big Data architecture, get help from a partner with experience in Big Data and Machine Learning across your industry and other industries.

Tolstoy once wrote, “Happy families are all alike; every unhappy family is unhappy in its own way.” Companies that are struggling to monetize their data may be surprised to discover that their unhappiness is shared by many other companies in many other industries, and for many of the same reasons. Fortunately, these “unhappy” companies can adopt the common strategies used by more successful companies to unleash the latent value of their data.