by Jacob Caffrey / September 15, 2023
Data is changing the way the world works.
Across industries, businesses are rushing to implement data-based methodologies and practices.
Most recently, the boom of artificial intelligence has transformed how companies approach data analysis. At G2, we identified this growing need to implement data strategies and built out optimized solutions to help our customers gain an edge in the market.
This summer, I joined G2 as an intern on our data solutions team. Our team focuses on providing alternative data insights to more than 70 venture capital (VC), private equity (PE), hedge fund, and consulting firms to support their software investment strategy.
Alternative data refers to a type of data that is gathered outside of traditional sources. Stemming from G2’s main platform, our data solutions product is a strong resource for investment firms’ sourcing, diligence, and portfolio management efforts.
The intersection of data analytics and investing is fascinating to me, and I was given the freedom to jump into my own data project. Using Snowflake, a scalable data cloud software, I worked on one of our investor reports datasets.
While full of valuable information, this dataset's unstructured nature made it difficult to digest and create actionable insights. In my weeks working on the dataset, I was able to condense the data, quantify information, and create my own custom scoring system to provide a comparison metric across multiple products and timelines.
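A custom scoring system like this typically normalizes several metrics and combines them with weights. As a toy sketch (the fields and weights here are hypothetical, not G2's actual methodology), it might look like:

```python
# Hypothetical weighted scoring: each metric is pre-normalized to [0, 1],
# then blended into a single score comparable across products and periods.
WEIGHTS = {"mentions": 0.5, "sentiment": 0.3, "recency": 0.2}

def score(metrics: dict[str, float]) -> float:
    """Combine normalized metrics into one weighted score."""
    return sum(WEIGHTS[key] * metrics[key] for key in WEIGHTS)

print(round(score({"mentions": 0.8, "sentiment": 0.6, "recency": 1.0}), 2))  # 0.78
```

Because every product is reduced to one number on the same scale, scores can be ranked within a quarter or tracked over time.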
While I felt satisfied learning about the nuances of data cleaning and how to make insights more visible, I still wanted to understand what separated a good dataset from a bad one.
The Cambridge Dictionary defines a dataset as a collection of separate sets of information that are treated as a single unit by a computer.
It is easiest to imagine a dataset as a large table of cells, much like what you would see in a spreadsheet. Each cell would represent a data point, with correlating information from the row and column that contributes to the contents of that data point. Using this example, the dataset is the entire table of cells acting as a single unit.
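The table-of-cells analogy maps directly onto a DataFrame. A minimal illustration with pandas (the values are made up):

```python
import pandas as pd

# A tiny dataset: each row is a product, each column an attribute,
# and each cell a single data point.
df = pd.DataFrame(
    {
        "product": ["CRM A", "CRM B", "CRM C"],
        "review_count": [120, 85, 40],
        "avg_rating": [4.5, 4.1, 3.8],
    }
)

# A single cell is identified by its row and column together.
print(df.loc[1, "avg_rating"])  # 4.1
```

The whole DataFrame, acting as a single unit, is the dataset.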
Data can come in many shapes and forms. While G2 hosts large amounts of open data – data that can be accessed, used, and redistributed freely by everyone – we have multiple data products that reveal unique insights.
Commonly, our customers receive data via an AWS S3 bucket or through Snowflake. After uploading datasets into their system, customers can perform any type of data analysis that fits their needs. Data analysis can include building data visualization tools, creating complex algorithms to predict outcomes, or harnessing artificial intelligence to drive efficiency.
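Once a delivered file is loaded, analysis is straightforward. In this sketch an in-memory buffer stands in for a CSV that would in practice be downloaded from an S3 bucket or queried from Snowflake (the columns and values are illustrative):

```python
import io
import pandas as pd

# Simulate a delivered CSV file; in production this would come from
# an S3 object or a Snowflake query result.
raw_csv = io.StringIO(
    "product,category,review_count\n"
    "CRM A,CRM,120\n"
    "CRM B,CRM,85\n"
    "Helpdesk X,Support,60\n"
)
df = pd.read_csv(raw_csv)

# From here, any analysis that fits the customer's needs can run,
# e.g. aggregating review volume by software category.
by_category = df.groupby("category")["review_count"].sum()
print(by_category.to_dict())  # {'CRM': 205, 'Support': 60}
```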
While it is becoming more and more prevalent today, data was not always a large part of business strategy. Until recently, companies were able to grow and thrive without the use of complex datasets. This raises the question: why are datasets so important?
Datasets can provide additional benefits to a business by addressing pain points, revealing unique insights, and providing signaling and automation in business operations.
Every business faces challenges, and a lack of information can often be a cause. Datasets that are built well address the lack of information that cannot be gleaned from traditional sources. An article from the Man Institute points out that with the emergence of alternative data sources, “users of this data can maintain their edge by using their modeling expertise and market knowledge to overcome holes and gaps in information available to investors.”
If a business were a person, data would be its food and water: essential for survival. If your business's body is aching, it is important to find data that can complement your high-level insights and fill in any gaps. But datasets don't just have to fill in the gaps; they can also reveal entirely new perspectives when addressing a problem.
Gaining access to unique insights is nothing new in the business world. If everyone had access to the same information, it would be difficult to innovate and outperform competitors.
Harnessing alternative datasets is a growing means of acquiring this competitive advantage. With more information, businesses are exposed to new perspectives and are able to enrich their decision-making. Once they have painted the full picture by addressing their own pain points and expanding their market perspective, data can also be utilized to automate these practices.
Improving accuracy and efficiency is one of data’s greatest strengths. By identifying key data signals, businesses are able to refit their business strategy to align with data-backed KPIs. In doing this, businesses naturally create workflows that trigger automatic action when certain inflection points are reached.
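A signal-triggered workflow of this kind can be as simple as a threshold check. In this hypothetical sketch (the KPI and threshold are assumptions, not a real firm's rules), an automated alert fires when quarter-over-quarter growth crosses an inflection point:

```python
# Assumed inflection point: 25% quarter-over-quarter growth.
THRESHOLD = 0.25

def check_signals(growth_by_product: dict[str, float]) -> list[str]:
    """Return the products whose growth signal crossed the threshold,
    i.e. the ones that should trigger an automatic action."""
    return [
        name
        for name, growth in growth_by_product.items()
        if growth >= THRESHOLD
    ]

alerts = check_signals({"CRM A": 0.31, "CRM B": 0.12, "Helpdesk X": 0.27})
print(alerts)  # ['CRM A', 'Helpdesk X']
```

In a real pipeline the triggered action might be a notification, a re-scoring job, or an entry in a sourcing queue.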
Take a private investment firm, for example. Before modern data science, investment firms had to perform extensive sourcing and due diligence before deciding where to invest. With access to modern alternative datasets, many firms can simply upload their datasets into an aggregation tool and run complex modeling and algorithms to speed up their decision-making process. By doing so, businesses save money, improve accuracy, and control the quality of their processes.
While it may be tempting to create a dataset that contains every piece of data available, that is not always the most effective way to create value.
Data quantity is a straightforward concept and refers to how much information is available in a dataset. However, data quality is a more complex idea. While having strong data quality could mean a variety of things, Acceldata.io’s CEO Rohit Choudhary states that “aspiring to have reliable, accurate, and clean data should still always be a top priority.”
In other words, the value of datasets is not determined by the amount of coverage they offer but rather by their ability to provide actionable information to users.
When designing a dataset, you want your data to be reliable and accurate. At G2, we are able to directly connect our review data to software users who left those reviews. When a direct connection is established between data and reality, users trust that data as they are able to easily identify its source and context.
Accuracy does not necessarily mean perfection. Accuracy means that the dataset will not lead users astray when drawing conclusions; accuracy also implies that the dataset delivers value in its area of competency.
Our review dataset does not claim to be a comprehensive representation of customer sentiment about a product, but it provides unbiased and validated reviews from real customers that can be used by software buyers, sellers, and investors. When the quality of your data is fundamentally sound, there will be value in your product.
This is not to say that having a large amount of data is a bad thing because it is not. Large quantities of data are valuable for enterprise projects or for addressing a wider range of use cases.
Furthermore, a large dataset encourages creativity in the data analysis process and provides more opportunities to uncover unique information.
To make the business case, data vendors are often able to sell their data products at a higher price point if there is more information in the dataset. On the other hand, vendors may not be able to sell the product at all if the quantity is allowed to compromise the quality.
While understanding the value of datasets can open the floodgates of imagination and innovation, there are still prevalent challenges that come with building datasets. Identifying and addressing these challenges head-on is important to the long-term success of a dataset.
Two common challenges that datasets face are a lack of obvious competitive advantage and weak dataset foundations that inhibit scalability.
The first challenge is creating a dataset that reveals unique information in a more effective way than other sources of data on the market. Building and selling datasets is much like any other product: you want it to be more valuable than its competitors.
At the end of the day, data buyers have limited budgets and limited bandwidth to procure and analyze data. To gain a competitive advantage, dataset providers must consider a lower price point, offer a greater variety of data, or deliver more actionable insights.
While it is true that more data is often better, it is important that dataset builders understand where their dataset fits into a greater data strategy to avoid this challenge.
Creating strong dataset foundations is another challenge that often gets overlooked when creating data products.
By dataset foundations, I am referring to the type of data gathered, the manner in which it is gathered, and the format in which it is presented. Lacking strong dataset foundations can lead to poor data quality, implementation challenges, and hinder scalability.
In fact, according to a report published by EY, “Some estimates put the cost of remediating a data quality error at ten times the cost of preventing it in the first place, and, by the time bad data causes strategic decisions to fail, the cost can balloon to 100 times.” Oftentimes, data providers are extremely focused on the product and opportunity that a dataset provides and can be blinded to the diligence that must be done in order to prepare for the future.
As datasets continue to add information, they must remain applicable down the road. Failure to address these challenges, as EY alludes to, leads to both financial and opportunity costs.
Now that you have a rundown on the importance of datasets, how to ensure your datasets prioritize quality over quantity, and some common pitfalls when crafting datasets, here are my two biggest tips to make sure you implement these ideas the next time you are working with a dataset.
In the shoes of a data buyer, you should be able to envision the use cases that the dataset will address. In the shoes of your sales team, imagine yourself selling the value of the dataset. In the shoes of the product team, you should be able to see the long-term growth and development of the dataset.
Viewing your product with different intentions and goals reveals other perspectives that highlight hidden strengths and weaknesses. If you are able to recognize the value of each stakeholder, your dataset has a good starting point.
If you are capable of teaching what each data point means and why it is useful, you build credibility in the dataset and can also ensure that it is digestible for users. If you are unable to effectively explain what a data point is and why it is included, that might be an indication that you have included too much information.
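One lightweight way to enforce this is a data dictionary that documents every column. A minimal sketch (the field names are illustrative):

```python
# A minimal data dictionary: every column gets a plain-language
# explanation of what it means and why it is included.
DATA_DICTIONARY = {
    "product_id": "Stable identifier for the product being reviewed",
    "review_count": "Number of validated reviews in the period",
    "avg_rating": "Mean star rating (1-5) across validated reviews",
}

def describe(column: str) -> str:
    """Explain a column, or flag it as undocumented; a column nobody can
    explain is a sign the dataset may contain too much information."""
    if column in DATA_DICTIONARY:
        return DATA_DICTIONARY[column]
    return f"{column}: undocumented -- reconsider including it"

print(describe("avg_rating"))
```

If a column sits in the "undocumented" bucket for long, that is exactly the signal described above that it may not belong in the dataset.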
Remember that you should never let the quantity of data diminish its quality.
Innovations in the data world are moving quickly. Being able to identify and implement the latest trends in data will help your product get a leg up. Staying up to date on the latest trends will help identify further use cases, address challenges, and prepare your dataset for the future.
Even if you are unable to fit in the newest innovation or the latest model, being aware of how the industry is shifting will help you shape your data strategy so that it has long-term value.
In my time working with our investor reports dataset, I have encountered both the good and the bad of working with datasets.
Data can improve efficiency and produce more calculated outcomes when dealing with a problem. Data can also cause systematic inaccuracies and foster overreliance on a product that cannot evolve.
Wondering how data can better serve your datasets? Learn more about data cleaning and why it’s essential to prioritize data quality.
Jacob Caffrey was a Data Solutions Intern at G2 for the Summer of 2023. He is currently pursuing a Business Administration degree at the University of Michigan.