
The DeepSeek Disruption: A Wake-Up Call for Big Tech?

February 5, 2025


The AI landscape just got more interesting. 

In a move that's shaking up the industry, DeepSeek has achieved what tech giants spent billions trying to perfect — an AI model that runs at 1/10th of the cost. 

With this event causing NVIDIA's stock to take a hit and OpenAI facing its first serious challenge, one question looms large: are we witnessing the democratization of AI, or is there more to this story than meets the eye?

Before you rush to download their open-source model or dismiss it as just another competitor, it’s important to understand the implications of this game-changing development. In my recent interaction with Tim Sanders, VP of Research Insights at G2, he unpacks what this shift means for the industry, its potential impact, and more.


This interview is part of G2’s Q&A series. For more content like this, subscribe to G2 Tea, a newsletter with SaaS-y news and entertainment.

Inside the AI industry with Tim Sanders

DeepSeek — everyone’s talking about it. What’s your take on it? Should U.S.-based companies like OpenAI be worried?

The emergence of DeepSeek's R1 reasoning model, built on its V3 base model, represents a potential paradigm shift in AI development. What makes this fascinating is how it challenges our assumptions about the scale and cost required for advanced AI models. 

I started following DeepSeek in December, watching their progression across model iterations. While the model gained significant attention at Davos, it wasn't until recent developments that its full implications became clear.

Two critical aspects stand out.

First, DeepSeek's approach potentially exposes what Clayton Christensen would call "overshoot" in current large language models (LLMs) from companies like OpenAI, Anthropic, and Google. In "The Innovator's Dilemma," Christensen describes how market leaders sometimes develop solutions that are almost too sophisticated and expensive, creating vulnerability to disruption from below. Think of how YouTube disrupted traditional television: while initially offering lower-quality content, its accessibility and zero cost to consumers revolutionized video consumption.

The second and more significant innovation is that DeepSeek figured out a way to run the model far more cheaply. Inference, the process by which the tool generates a prediction when you enter a prompt, is 90% cheaper. And because the model is open source, data scientists worldwide can download it and test it for themselves, and they're all reporting it's 10 times more efficient than what we had in the past.

All of this is interesting because the entire premise of an arms race for AI, with NVIDIA providing high-end GPUs and all the hyperscalers building massive data centers, is that you need huge amounts of computing power to offset the inefficiency of LLM inference. But DeepSeek's affordable innovation shows you don't. As a result, stocks like NVIDIA, companies that bet on high-cost infrastructure, have taken a big hit. 

However, this doesn't necessarily spell doom for established players. OpenAI, fortunately for them, is privately held, but the threat to the company is clear. That being said, I believe there's room for both. I believe OpenAI is still the best solution. Its latest o3 model demonstrates continued innovation, with features like Deep Research (available to $200-a-month Pro subscribers) showing impressive capabilities. 

Rather than complete displacement, we're likely seeing market expansion. DeepSeek definitely opens up possibilities for users seeking more affordable, efficient solutions while premium services maintain their value proposition.


So, DeepSeek is 90% cheaper, and they have proven that AI advancements can be made at a significantly lower cost. This sounds great, but are there any implications?

What's fascinating about this is that when people talk about DeepSeek achieving advances at lower costs, we need to understand what that means exactly. The cost reduction is real, but the implications aren't as straightforward as they might seem.

First, when we hear comparisons between DeepSeek and platforms like OpenAI, we're actually looking at a very narrow set of use cases — mainly science, coding, and some mathematical challenges. This difference is crucial to understand because it shapes what these cost savings actually mean in practice.

Let me give you a concrete example from my own experience. For research and writing tasks, DeepSeek's R1 has shown an 83% hallucination rate. That's staggering when you compare it to the established platforms that maintain hallucination rates below 10%. So yes, it's cheaper, but there's a clear quality trade-off.

Think about it like this: if you consider a language model to have different "experts" within it, OpenAI's models have hundreds of experts across various fields. Meanwhile, DeepSeek has managed to optimize for only a handful of specific domains. 

“Cost efficiency has been achieved not by generalizing but by specializing in specific domains.”

Tim Sanders
VP of Research Insights at G2
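Tim's "handful of experts" description matches what the machine learning literature calls a mixture-of-experts (MoE) architecture: a small router activates only a few specialist sub-networks per input, so most of the model's parameters sit idle on any given token, which is where the inference savings come from. Here is a minimal, purely illustrative sketch in plain Python; all names, sizes, and weights are made up for illustration and this is not DeepSeek's actual code.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # total specialist sub-networks in the model
TOP_K = 2         # experts actually activated per input (sparse routing)
DIM = 8           # toy vector size

def rand_matrix(rows, cols):
    """A random linear layer standing in for a trained expert."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(mat, vec):
    return [sum(w * x for w, x in zip(row, vec)) for row in mat]

experts = [rand_matrix(DIM, DIM) for _ in range(NUM_EXPERTS)]
router = rand_matrix(NUM_EXPERTS, DIM)   # one affinity score per expert

def moe_forward(x):
    """Route an input to its top-k experts and mix their outputs."""
    scores = matvec(router, x)            # how relevant is each expert?
    top = sorted(range(NUM_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    weights = [e / total for e in exps]   # softmax over the chosen experts
    out = [0.0] * DIM
    for w, i in zip(weights, top):        # only TOP_K of the 8 experts run
        for d, v in enumerate(matvec(experts[i], x)):
            out[d] += w * v
    return out

y = moe_forward([random.gauss(0, 1) for _ in range(DIM)])
print(len(y))  # 8
```

The key point for cost: even though the model "contains" all eight experts, each forward pass pays for only two of them, so parameter count and per-token compute are decoupled.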

The second issue is it's not enterprise-grade because it's not secure. The cost savings become almost irrelevant when you factor in security concerns.

I've had numerous conversations with chief information security officers who've made it clear they wouldn't touch the web browser version of DeepSeek due to data security concerns, particularly regarding potential exposure to the People's Republic of China. Even the free, open-source model raises red flags due to potential backdoor coding risks. So for now, DeepSeek is a tool for small businesses and entrepreneurs, because its security remains quite suspect.

Talking about your personal experience, have you used DeepSeek? How does it differ from other tools, and how do you think it will be used primarily?

Due to company policies and personal security concerns, I haven't installed the open-source version on any of my computers. However, the mobile experience did reveal something interesting. DeepSeek's human-like interaction quality is remarkable. The way it mimics human conversation patterns is quite impressive. 

Human mimicry is one of the things that these LLMs do that is really interesting, and it makes you feel like you're talking to a person. So the answer to your question is, yes, I tried the app version on my phone. No, I have not downloaded the open source. 

That being said, I have sat in on demos over the weekend with a very reputable group of academic data scientists who have done it, and that's where I found that the hallucination rate for the use cases I care about most is unacceptably high for me to actually use, even if I believed it was secure. This is why, for serious projects, like an upcoming G2 initiative where we need reliable reasoning models for buyer insights, we're sticking with enterprise-grade solutions, likely from OpenAI.

I think DeepSeek's primary use case will emerge in scenarios where cost-efficiency trumps absolute accuracy and users are comfortable with the security trade-offs.

For businesses trying to stay ahead of AI updates, what should they make of DeepSeek, and what should they watch out for as more competition enters the scene? 

Well, there are three things I want to think about here. 

Number one, let's get back to this idea of "overshoot versus undershoot." Companies should ask themselves, "Are we too expensive? Is our solution too good?" That is, are they giving users even more functionality than they want? Because if they are, they could be disrupted, just as OpenAI and NVIDIA have been disrupted by DeepSeek: by a couple of people working in an apartment, willing to give away something that's not quite as good for free. So companies should be concerned, whoever they are, that they might be an overshoot.

The second thing you can take away from this is the power of first principles. When we talk about how DeepSeek accomplished what it did, specifically making inference 90% cheaper to run, the answer is that they went back to first principles. 

In other words, they started from the beginning and said, “I don't care about the best practices in language models. Let's start over from the beginning, and let's ask ourselves if a model really needs to be overbuilt like this. It does not.”

So they were right-sizing instead of overbuilding. Then they asked whether there was a way to optimize the computation that risks a little quality but generates far more results. 

“Instead of having a singular expert work on the reasoning, they had a group of experts with different skill sets who swarmed together, optimizing the computing power. That was a revolutionary idea. ”

Tim Sanders
VP of Research Insights at G2

Those were first principles, like SpaceX. Elon Musk's company applied the same thinking: why don't we reuse boosters, the rocket stages that conventionally just fall into the ocean, instead of throwing them away? 

NASA would have told you that you can never catch and reuse them. Well, they did, and it's dramatically lowered the cost of going to space. So, first principles mean you and your team should never believe what the experts say is impossible. You should be willing to try anything. And that's the second idea. 

The final idea is to start thinking a lot more about small language models. You should think even more about owning your model and not being dependent on one of these major platform models that could change the rules for you. So, the idea is that language models could offer a good enough solution, be small, and be hosted on your laptop. That's real. I've been covering this since 2022, and I've always believed LLMs may be too good.

You've witnessed various tech transformations throughout your career. How does this current AI revolution differ from previous technological shifts? What are your predictions for the next year?

For decades, the growth of AI was stunted by its reliance on limited academic funding, which often hindered sustained innovation. Academia doesn't have the capital to pour into the innovation pipeline. That's changed in the last few years. I believe the rise of ChatGPT, and the hundreds of billions of dollars, if not trillions, that will be spent on that innovation, has created a capitalization bonanza. That means the rate of innovation is going to speed up.

The traditional Gartner Hype Cycle, which predicts a “trough of disillusionment” in technological adoption, seems less applicable in today's AI landscape. Instead, continuous improvements are the new norm, suggesting that what we perceive as cutting-edge AI today will soon become baseline technology.

“I think the game has changed, and this is the worst AI you'll ever have. It's going to get remarkably better every other month for the rest of our lives.”

Tim Sanders
VP of Research Insights at G2

I think leaders should feel a profound sense of urgency to develop theoretical and applied knowledge. When it comes to AI, you need to read about it. You need to put your hands on it. You need to test it. Don't delegate it. And whatever you do, don't wait on it as a phenomenon. So that's my biggest takeaway about what's different from then to now. 

One of the critical evolutions in AI is the separation of prediction from judgment. Now, the machine can make a laser-accurate prediction if you use the right solution, and the human beings pass the judgment to put it into production. 

I believe that over the next few years, we're going to see less and less human in the loop. Humans in the loop have been talked about for the last few years as a safety measure, a safeguard, something that will keep AI working wonderfully. I believe the human in the loop is more a problem than a solution. It's a drag, a friction, on the actual productivity of AI. 

The AI landscape is evolving as new areas of innovation emerge, such as AI orchestration and synthetic data generation. I believe these are breakout categories, set to transform industries by seamlessly integrating AI into business operations and modeling market behavior. We're going to see that in the next year at G2. There are so many moving parts in AI; being able to orchestrate all of them and align them to a company's model decisions, data architecture decisions, and business concept decisions is going to be a game changer. 

I am watching organizations like UiPath at the forefront, transitioning from robotic process automation to orchestrating AI capabilities. Keep an eye out for that one because it will be a big deal later this year. 

Lastly, keep your eye on video content. The idea of creating compelling videos with text prompts is only going to get better and better. I see a great shift happening by the end of the year, where it no longer looks creepy and weird and actually becomes a formidable competitor to shooting and editing videos to promote products.


Follow Tim Sanders on LinkedIn to keep yourself updated about what's happening in the AI space. 

If you enjoyed this insightful conversation, subscribe to G2 Tea for the latest tech and marketing thought leadership.


Edited by Supanna Das

