DeepSeek and Cybersecurity: What Tech Pros Should Know

Depending on where you sat on Jan. 27, the release of DeepSeek was either the next big step in artificial intelligence (AI) development or a harsh reality check for some of the biggest names in tech.

On that Monday, Chinese startup DeepSeek officially released its new model (dubbed R1), designed to rival OpenAI’s o1 and ultimately compete with ChatGPT and other generative AI chatbots from Meta, Google, and Microsoft. Unlike many of its competitors, DeepSeek is built on open-source code and costs a fraction of what U.S. firms spend to train their large language models (LLMs), according to multiple reports and the company’s announcement.

The news sent high-tech stocks like Nvidia and Broadcom sinking (the market would bounce back a few days later) and left the backers of more expensive, U.S.-based AI technologies scrambling to explain how a relatively unknown startup could build a platform at a fraction of the cost of competing products and then open-source the code for others to use.

At the same time, some businesses weighing investments in generative AI models see DeepSeek as an attractive alternative: a way to save money while gaining the same benefits offered by more expensive technology, according to the Wall Street Journal.

Following its release, DeepSeek climbed past ChatGPT to the top of Apple’s App Store charts, creating even more interest.

“DeepSeek's achievement in AI efficiency—leveraging a clever reinforcement learning-based multi-stage training approach, rather than the current trend of using larger datasets for bigger models—signals a future where AI is accessible beyond the billionaire classes,” said Andrew Bolster, senior R&D manager at security firm Black Duck. “Open-source AI, with its transparency and collective development, often outpaces closed source alternatives in terms of adaptability and trust. As more organizations recognize these benefits, we could indeed see a significant shift towards open-source AI, driving a new era of technological advancement.”

Amidst the arguments over DeepSeek, there’s another issue surrounding the release that is gaining traction: AI security.

Shortly after the announcement, DeepSeek was forced to limit signups for its services following malicious activity. The company is implementing fixes and continues to monitor activity on its sites.

Since the release of ChatGPT in November 2022, cybersecurity experts have raised concerns about AI, including attacks that target the technology itself as well as threat groups using chatbots and other AI services for malicious purposes. Threat actors have targeted OpenAI itself, and the Journal recently detailed how Iranian and Chinese groups are using Google Gemini as a virtual assistant to speed up operational processes.

“Given the rapid deployment of [the DeepSeek] platform, there's a real possibility that opportunistic cybercriminals identified and exploited potential vulnerabilities that more established platforms have had time to address,” Toby Lewis, global head of threat analysis at Darktrace, told Dice. “This incident serves as another reminder that security cannot be an afterthought—it must be woven into the very foundations of these systems from the outset. As AI platforms continue to scale rapidly and handle increasingly sensitive data, robust security frameworks aren't just nice to have features, they're essential.”

As the AI race heats up and other competitive models and services enter the market, keeping these technologies secure is a major consideration for technology and cybersecurity professionals on the front lines. Experts and industry insiders believe that issues with DeepSeek and other platforms are signs of trouble to come.

AI Platforms Are Vulnerable to Multiple Attacks

While flaws within AI systems have been known for several years, the security incident involving DeepSeek hammered home these realities.

After its debut and quick ascent up the App Store, DeepSeek began limiting registration access due to what the company called “large-scale malicious attacks” targeting the platform. The company didn’t elaborate, but it appears the incident might have been a distributed denial-of-service (DDoS) attack, compounded by the surge of attention the announcement generated. “The reported cyberattack on DeepSeek likely falls into one of several scenarios, with the most probable being simply a victim of their own success—what we in tech circles call the 'Slashdot effect,' where their infrastructure buckled under unexpected user demand following their viral moment on the App Store,” Lewis added.

Eric Schwake, director of cybersecurity strategy at Salt Security, noted that while DeepSeek did not release many details, the company appears to have vulnerabilities within its APIs that attackers exploited.

“Enterprises contemplating integrating AI models, particularly from fledgling startups, must prioritize API security,” Schwake told Dice. “This involves performing comprehensive security evaluations, establishing robust authentication and authorization protocols and maintaining ongoing vigilance for possible vulnerabilities.”
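As an illustration of Schwake’s point about robust authentication protocols, here is a minimal sketch of API-key authorization in Python; the key value, the digest store, and the function names are hypothetical, not taken from DeepSeek or any real platform.

```python
import hashlib
import hmac

def hash_key(api_key: str) -> str:
    """Store only a digest of each issued key, never the raw value."""
    return hashlib.sha256(api_key.encode()).hexdigest()

# Hypothetical store of issued-key digests (in practice, a database).
VALID_KEY_HASHES = {hash_key("demo-key-123")}

def authorize_request(presented_key: str) -> bool:
    """Authenticate a request by comparing key digests in constant time."""
    if not presented_key:
        return False
    digest = hash_key(presented_key)
    # hmac.compare_digest avoids timing side channels on the comparison.
    return any(hmac.compare_digest(digest, h) for h in VALID_KEY_HASHES)
```

Storing digests rather than raw keys means a leaked credential store does not directly hand attackers working keys, which is one piece of the “comprehensive security evaluation” Schwake describes.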

In the days that followed the DeepSeek reveal, other researchers began finding additional flaws. Cybersecurity firm Wiz, for example, uncovered a publicly accessible ClickHouse database belonging to DeepSeek that, if exploited, would have allowed full control of database operations, and that exposed information such as chat history and secret keys.
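The Wiz finding boils down to a database answering queries over HTTP with no credentials. A minimal sketch of how a defender might check one of their own hosts for that misconfiguration (ClickHouse’s HTTP interface listens on port 8123 by default; the hostname in the usage below is illustrative):

```python
from urllib import parse, request

def probe_url(host: str, port: int = 8123, query: str = "SHOW TABLES") -> str:
    """Build an unauthenticated query URL for ClickHouse's HTTP interface."""
    return f"http://{host}:{port}/?{parse.urlencode({'query': query})}"

def is_exposed(host: str, port: int = 8123, timeout: float = 3.0) -> bool:
    """Return True if the server runs our query without any credentials."""
    try:
        with request.urlopen(probe_url(host, port), timeout=timeout) as resp:
            return resp.status == 200  # query executed with no auth at all
    except OSError:
        return False  # connection refused, filtered, timed out, or auth required
```

Running `is_exposed("db.example.com")` against your own infrastructure flags the exact failure mode Wiz reported: a server that happily executes queries for anonymous callers.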

The DDoS attack and other vulnerabilities point to mounting security concerns about how some AI platforms are built, and underscore why tech and security pros need to watch for attackers looking to take advantage of these weaknesses, said Stephen Kowski, field CTO of SlashNext.

“While DDoS attacks are an obvious concern, the more insidious threats likely involve probing URL parameters, API endpoints and input validation mechanisms to manipulate or compromise the AI model's responses potentially,” Kowski told Dice. “The motivations span from competitive intelligence gathering to potentially using the infrastructure as a launchpad for broader attacks, especially given the open-source nature of the technology. The high-profile success and advanced AI capabilities make DeepSeek an attractive target for opportunistic attackers and those seeking to understand or exploit AI system vulnerabilities.”
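The probing Kowski describes targets exactly the parameters a service forgets to constrain. As a sketch, here is a hypothetical allow-list validation for an inference endpoint’s inputs; the model identifiers, length limit, and parameter ranges are invented for illustration, not drawn from any real API.

```python
import re

ALLOWED_MODELS = {"r1", "r1-distill"}   # hypothetical model identifiers
MAX_PROMPT_CHARS = 4000                 # illustrative limit

def validate_params(model: str, prompt: str, temperature: str) -> list:
    """Reject anything outside the allow-list before it reaches the model."""
    errors = []
    if model not in ALLOWED_MODELS:
        errors.append("unknown model")
    if not (0 < len(prompt) <= MAX_PROMPT_CHARS):
        errors.append("prompt length out of range")
    # Parse the URL-supplied string strictly, then range-check it.
    if (not re.fullmatch(r"\d+(\.\d+)?", temperature)
            or not (0.0 <= float(temperature) <= 2.0)):
        errors.append("temperature must be between 0.0 and 2.0")
    return errors
```

Validating strictly at the edge, rather than trusting whatever arrives in URL parameters, closes off the manipulation paths Kowski warns about before they can influence model behavior.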

Data Security Concerns

While AI platforms have vulnerabilities and flaws, there are also security and privacy concerns about the data that end users enter into their prompts. Questions remain around DeepSeek’s backers, and it’s not clear whether data entered into the platform is stored in China, where government officials and agencies could access it.

“DeepSeek’s AI tools apply the same rules of risk to sensitive corporate information. Organizations must now urgently audit and track their AI assets to prevent potential data exposure to China,” Gal Ringel, co-founder and CEO at security firm Mine, told Dice. “This isn't just about knowing what AI tools are being used; it's about understanding where company data flows and ensuring robust safeguards are in place so it doesn’t inadvertently end up in the wrong hands. The parallels to TikTok are striking, but the stakes may be even higher when considering the potential exposure of business data ending up in adversarial hands.”

It’s also unclear whether the U.S. government is preparing to address these concerns or how federal agencies should respond.

Under former President Joe Biden, the White House issued an executive order to force U.S. companies to build secure and safe AI models, share information and improve overall cybersecurity, but President Donald Trump rescinded that order, calling for greater investment and innovation.

Now is the time to understand companies like DeepSeek and the specifics of how their models work. “Users, such as citizens, and enterprises, whether public or private sector, should reflect on both what they submit to a service, as well as their ability to effectively manage the worldview and perspective of responses provided,” Trey Ford, CISO at Bugcrowd, told Dice. “The clear involvement of nation-state backed software and service offerings like these are worthy of reflection before use.”