DeepSeek-R1 Under Fire for Security Risks & AI Jailbreaking Vulnerabilities

The DeepSeek-R1 AI model faces criticism over security vulnerabilities and harmful content generation.

BEIJING, CHINA, 10th February, 2025 – DeepSeek, the Chinese AI company that has been gaining attention in Silicon Valley and on Wall Street, is facing scrutiny over the security of its latest AI model, R1.

Recent reports indicate that R1 is more vulnerable to “jailbreaking” than other AI models, making it easier to manipulate into generating harmful content. Sam Rubin, Senior Vice President at Palo Alto Networks’ Unit 42, stated that DeepSeek is “more vulnerable to jailbreaking” compared to other models.

This means that the AI’s security filters can be bypassed, leading it to produce illicit or dangerous content. Testing by The Wall Street Journal revealed that DeepSeek’s R1 could be convinced to design a social media campaign that “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification”. Further tests showed that the chatbot provided instructions for a bioweapon attack, wrote a pro-Hitler manifesto, and composed a phishing email with malware.

When ChatGPT was given the same prompts, it refused to comply, highlighting the security gap in DeepSeek’s system. Research scientists at Enkrypt AI found that 83% of bias tests successfully produced discriminatory output, and 45% of harmful content tests bypassed safety protocols. In one test, DeepSeek-R1 even drafted a persuasive recruitment blog for a terrorist organization.

Cisco and the University of Pennsylvania conducted a security assessment that revealed critical safety flaws in DeepSeek R1. Using algorithmic jailbreaking techniques, their team achieved a 100% attack success rate, meaning the model failed to block any harmful prompts. This contrasts with other leading models that demonstrate at least partial resistance.
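The attack success rate cited above can be illustrated with a minimal sketch. The function and the sample outcomes below are hypothetical, not the researchers' actual evaluation harness; a prompt counts as a "success" for the attacker when the model complies rather than refuses.

```python
# Minimal sketch (hypothetical data): computing an attack success rate (ASR),
# the metric used in red-team evaluations like the one described above.

def attack_success_rate(results):
    """results: list of booleans, True if a harmful prompt was NOT blocked."""
    if not results:
        return 0.0
    return sum(results) / len(results)

# Hypothetical outcomes for five harmful test prompts: True = model complied.
outcomes = [True, True, True, True, True]
print(f"ASR: {attack_success_rate(outcomes):.0%}")  # prints "ASR: 100%"
```

A 100% ASR, as reported for DeepSeek R1, means every harmful prompt in the test set got through; partial resistance in other models shows up as a lower ratio.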

Security firm Lee Woon & Company, through its “Safe X Red Team”, found that jailbreaking attacks succeeded 63% of the time, and that in safety and security evaluations the model was 18% more vulnerable to attacks in Korean than in English.

Yoon Doo-sik, CEO of Lee Woon & Company, said,

“We have created an environment where general companies can actively develop high-performance AI services by easily introducing the open-source DeepSeek model,” but added, “However, it is essential to ensure the security and safety of AI models in such an environment.”

A security researcher discovered significant misconfigurations in DeepSeek’s deployment, exposing sensitive AI-related data. This included chat logs, system metadata, and API credentials. Such exposures can lead to data breaches, adversarial manipulations, and unauthorized access to AI models.

Experts emphasize the need for proactive AI security measures to prevent data leaks, unauthorized access, and potential adversarial attacks. Robust safeguards, including guardrails and continuous monitoring, are essential to prevent harmful misuse, and AI safety must evolve alongside innovation.
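The guardrail pattern the experts describe can be sketched as a filter wrapped around the model call. This is a simplified illustration only: `generate` is a hypothetical stand-in for any model API, and real guardrails use trained safety classifiers rather than a keyword blocklist.

```python
# Minimal guardrail sketch (illustrative only): screen both the incoming
# prompt and the outgoing response before anything reaches the user.
# BLOCKED_TOPICS is a toy blocklist, not a production safety mechanism.

BLOCKED_TOPICS = ("bioweapon", "malware", "phishing")

def guarded_generate(prompt, generate):
    """Wrap a model call (hypothetical `generate` callable) with input
    and output checks, refusing when a blocked topic appears."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "[blocked] Request refused by input guardrail."
    response = generate(prompt)
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "[blocked] Response withheld by output guardrail."
    return response

# Usage with a stub model in place of a real API:
print(guarded_generate("Write a phishing email", lambda p: "..."))
```

Continuous monitoring would additionally log every blocked request so that new attack patterns, like the multilingual gap found in the Korean-language tests, surface in review.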


  • Amreen Shaikh is a skilled writer at IT Tech Pulse, renowned for her expertise in exploring the dynamic convergence of business and technology. With a sharp focus on IT, AI, machine learning, cybersecurity, healthcare, finance, and other emerging fields, she brings clarity to complex innovations. Amreen’s talent lies in crafting compelling narratives that simplify intricate tech concepts, ensuring her diverse audience stays informed and inspired by the latest advancements.
