TechForge

May 14, 2025


  • DeepSeek AI in Chinese hospitals faces safety warnings as researchers cite risks and errors.
  • 300+ hospitals deploy DeepSeek despite its tendency to produce plausible but incorrect medical outputs.

The integration of DeepSeek AI in Chinese hospitals now extends to more than 300 healthcare facilities. Yet a sobering voice of caution has emerged from China’s medical establishment. A research perspective published in JAMA, led by Wong Tien Yin, founding head of Tsinghua Medicine, warns that the rapid deployment of DeepSeek’s large language models in clinical settings may be “too fast, too soon.”

The numbers paint a striking picture of China’s healthcare AI transformation. DeepSeek’s deployment in tertiary hospitals represents a significant shift in how AI is used beyond diagnostic assistance, extending into hospital administration, research facilitation, and patient management.

The company’s models have demonstrated remarkable efficiency gains, including a 40-fold increase in efficiency for patient follow-ups. The widespread adoption stems from DeepSeek’s unique positioning as an open-source, low-cost alternative to proprietary AI systems.

The LLMs DeepSeek-V3 and DeepSeek-R1, developed by a subsidiary of a Chinese investment company, have the distinct advantages of being low-cost and open source, which have substantially lowered the barriers to LLM adoption.

Healthcare companies in China have moved quickly: more than 30 mainland-based firms, including Hengrui Pharmaceuticals Co. Ltd. and Yunnan Baiyao Group Co. Ltd., have integrated the models into their operations.

Berry Genomics Co. Ltd. saw its shares jump over 71% following its adoption of multiple open-source AI models to enhance operational efficiency and reduce costs.

The warning signs: Clinical safety under scrutiny

Despite the enthusiasm surrounding DeepSeek AI, the JAMA research perspective raises significant red flags. Wong Tien Yin, an ophthalmology professor and former medical director at the Singapore National Eye Centre, along with his co-authors, identified several critical concerns.

The researchers warned that DeepSeek’s tendency to generate “plausible but factually incorrect outputs” could lead to “substantial clinical risk,” despite strong reasoning capabilities. The phenomenon, known as AI hallucination, poses particular dangers in medical settings where accuracy can be a matter of life and death.

The research team highlighted how healthcare professionals could become over-reliant on or uncritical of DeepSeek’s output, potentially resulting in diagnostic errors or treatment biases, while more cautious clinicians could face the burden of verifying AI output in time-sensitive clinical settings.

Infrastructure challenges and security vulnerabilities

Beyond clinical accuracy concerns, the rapid deployment of DeepSeek AI in Chinese hospitals has exposed significant cybersecurity vulnerabilities. While many hospitals opt for private, on-site deployment to mitigate security and privacy risks, this approach “shifts security responsibilities to individual healthcare facilities,” many of which lack a comprehensive cybersecurity infrastructure, the research claims.

Recent cybersecurity research has amplified concerns. Research indicates that DeepSeek is 11 times more likely to be exploited by cybercriminals than other AI models, highlighting a critical vulnerability in its design. A Cisco study found that DeepSeek failed to block harmful prompts in security assessments, including prompts related to cybercrime and misinformation.

The open-source nature of DeepSeek, while promoting accessibility, also creates unique security challenges. DeepSeek’s open-source structure means that anyone can download and modify the application, allowing users to alter not only its functionalities but also its safety mechanisms, creating a far greater risk of exploitation.

Real-world impact: Stories from the clinical frontlines

The integration of DeepSeek AI in Chinese hospitals has already begun changing the dynamics of doctor-patient relationships. A viral video on Douyin showed a frustrated doctor whose treatment was questioned by a patient using DeepSeek, only to discover that the medical guidelines had indeed been updated, and the AI was correct.

The anecdote illustrates both the potential and the peril of AI adoption in healthcare. While the technology can help keep medical practices current, it also challenges traditional medical hierarchies and introduces new sources of uncertainty in clinical decision-making.

The “perfect storm” for safety concerns

The researchers identified China’s unique healthcare landscape as creating a “perfect storm” for clinical safety concerns, citing the combination of disparities in primary care infrastructure and high smartphone penetration. They noted that “underserved populations with complex medical needs now have unprecedented access to AI-driven health recommendations, but often lack the clinical oversight needed for safe implementation.”

The democratisation of medical AI access, while potentially beneficial for healthcare equity, raises questions about the quality and safety of care in resource-limited settings where proper oversight may be lacking.

Geopolitical implications and data privacy

The rapid adoption of DeepSeek AI in Chinese hospitals has not gone unnoticed internationally. Several countries have taken precautionary measures, with Italy, Taiwan, Australia, and South Korea blocking or banning access to the app on government devices due to national security concerns regarding the app’s data management practices.

Privacy experts have raised concerns about data collection and storage, warning that the Chinese chatbot could present a national security risk: “that data, in aggregate, can be used to glean insights into a population, or user behaviours that could be used to create more effective phishing attacks, or other nefarious manipulation campaigns.”

The regulatory gap

Despite the widespread adoption, China’s regulatory framework has struggled to keep pace with the rapid deployment of AI in healthcare. Current regulatory interpretations allow AI to augment but not replace human diagnostic judgement, indicating a continued need for careful integration into medical services.

Notably, no medical AI products have been integrated into China’s national basic health insurance, suggesting continued scepticism about the technology’s reliability. That said, the story of DeepSeek AI in Chinese hospitals represents a microcosm of the broader challenges facing AI adoption in critical sectors worldwide.

While the technology offers significant potential for improving healthcare delivery and reducing costs, the warnings from medical researchers underscore the need for careful, measured implementation.

Recent studies highlight the relative accuracy of DeepSeek’s models in specific metrics, like the Deauville score for lymphoma patients, but still acknowledge a considerable gap compared to human clinicians. The accuracy gap, combined with the security vulnerabilities and regulatory challenges, suggests that the current pace of adoption may indeed be “too fast, too soon.”

Conclusion: A critical juncture

As China continues its push toward “smart hospitals” and AI-driven healthcare transformation, the integration of DeepSeek AI in Chinese hospitals serves as both a testament to technological innovation and a cautionary tale about the risks of rapid deployment. The concerns raised by Wong Tien Yin and his colleagues at Tsinghua Medicine represent not opposition to progress, but a call for responsible innovation that prioritises patient safety alongside technological advancement.

The challenge moving forward will be finding the right balance between harnessing the undeniable benefits of AI in healthcare while implementing robust safeguards to protect patients from the risks of premature or inadequately supervised AI deployment.

The ongoing debate surrounding DeepSeek AI in Chinese hospitals ultimately reflects a fundamental question facing the global healthcare community: How fast is too fast when it comes to integrating powerful AI systems into life-critical medical applications? The answer to this question will shape the future of digital health, not just in China, but worldwide.

About the Author

Dashveenjit Kaur

Dashveen writes for Tech Wire Asia and TechHQ, providing research-based commentary on the exciting world of technology in business. Previously, she reported on the ground of Malaysia’s fast-paced political arena and stock market.
