VENDORiQ Special Update: DeepSeek Offers Affordability and Advanced Reasoning but Represents a High-Risk Choice for Australian Businesses

Chinese AI startup DeepSeek's impressive, low-cost DeepSeek R1 model has drawn scrutiny over its data practices and government ties, leading to an Australian government ban on February 4, 2025.

The Latest

On January 20, 2025, Chinese AI startup DeepSeek released its AI model, DeepSeek R1, which quickly demonstrated impressive reasoning capabilities at a fraction of the cost of existing market leaders. Concerns have since emerged about running DeepSeek ‘as-a-Service’, centring on its data-handling practices and its ties to the Chinese government, prompting scrutiny from cybersecurity experts and global regulators. On February 4, 2025, the Australian government announced a ban on the use of DeepSeek in government systems.

Why It’s Important

DeepSeek R1 is a cost-efficient alternative to leading AI models, offering advanced reasoning capabilities at a fraction of the cost of competitors such as OpenAI. The model has also outperformed other generative AI models in specific problem-solving scenarios. For organisations considering AI adoption, DeepSeek’s energy-efficient design may reduce operational costs, which could be particularly attractive to budget-conscious small and medium enterprises (SMEs). By significantly decreasing the cost of large language models (LLMs), DeepSeek has the potential to democratise access to advanced AI capabilities, lowering the barrier to entry and enabling smaller businesses, startups, and underfunded organisations to leverage powerful AI tools previously dominated by well-funded tech giants. Notably, DeepSeek R1 is already available on Amazon Bedrock [1] and Microsoft’s Azure AI Foundry [2], making it easier for developers and businesses to experiment with and implement the model.

Despite its low cost and promising technical capabilities, DeepSeek R1 poses several critical risks, particularly in the areas of data security, privacy, and confidentiality. When run as a SaaS solution, the model collects user data and stores it on servers in the People’s Republic of China (PRC), subjecting it to Chinese cybersecurity and intelligence laws. These laws mandate that companies operating within China share data with state authorities upon request. This is not dissimilar to legislative measures introduced in the United States through the Patriot Act. Nonetheless, DeepSeek’s ties to Chinese authorities and its use of servers in the PRC make it a potential vector for state-sponsored data collection or cyber-espionage. To mitigate this risk and safeguard sensitive data, organisations can instead run DeepSeek locally, under its open-source licensing model, within secure, walled environments. This keeps control over their data in-house and supports compliance with local regulations.
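As an illustrative sketch only (not a vetted deployment guide), the local-deployment mitigation described above can be achieved with a self-hosted runner such as Ollama, which serves the open-weight DeepSeek R1 distilled models entirely on local hardware. The specific model tag below is an assumption and should be verified against the current Ollama model library before use.

```shell
# Hedged sketch: running an open-weight DeepSeek R1 distilled model locally
# with Ollama, so prompts and outputs never leave the local machine.
# The tag "deepseek-r1:7b" is an assumption - check the current model library.

# Pull the model weights to the local machine (one-off download)
ollama pull deepseek-r1:7b

# Query the model entirely on local infrastructure;
# no data is transmitted to DeepSeek's SaaS servers in the PRC
ollama run deepseek-r1:7b "Summarise the key risks of SaaS-hosted LLMs."
```

Organisations with stricter requirements would typically run such a host inside a network-segmented (walled) environment with outbound traffic blocked, so that even the tooling cannot phone home.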

Additionally, the model’s vulnerability to ‘jailbreaking’ techniques, such as the ‘Evil Jailbreak’, which allow malicious actors to exploit it to generate harmful outputs, raises questions about its safety. It is early days for DeepSeek; models from OpenAI and Google (Gemini) are also susceptible to jailbreaking, but they have had more time to strengthen their guardrails. The lack of transparency in DeepSeek’s data handling practices further exacerbates these concerns, as does its potential for censorship or manipulation of information to align with Chinese geopolitical narratives.

Finally, there are emerging suggestions that DeepSeek R1’s training and fine-tuning process was not as ‘budget-friendly’ as claimed. This claim is supported by the model’s rapid availability on platforms like Amazon Bedrock and Microsoft’s Azure AI Foundry, for which the optimisation and scaling work would not have been trivial in cost terms. DeepSeek’s branding as a low-cost disruptor may be contradicted if it is revealed that the model was trained on expensive infrastructure. This could lead to accusations of misleading marketing, a shift in public perception, and trust issues, especially concerning data handling and energy efficiency.

On February 4, 2025, the Australian government announced a ban on the use of DeepSeek in government systems, citing national security concerns. The decision aligns with Australia’s cautious approach towards Chinese technology and requires public servants to remove all DeepSeek products, applications, and services from federal government systems and devices. It is consistent with Australia’s previous measures against Chinese technologies, such as the ban on TikTok on government devices in 2023 and the exclusion of Huawei from its 5G network in 2018.

Australian private sector businesses looking to utilise DeepSeek, particularly those in regulated industries with data sovereignty policies or those handling confidential data, should avoid DeepSeek’s Software-as-a-Service (SaaS) offering.

Who’s Impacted

  • C-suite
  • CIO and CTO
  • CISO
  • Privacy officers

What’s Next?

  • If you are considering using DeepSeek, conduct a thorough risk assessment to determine whether the benefits outweigh the security and privacy risks. Pay particular attention to the data collection and compliance practices of DeepSeek’s SaaS platform.
  • If you are considering using DeepSeek, limit its use to non-sensitive applications or deploy it in a secure, walled environment to prevent data from being transmitted to external servers. Avoid sharing confidential or personally identifiable information with the model.
  • Stay informed about regulatory actions or updates related to the use of Chinese AI platforms. Monitor developments in DeepSeek’s security posture and data handling practices before making long-term commitments.
  1. DeepSeek-R1 models now available on AWS, AWS News Blog, 30th January 2025
  2. AWS has also begun hosting the Chinese model, TheChannelCo, 31st January 2025
