OpenAI rival DeepSeek limits registration after ‘large-scale malicious attacks’
Chinese artificial intelligence startup DeepSeek said it planned to limit signups after seeing malicious attacks on the services it provides.
The company — which drew headlines over the last week for offering a powerful rival to OpenAI’s ChatGPT — reported issues starting on Monday morning that caused degraded performance.
“Due to large-scale malicious attacks on DeepSeek’s services, we are temporarily limiting registrations to ensure continued service,” the company said on a status page. “Existing users can log in as usual. Thanks for your understanding and support.”
DeepSeek did not respond to requests for comment about the source of the attacks.
The company’s AI assistant overtook ChatGPT as the top app on Apple’s App Store on Monday after a flood of interest over the weekend.
The Chinese startup released a new reasoning model called R1 that is designed to rival OpenAI’s o1. But unlike o1, DeepSeek released its model as open source, allowing any developer to create products with it.
Despite emerging only over the last two years, DeepSeek has caused stock market fluctuations among the leading U.S. technology giants with its recent surge in interest. The reported costs associated with the company’s flagship tool are a fraction of what U.S. tech giants like Google and Facebook are spending on similar projects.
An AI arms race has kicked off since powerful large language models (LLMs) emerged commercially several years ago. Cybersecurity incidents at OpenAI have since drawn attention.
In October, OpenAI reported that China-based hacking groups conducted phishing attacks “against the personal and corporate email addresses of OpenAI employees.”
The threat actor, which OpenAI called SweetSpecter, began targeting OpenAI employees in May 2024.
“OpenAI’s security team contacted employees who were believed to have been targeted in this spear phishing campaign and found that existing security controls prevented the emails from ever reaching their corporate emails,” the company said.
But in July, the New York Times reported that hackers breached the internal messaging systems of OpenAI and stole details about the company’s technology in 2023.
The breach was not reported to the public or U.S. law enforcement, and an employee of the company warned that OpenAI was not doing enough to protect its technology from being stolen by governments in China, Russia and other countries.
The security of U.S.-made AI models was a key part of the Biden administration’s executive order on artificial intelligence. But President Donald Trump rescinded the order last week after taking office.
Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.