Microsoft Integrates DeepSeek AI Model into Cloud Platform

By Huma Ishfaq ⏐ 3 weeks ago ⏐ 2 min read

Despite facing accusations of IP theft and potential terms of service violations from its close partner OpenAI, DeepSeek continues to capture Microsoft’s attention, with the tech giant eager to integrate its innovative new models into its cloud platform.

Microsoft announced yesterday that DeepSeek's reasoning model, R1, is now available on Azure AI Foundry, its AI services platform for enterprises. The version of R1 on Azure AI Foundry has "undergone rigorous red teaming and safety evaluations," according to a Microsoft blog post. These evaluations include "automated assessments of model behavior and extensive security reviews to mitigate potential risks."

According to Microsoft, "distilled" versions of R1 will soon be available for local use on Copilot+ PCs, its brand of Windows hardware that meets specific AI-readiness criteria.

“As we continue expanding the model repertoire in Azure AI Foundry, we’re excited to see how developers and enterprises leverage […] R1 to tackle real-world challenges and deliver transformative experiences,” the company said in a blog post.

The addition of R1 to Microsoft's cloud services is all the more puzzling given that Microsoft has reportedly opened an investigation into DeepSeek's possible misuse of its and OpenAI's services. According to Bloomberg, Microsoft security researchers found evidence that DeepSeek may have exfiltrated sensitive data in the autumn of 2024 through OpenAI's application programming interface (API), and Microsoft, OpenAI's most prominent stakeholder, alerted the company to the suspicious activity.

R1 is surging in popularity right now, and Microsoft may have decided to add it to its cloud services while interest remains high.

It's unclear whether Microsoft modified the model to improve its accuracy or to address issues with its content filtering. In a test by NewsGuard, an organization that assesses information reliability, R1 gave inaccurate or non-answers 83% of the time when asked about news topics. In a separate test, R1 declined to answer 85% of questions related to China, which may reflect the government censorship applied to AI models developed in the country.
