
Quantum Physicists Have Shrunk and “De-Censored” DeepSeek R1
Why It Matters
If scalable, the approach could lower the computational cost of large models while enabling users to bypass state‑imposed content filters, potentially reshaping the global AI information ecosystem and sparking regulatory and ethical debates over model tampering.
Summary
Quantum‑inspired AI firm Multiverse Computing announced a 55% smaller version of the Chinese‑censored large language model DeepSeek R1, dubbed DeepSeek R1 Slim, that it claims retains near‑original performance while stripping out state‑mandated censorship. The team used tensor‑network compression, a technique borrowed from quantum physics, to map and prune redundant parameters, then fine‑tuned the model to preserve output quality. In tests on 25 politically sensitive prompts, the uncensored model produced factual answers that OpenAI's GPT‑5, used as a judge, rated comparable to those of Western models, in contrast with the original model's refusals or propaganda‑laden replies. The work highlights a broader industry push to make LLMs more efficient and modifiable, and it raises questions about whether political bias embedded in AI trained under authoritarian regimes can ever be fully removed.
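The summary above does not detail Multiverse's actual compression pipeline, but the simplest building block of tensor‑network compression is a low‑rank factorization of a weight matrix. The sketch below is a hypothetical illustration only (it is not DeepSeek R1 Slim's method): it uses truncated SVD, the rank‑truncation step that underlies matrix‑product‑operator decompositions, to replace one weight matrix with two smaller factors and report the parameter savings.

```python
import numpy as np

def compress_layer(W: np.ndarray, rank: int) -> tuple[np.ndarray, np.ndarray]:
    """Factor an (m x n) weight matrix W into A (m x rank) and B (rank x n)
    via truncated SVD. This is the elementary rank-truncation step behind
    tensor-network compression; real pipelines chain many such factorizations
    and then fine-tune to recover accuracy."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Toy "weight matrix" that is genuinely low-rank, so truncation loses little.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 256))

A, B = compress_layer(W, rank=64)
original = W.size
compressed = A.size + B.size
error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {original} -> {compressed} ({compressed / original:.0%} of original)")
print(f"relative reconstruction error: {error:.2e}")
```

On this toy matrix the factorization halves the parameter count with negligible error; on real transformer weights the achievable compression depends on how close each layer is to low rank, which is why a fine‑tuning pass follows pruning.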