AI

Thinking Machines Lab Makes Tinker Generally Available: Adds Kimi K2 Thinking and Qwen3-VL Vision Input

MarkTechPost • December 17, 2025

Companies Mentioned

Thinking Machines
Moonshot AI
OpenAI
DeepSeek
NVIDIA (NVDA)

Why It Matters

By democratizing access to frontier LLM and vision‑language fine‑tuning without infrastructure overhead, Tinker accelerates AI development cycles and lowers entry barriers for enterprises and researchers, potentially reshaping the competitive landscape of generative AI services.

Key Takeaways

  • Tinker API now generally available, no waitlist
  • Supports the 1‑trillion‑parameter Kimi K2 Thinking model
  • Adds an OpenAI‑compatible sampling endpoint for training checkpoints
  • Enables image input via Qwen3‑VL vision models
  • Qwen3‑VL fine‑tuning outperforms DINOv2 few‑shot classification

Pulse Analysis

The AI tooling market has long been constrained by the complexity of distributed training and the cost of managing GPU clusters. Thinking Machines Lab’s decision to make Tinker generally available removes a significant barrier, offering a plug‑and‑play API that abstracts the orchestration layer. This move aligns with a broader industry shift toward SaaS‑based model fine‑tuning platforms, allowing startups and large enterprises alike to iterate faster on large language models without heavy upfront investment.

Technical depth underpins Tinker’s appeal. By integrating the 1‑trillion‑parameter Kimi K2 Thinking MoE model, the service gives developers access to state‑of‑the‑art reasoning capabilities that excel at chain‑of‑thought prompting and tool use. The OpenAI‑compatible sampling endpoint simplifies migration for teams already using OpenAI’s client libraries, while LoRA adapters keep memory footprints low, enabling repeated experiments on massive models. This combination of high‑performance models and lightweight adaptation mechanisms positions Tinker as a versatile bridge between research prototypes and production workloads.
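The practical value of the OpenAI‑compatible endpoint is that teams can reuse existing request shapes unchanged. The sketch below builds a request body in the standard OpenAI chat‑completions format; the base URL and checkpoint name are hypothetical placeholders, not documented Tinker values:

```python
import json

# Hypothetical values -- placeholders for illustration only.
TINKER_BASE_URL = "https://api.example-tinker-host.com/v1"
CHECKPOINT = "my-org/kimi-k2-thinking-lora-ckpt-001"

def build_chat_request(checkpoint: str, user_prompt: str) -> dict:
    """Build a request body in the OpenAI chat-completions shape,
    which any OpenAI-compatible sampling endpoint would accept."""
    return {
        # A fine-tuned training checkpoint stands in for a model name.
        "model": checkpoint,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request(CHECKPOINT, "Summarize LoRA in one sentence.")
body = json.dumps(payload)  # ready to POST to the sampling endpoint

# An existing OpenAI client library could be repointed by overriding
# its base URL, e.g. (not executed here):
#   client = OpenAI(base_url=TINKER_BASE_URL, api_key="...")
#   client.chat.completions.create(**payload)
```

Because only the base URL changes, migration from OpenAI's hosted API amounts to a one‑line configuration edit in most codebases.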

Perhaps the most compelling addition is multimodal support via Qwen3‑VL vision‑language models. By allowing image chunks to be interleaved with text in the same training loop, Tinker enables seamless fine‑tuning of vision‑language systems. Early benchmarks demonstrate that a Qwen3‑VL 235B model fine‑tuned on Tinker outperforms a DINOv2 baseline across datasets such as Caltech 101 and Stanford Cars, showcasing superior few‑shot learning. This performance boost signals a growing preference for large, unified models that can handle both visual and textual data, a trend that could accelerate the adoption of multimodal AI across industries.
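Interleaving image chunks with text can be pictured as a message whose content is a list of typed parts. The sketch below uses the widely adopted OpenAI‑style multimodal content format as an assumption; the article does not show Tinker's actual training‑loop API, and the image bytes here are stand‑ins:

```python
import base64

def image_part(raw_bytes: bytes, mime: str = "image/png") -> dict:
    """Encode raw image bytes as a base64 data-URL content part
    (the OpenAI-style multimodal message format -- assumed, not
    confirmed as Tinker's wire format)."""
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"type": "image_url",
            "image_url": {"url": f"data:{mime};base64,{b64}"}}

def text_part(text: str) -> dict:
    """Wrap plain text as a content part."""
    return {"type": "text", "text": text}

# One training example: image chunks interleaved with text inside a
# single user message, as described for Qwen3-VL fine-tuning.
fake_png = b"\x89PNG-stand-in-bytes"  # a real example would load a file
message = {
    "role": "user",
    "content": [
        text_part("Which car model is shown below?"),
        image_part(fake_png),
        text_part("Answer with the exact class label."),
    ],
}
```

Keeping both modalities in one message list is what lets the same training loop handle text‑only and vision‑language examples without a separate pipeline.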
