AI

Building for an Open Future - Our New Partnership with Google Cloud

Hugging Face • November 13, 2025
Companies Mentioned: Google Cloud, Google (GOOG), Mandiant, NVIDIA (NVDA)
Why It Matters

The collaboration brings cost‑effective TPU support and enhanced model protection, reflecting both companies' vision of an open, secure AI future for all builders.

By Jeff Boudier and Simon Pagezy

Image 3: Three small smiling yellow cubes with stubby legs and arms running quickly down an asphalt road with yellow lines, accompanied in the sky by a smiling yellow ball with arms and the Google Cloud logo

Today, we are happy to announce a new and deeper partnership with Google Cloud to enable companies to build their own AI with open models.

“Google has made some of the most impactful contributions to open AI, from the OG transformer to the Gemma models. I believe in a future where all companies will build and customize their own AI. With this new strategic partnership, we’re making that easy to do on Google Cloud,” says Jeff Boudier of Hugging Face.

“Hugging Face has been the driving force enabling companies large and small all over the world to access, use, and customize what are now more than 2 million open models, and we’ve been proud to contribute over 1,000 of our models to the community,” says Ryan J. Salva, Senior Director of Product Management at Google Cloud. “Together we will make Google Cloud the best place to build with open models.”

A Partnership for Google Cloud customers


Google Cloud customers use open models from Hugging Face across many of its leading AI services. In Vertex AI, the most popular open models are ready to deploy in a couple of clicks from Model Garden. Customers who want greater control over their AI infrastructure can find a similar model library in GKE AI/ML, or use pre-configured environments maintained by Hugging Face. Customers also run AI inference workloads on Cloud Run GPUs, enabling serverless open-model deployments.

The common thread: we work with Google Cloud to build seamless experiences that fully leverage the unique capabilities of each service, offering customers choice.

Image 4: The image depicts a diagram showing Vertex AI, Google Kubernetes Engine, and Cloud Run, all connected to a yellow cartoon bird in a circle, with a red-blue-green-yellow color bar at the top

The Gateway to Open Models - A Fast Lane for Google Cloud Customers


Usage of Hugging Face by Google Cloud customers has grown 10x over the last 3 years; today, that translates into tens of petabytes of model downloads every month, across billions of requests.
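As a rough sanity check on that scale — a back-of-the-envelope calculation with the quoted figures rounded to placeholder values (10 PB and 1 billion; the exact numbers are not given in the post) — the average payload works out to about 10 MB per request, roughly the size of a sharded model file:

```python
# Back-of-the-envelope average payload per request, using rounded
# placeholder values for the figures quoted above (not exact numbers).
petabytes_per_month = 10                # "tens of petabytes"
requests_per_month = 1_000_000_000      # "billions of requests"

total_bytes = petabytes_per_month * 10**15
avg_mb_per_request = total_bytes / requests_per_month / 10**6
print(f"~{avg_mb_per_request:.0f} MB per request")  # ~10 MB
```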

To make sure Google Cloud customers have the best experience building with models and datasets from Hugging Face, we are working together to create a CDN Gateway for Hugging Face repositories, built on top of Hugging Face’s Xet-optimized storage and data-transfer technology and Google Cloud’s advanced storage and networking capabilities.

This CDN Gateway will cache Hugging Face models and datasets directly on Google Cloud, significantly reducing download times and strengthening model supply-chain robustness for Google Cloud customers. Whether you’re using Vertex, GKE, Cloud Run, or building your own stack on Compute Engine VMs, you will benefit from faster time-to-first-token and simplified model governance.
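A gateway like this typically behaves as a content-addressed, read-through cache. The sketch below is purely illustrative — the class and method names are invented, not the actual gateway API — but it shows the core idea: files are keyed by content digest, served from the regional cache on a hit, and fetched from the upstream hub (then integrity-checked and cached) on a miss.

```python
import hashlib

class RegionalModelCache:
    """Toy read-through cache keyed by SHA-256 content digest.
    Illustrative only; not the real CDN Gateway interface."""

    def __init__(self, fetch_upstream):
        self._blobs = {}                  # digest -> cached bytes
        self._fetch_upstream = fetch_upstream
        self.hits = 0
        self.misses = 0

    def get(self, digest: str) -> bytes:
        blob = self._blobs.get(digest)
        if blob is None:
            self.misses += 1
            blob = self._fetch_upstream(digest)           # pull from the hub
            if hashlib.sha256(blob).hexdigest() != digest:
                raise ValueError("integrity check failed")  # supply-chain guard
            self._blobs[digest] = blob                    # populate regional cache
        else:
            self.hits += 1
        return blob
```

A second request for the same digest is then served regionally without touching the upstream hub — which is where the download-time and supply-chain benefits come from.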

A partnership for Hugging Face customers


Hugging Face Inference Endpoints is the easiest way to go from model to deployment in just a couple of clicks. Through this deepened partnership, we will bring the unique capabilities and cost performance of Google Cloud to Hugging Face customers, starting with Inference Endpoints. Expect more and newer instances, as well as price drops!

Image 5: Create Endpoint screen with hardware configuration options such as CPU, GPU, and NVIDIA A100

We will ensure all the fruits of our product and engineering collaboration become easily available to the 10 million AI builders on Hugging Face. Going from a model page to a deployment on Vertex Model Garden or GKE should take only a couple of steps. Deploying a private model securely hosted in an Enterprise organization on Hugging Face should be as easy as working with a public model.

TPUs, Google’s custom AI accelerator chips, now in their seventh generation, have been steadily improving in performance and software-stack maturity. We want to make sure Hugging Face users can fully benefit from the current and next generations of TPUs when they build AI with open models. We are excited to make TPUs as easy to use as GPUs for Hugging Face models, thanks to native support in our libraries.
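In practice, "as easy as GPUs" usually means code that detects whatever accelerator is present and falls back gracefully. A minimal, hypothetical sketch of that pattern (probing for the `torch_xla` TPU runtime and for CUDA, both treated as optional dependencies here; this is not the libraries' actual detection logic):

```python
def detect_accelerator() -> str:
    """Return the best available accelerator kind: 'tpu', 'cuda', or 'cpu'."""
    try:
        import torch_xla  # noqa: F401 -- present only on TPU-enabled hosts
        return "tpu"
    except ImportError:
        pass
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"  # always-safe fallback
```

The same user code can then target a TPU VM, a GPU instance, or a plain CPU box without branching at every call site.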

Additionally, this new partnership will enable Hugging Face to leverage Google’s industry-leading security technology to make the millions of open models on Hugging Face more secure. Powered by Google Threat Intelligence and Mandiant, this joint effort aims to secure the models, datasets, and Spaces you use on the Hugging Face Hub every day.
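One concrete class of checks such scanning can include — a simplified illustration of the idea, not how the Hub's scanners are actually implemented — is flagging pickle opcodes that can import and invoke arbitrary callables when a serialized model is loaded:

```python
import pickletools

# Opcodes that let a pickle resolve callables and construct/invoke them
# on load -- the mechanism behind pickle-based code execution.
SUSPICIOUS_OPCODES = {
    "GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX",
}

def suspicious_opcodes(payload: bytes) -> set:
    """Return the set of risky opcode names found in a pickle payload."""
    found = set()
    for opcode, _arg, _pos in pickletools.genops(payload):
        if opcode.name in SUSPICIOUS_OPCODES:
            found.add(opcode.name)
    return found
```

A plain list of numbers pickles with only data opcodes, while an object that pickles by reconstructing its class (e.g. a `complex` number) surfaces `STACK_GLOBAL` — the kind of signal a scanner can escalate for review.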

Building the open future of AI together


We want to see a future where every company can build its own AI with open models and host it within its own secure infrastructure, with full control. We are excited to make this future happen with Google Cloud. Our deep collaboration will accelerate this vision, whether you are using Vertex AI Model Garden, Google Kubernetes Engine, Cloud Run, or Hugging Face Inference Endpoints.

Is there something you want us to create or improve thanks to our partnership with Google? Let us know in the comments!

Image 6: McDonald's sign reads Over 1 Billion Models Served with emoji face on a large yellow sign
