Introducing Swift-Huggingface: The Complete Swift Client for Hugging Face

Hugging Face • December 5, 2025

Companies Mentioned

Microsoft (MSFT)

Why It Matters

By solving download reliability and authentication pain points, swift‑huggingface accelerates Swift‑based AI development and enables cross‑language model sharing, strengthening the Swift ML ecosystem.

Key Takeaways

  • Swift‑huggingface replaces HubApi in swift‑transformers
  • Supports resumable, progress‑tracked model downloads
  • Shares cache with Python huggingface_hub
  • TokenProvider unifies environment, file, and Keychain tokens
  • OAuth 2.0 built‑in for user‑facing Swift apps

Pulse Analysis

Swift is rapidly emerging as a viable language for on‑device machine learning, yet developers have struggled with fragmented tooling for accessing large models. The original swift‑transformers package offered a thin wrapper around the Hugging Face Hub, but it suffered from slow, unreliable downloads and duplicated caches that forced teams to maintain separate model stores for Swift and Python. These friction points limited the appeal of Swift for production‑grade AI workloads, especially in mobile and edge scenarios where bandwidth and storage efficiency are paramount.

The swift‑huggingface client addresses these shortcomings with a ground‑up rewrite that leverages URLSession’s download tasks, file‑locking, and a unified TokenProvider pattern. Its resumable download engine tracks progress accurately and can pick up where it left off after interruptions, while the shared cache mirrors the Python huggingface_hub layout, eliminating redundant network traffic across language boundaries. Authentication is streamlined through environment variables, token files, or secure Keychain storage, and the built‑in OAuth 2.0 flow simplifies user‑sign‑in experiences for consumer apps.

For businesses, the package translates into faster time‑to‑market for AI‑enhanced Swift applications and lower operational costs due to reduced bandwidth and storage duplication. Developers can now pull models directly from the Hub, reuse existing Python‑downloaded assets, and integrate inference endpoints without custom networking code. As the Swift community adopts swift‑huggingface, we can expect broader ecosystem support, more robust on‑device AI products, and a tighter convergence between Swift and the broader machine‑learning tooling landscape.

Introducing swift‑huggingface: The Complete Swift Client for Hugging Face

Author: Mattt

Published: December 5, 2025


Today we’re announcing swift‑huggingface, a new Swift package that provides a complete client for the Hugging Face Hub. You can start using it today as a standalone package, and it will soon be integrated into swift‑transformers as a replacement for the current HubApi implementation.

The Problem

When we released swift‑transformers 1.0 earlier this year, we heard loud and clear from the community:

  • Downloads were slow and unreliable. Large model files (often several gigabytes) would fail partway through with no way to resume. Developers resorted to manually downloading models and bundling them with their apps — defeating the purpose of dynamic model loading.

  • No shared cache with the Python ecosystem. The Python transformers library stores models in ~/.cache/huggingface/hub. Swift apps downloaded to a different location with a different structure. If you’d already downloaded a model using the Python CLI, you’d download it again for your Swift app.

  • Authentication was confusing. Where should tokens come from? Environment variables? Files? Keychain? The answer was “it depends,” and the existing implementation didn’t make the options clear.

Introducing swift‑huggingface

swift‑huggingface is a ground‑up rewrite focused on reliability and developer experience. It provides:

  • Complete Hub API coverage – models, datasets, spaces, collections, discussions, and more.

  • Robust file operations – progress tracking, resume support, and proper error handling.

  • Python‑compatible cache – share downloaded models between Swift and Python clients.

  • Flexible authentication – a TokenProvider pattern that makes credential sources explicit.

  • OAuth support – first‑class support for user‑facing apps that need to authenticate users.

  • Xet storage backend support (Coming soon!) – chunk‑based deduplication for significantly faster downloads.

Let’s look at some examples.

Flexible Authentication with TokenProvider

One of the biggest improvements is how authentication works. The TokenProvider pattern makes it explicit where credentials come from:


import HuggingFace

// For development: auto‑detect from environment and standard locations
// Checks HF_TOKEN, HUGGING_FACE_HUB_TOKEN, ~/.cache/huggingface/token, etc.
let client = HubClient.default

// For CI/CD: explicit token
let client = HubClient(tokenProvider: .static("hf_xxx"))

// For production apps: read from Keychain
let client = HubClient(tokenProvider: .keychain(service: "com.myapp", account: "hf_token"))

The auto‑detection follows the same conventions as the Python huggingface_hub library:

  1. HF_TOKEN environment variable

  2. HUGGING_FACE_HUB_TOKEN environment variable

  3. HF_TOKEN_PATH environment variable (path to token file)

  4. $HF_HOME/token file

  5. ~/.cache/huggingface/token (standard HF CLI location)

  6. ~/.huggingface/token (fallback location)

If you’ve already logged in with hf auth login, swift‑huggingface will automatically find and use that token.
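
To make that order concrete, here’s a hand‑written sketch of the same resolution logic using only Foundation (macOS paths shown). It illustrates the convention; it is not the package’s actual implementation:

import Foundation

// Hand-written sketch of the documented auto-detection order.
// Illustrative only — the package's real TokenProvider may differ.
func resolveToken() -> String? {
    let env = ProcessInfo.processInfo.environment

    // 1–2. Environment variables take precedence.
    if let token = env["HF_TOKEN"] ?? env["HUGGING_FACE_HUB_TOKEN"] {
        return token
    }

    // 3–6. Token files, in the documented order.
    let home = FileManager.default.homeDirectoryForCurrentUser
    var candidates: [URL] = []
    if let path = env["HF_TOKEN_PATH"] {
        candidates.append(URL(fileURLWithPath: path))
    }
    if let hfHome = env["HF_HOME"] {
        candidates.append(URL(fileURLWithPath: hfHome).appendingPathComponent("token"))
    }
    candidates.append(home.appendingPathComponent(".cache/huggingface/token"))
    candidates.append(home.appendingPathComponent(".huggingface/token"))

    for url in candidates {
        if let token = try? String(contentsOf: url, encoding: .utf8) {
            return token.trimmingCharacters(in: .whitespacesAndNewlines)
        }
    }
    return nil
}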

OAuth for User‑Facing Apps

Building an app where users sign in with their Hugging Face account? swift‑huggingface includes a complete OAuth 2.0 implementation:


import HuggingFace

// Create authentication manager
let authManager = try HuggingFaceAuthenticationManager(
    clientID: "your_client_id",
    redirectURL: URL(string: "yourapp://oauth/callback")!,
    scope: [.openid, .profile, .email],
    keychainService: "com.yourapp.huggingface",
    keychainAccount: "user_token"
)

// Sign in user (presents system browser)
try await authManager.signIn()

// Use with Hub client
let client = HubClient(tokenProvider: .oauth(manager: authManager))

// Tokens are automatically refreshed when needed
let userInfo = try await client.whoami()
print("Signed in as: \(userInfo.name)")

The OAuth manager handles token storage in Keychain, automatic refresh, and secure sign‑out—no more manual token management.
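
For a rough picture of the session lifecycle in an app, here’s a hedged sketch. signIn() and whoami() come from the example above; signOut() is an assumed counterpart inferred from the mention of secure sign‑out, so check the package’s API for the real name:

// Session lifecycle sketch. signOut() is an assumed method name,
// inferred from the post's mention of "secure sign-out".
func refreshAccountUI() async {
    do {
        let userInfo = try await client.whoami()  // refreshes the token if needed
        print("Signed in as: \(userInfo.name)")
    } catch {
        // Not signed in (or refresh failed): start the browser-based flow.
        try? await authManager.signIn()
    }
}

func logOut() async {
    // Assumed counterpart to signIn(); clears the Keychain-stored tokens.
    try? await authManager.signOut()
}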

Reliable Downloads

Downloading large models is now straightforward with proper progress tracking and resume support:


// Download with progress tracking
let progress = Progress(totalUnitCount: 0)

Task {
    for await _ in progress.publisher(for: \.fractionCompleted).values {
        print("Download: \(Int(progress.fractionCompleted * 100))%")
    }
}

let fileURL = try await client.downloadFile(
    at: "model.safetensors",
    from: "microsoft/phi-2",
    to: destinationURL,
    progress: progress
)

If a download is interrupted, you can resume it:


// Resume from where you left off
let fileURL = try await client.resumeDownloadFile(
    resumeData: savedResumeData,
    to: destinationURL,
    progress: progress
)
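
Where savedResumeData comes from is up to your app; a common pattern is to persist resume data to disk when a transfer fails and read it back on the next launch. A minimal sketch of the restore side, using only the calls shown above:

import Foundation

// Restore-and-resume sketch: assumes your app persisted URLSession resume
// data to this path when a previous download failed (how you capture the
// data from a failed transfer is app-specific and not shown in this post).
let resumeDataURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("model.safetensors.resume")

if let savedResumeData = try? Data(contentsOf: resumeDataURL) {
    // Pick the interrupted transfer back up instead of starting over.
    let fileURL = try await client.resumeDownloadFile(
        resumeData: savedResumeData,
        to: destinationURL,
        progress: progress
    )
    // Resume data is single-use; discard it once the download completes.
    try? FileManager.default.removeItem(at: resumeDataURL)
}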

For downloading entire model repositories, downloadSnapshot handles everything:


let modelDir = try await client.downloadSnapshot(
    of: "mlx-community/Llama-3.2-1B-Instruct-4bit",
    to: cacheDirectory,
    matching: ["*.safetensors", "*.json"], // Only download what you need
    progressHandler: { progress in
        print("Downloaded \(progress.completedUnitCount) of \(progress.totalUnitCount) files")
    }
)

The snapshot function tracks metadata for each file, so subsequent calls only download files that have changed.
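
In practice that makes repeated calls cheap: running the same snapshot again should fetch only files whose content changed upstream. A sketch, assuming the progressHandler parameter is optional:

// A later identical call re-checks per-file metadata and skips
// anything unchanged, serving those files straight from disk.
let refreshedDir = try await client.downloadSnapshot(
    of: "mlx-community/Llama-3.2-1B-Instruct-4bit",
    to: cacheDirectory,
    matching: ["*.safetensors", "*.json"]
)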

Shared Cache with Python

Remember the second problem we mentioned? “No shared cache with the Python ecosystem.” That’s now solved. swift‑huggingface implements a Python‑compatible cache structure that allows seamless sharing between Swift and Python clients:


~/.cache/huggingface/hub/
├── models--deepseek-ai--DeepSeek-V3.2/
│   ├── blobs/
│   │   └── <etag>            # actual file content
│   ├── refs/
│   │   └── main              # contains commit hash
│   └── snapshots/
│       └── <commit_hash>/
│           └── config.json   # symlink → ../../blobs/<etag>

What this means

  • Download once, use everywhere. If you’ve already downloaded a model with the hf CLI or the Python library, swift‑huggingface will find it automatically.

  • Content‑addressed storage. Files are stored by their ETag in the blobs/ directory. If two revisions share the same file, it’s stored only once.

  • Symlinks for efficiency. Snapshot directories contain symlinks to blobs, minimizing disk usage while maintaining a clean file structure.
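
You can see the content‑addressing at work with plain Foundation calls. This sketch only inspects the layout and assumes the model above is already cached at the default location:

import Foundation

// Walk the snapshots and resolve each config.json symlink to its blob.
let fm = FileManager.default
let snapshotDir = fm.homeDirectoryForCurrentUser.appendingPathComponent(
    ".cache/huggingface/hub/models--deepseek-ai--DeepSeek-V3.2/snapshots"
)

for snapshot in try fm.contentsOfDirectory(atPath: snapshotDir.path) {
    let configLink = snapshotDir
        .appendingPathComponent(snapshot)
        .appendingPathComponent("config.json")
    // Each snapshot entry is a symlink into blobs/, named by ETag.
    let blob = try fm.destinationOfSymbolicLink(atPath: configLink.path)
    print("\(snapshot)/config.json → \(blob)")
}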

The cache location follows the same environment‑variable conventions as Python:

  1. HF_HUB_CACHE environment variable

  2. HF_HOME environment variable + /hub

  3. ~/.cache/huggingface/hub (default)
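
Spelled out by hand, that resolution looks like the following. This illustrates the convention, not the package’s internal code:

import Foundation

// Hand-written equivalent of the cache-location conventions above.
func resolveHubCacheDirectory() -> URL {
    let env = ProcessInfo.processInfo.environment
    if let hubCache = env["HF_HUB_CACHE"] {
        return URL(fileURLWithPath: hubCache)
    }
    if let hfHome = env["HF_HOME"] {
        return URL(fileURLWithPath: hfHome).appendingPathComponent("hub")
    }
    return FileManager.default.homeDirectoryForCurrentUser
        .appendingPathComponent(".cache/huggingface/hub")
}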

You can also use the cache directly:


let cache = HubCache.default

// Check if a file is already cached
if let cachedPath = cache.cachedFilePath(
    repo: "deepseek-ai/DeepSeek-V3.2",
    kind: .model,
    revision: "main",
    filename: "config.json"
) {
    let data = try Data(contentsOf: cachedPath)
    // Use cached file without any network request
}

To prevent race conditions when multiple processes access the same cache, swift‑huggingface uses file locking (flock(2)).
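
flock(2) is the standard BSD advisory lock, available to Swift directly through Darwin. A minimal sketch of the pattern (illustrative only, not the package’s exact code):

import Darwin
import Foundation

// Advisory-lock sketch built on flock(2), the primitive named above.
struct CacheLockError: Error {}

func withCacheLock<T>(at lockURL: URL, _ body: () throws -> T) throws -> T {
    let fd = open(lockURL.path, O_CREAT | O_RDWR, 0o644)
    guard fd >= 0 else { throw CacheLockError() }
    defer { close(fd) }

    // Block until this process holds an exclusive lock.
    guard flock(fd, LOCK_EX) == 0 else { throw CacheLockError() }
    defer { flock(fd, LOCK_UN) }  // also released automatically on close

    return try body()
}

// Usage: serialize cache writes across processes.
// try withCacheLock(at: cacheDirectory.appendingPathComponent(".lock")) {
//     // mutate the cache safely here
// }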

Before and After

Before: Using the old HubApi in swift‑transformers:


// Before: HubApi in swift‑transformers
let hub = HubApi()
let repo = Hub.Repo(id: "mlx-community/Llama-3.2-1B-Instruct-4bit")

// No progress tracking, no resume, errors swallowed
let modelDir = try await hub.snapshot(
    from: repo,
    matching: ["*.safetensors", "*.json"]
) { progress in
    // Progress object exists but wasn’t always accurate
    print(progress.fractionCompleted)
}

After: Using swift‑huggingface:


// After: swift‑huggingface
let client = HubClient.default
let modelDir = try await client.downloadSnapshot(
    of: "mlx-community/Llama-3.2-1B-Instruct-4bit",
    to: cacheDirectory,
    matching: ["*.safetensors", "*.json"],
    progressHandler: { progress in
        // Accurate progress per file
        print("\(progress.completedUnitCount)/\(progress.totalUnitCount) files")
    }
)

The API is similar, but the implementation is completely different—built on URLSession download tasks with proper delegate handling, resume‑data support, and metadata tracking.
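
Those URLSession primitives are standard Foundation API. For context, this is the underlying pattern (independent of swift‑huggingface’s own internals): download tasks stream to disk, report byte‑accurate progress through a delegate, and can emit resume data on cancellation.

import Foundation

// Delegate receives byte-level progress and the finished file location.
final class DownloadDelegate: NSObject, URLSessionDownloadDelegate {
    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didWriteData bytesWritten: Int64, totalBytesWritten: Int64,
                    totalBytesExpectedToWrite: Int64) {
        print("\(totalBytesWritten)/\(totalBytesExpectedToWrite) bytes")
    }

    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move the temporary file somewhere permanent before this returns.
    }
}

let session = URLSession(configuration: .default,
                         delegate: DownloadDelegate(), delegateQueue: nil)
let task = session.downloadTask(with: URL(string: "https://example.com/model.bin")!)
task.resume()

// Later: cancel while capturing resume data, then pick the transfer back up.
task.cancel { resumeData in
    guard let resumeData else { return }
    session.downloadTask(withResumeData: resumeData).resume()
}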

Beyond Downloads

swift‑huggingface contains a complete Hub client:


// List trending models
let models = try await client.listModels(
    filter: "library:mlx",
    sort: "trending",
    limit: 10
)

// Get model details
let model = try await client.getModel("mlx-community/Llama-3.2-1B-Instruct-4bit")
print("Downloads: \(model.downloads ?? 0)")
print("Likes: \(model.likes ?? 0)")

// Work with collections
let collections = try await client.listCollections(owner: "huggingface", sort: "trending")

// Manage discussions
let discussions = try await client.listDiscussions(kind: .model, "username/my-model")

And that’s not all—swift‑huggingface also provides full access to Hugging Face Inference Providers, giving your app instant access to hundreds of models for text generation, image classification, audio processing, and more.
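
The post doesn’t show the inference API surface, so the call below is purely illustrative; the method and parameter names are assumptions, not documented API:

// Purely illustrative — treat every name below as a placeholder
// until you check the package's actual inference API.
let reply = try await client.textGeneration(          // hypothetical method
    model: "meta-llama/Llama-3.1-8B-Instruct",
    prompt: "Write a haiku about Swift.",
    maxTokens: 64                                     // hypothetical parameter
)
print(reply)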


swift‑huggingface brings the reliability, performance, and developer ergonomics that the Swift community has been asking for. Try it out today and let us know what you build!
