Omar Hosney

🤗 Hugging Face Hub Client Library Cheatsheet

Quickstart 🐍

The Hugging Face Hub is the central place to discover and share ML models, datasets, and Spaces.

Installation 💻

Start by installing the huggingface_hub library:

pip install --upgrade huggingface_hub

For optional features, install extra dependencies like:

pip install 'huggingface_hub[tensorflow]'

Download Files 📥

Use hf_hub_download() to download a specific file from a repo:

from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="google/pegasus-xsum", filename="config.json")

For an entire repo, use snapshot_download():

from huggingface_hub import snapshot_download
snapshot_download(repo_id="lysandre/arxiv-nlp")

Authentication 🔐

You'll need a Hugging Face account and a User Access Token for many actions.
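
A minimal sketch of logging in from Python (running huggingface-cli login in a terminal works too):

from huggingface_hub import login
login()  # prompts for your User Access Token and stores it locally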

Create a Repo โž•

Use create_repo() to create a new repository:

from huggingface_hub import HfApi
api = HfApi()
api.create_repo(repo_id="my-new-model")

Set private=True for a private repo.
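
For example, with a hypothetical repo name:

api.create_repo(repo_id="my-private-model", private=True)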

Upload Files 📤

Use upload_file() for a single file:

api.upload_file(
    path_or_fileobj="/path/to/README.md",
    path_in_repo="README.md",
    repo_id="my-username/my-repo"
)

For a folder, use upload_folder().
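
A minimal sketch (the local path and repo name are placeholders):

api.upload_folder(
    folder_path="/path/to/local/folder",
    repo_id="my-username/my-repo"
)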


Inference API 🧠

Run accelerated inference on Hugging Face servers with InferenceClient:

from huggingface_hub import InferenceClient
client = InferenceClient()

# Text to Image
image = client.text_to_image("A cat wearing a hat")
image.save("cat_with_hat.png")

# Chat Completion
messages = [{"role": "user", "content": "Translate 'Hello' to Spanish."}]
# chat_completion needs a conversational model (flan-t5 is text-to-text, not chat)
response = client.chat_completion(messages, model="HuggingFaceH4/zephyr-7b-beta")
print(response.choices[0].message.content)  # ¡Hola!

Manage Repos 📁

Use the HfApi to manage your repositories:
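
A few common operations, sketched with hypothetical repo names:

from huggingface_hub import HfApi
api = HfApi()

# List models you have pushed to the Hub
for model in api.list_models(author="my-username"):
    print(model.id)

# Check whether a repo exists, then delete it (deletion is irreversible!)
if api.repo_exists("my-username/old-repo"):
    api.delete_repo(repo_id="my-username/old-repo")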

Discussions & Pull Requests 💬

Interact with the community using HfApi:
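
A minimal sketch (repo name hypothetical):

from huggingface_hub import HfApi
api = HfApi()

# Open a new discussion on a repo
api.create_discussion(
    repo_id="my-username/my-repo",
    title="Question about training data",
    description="Which dataset was this model trained on?"
)

# Iterate over existing discussions and pull requests
for discussion in api.get_repo_discussions(repo_id="my-username/my-repo"):
    print(discussion.num, discussion.title)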


Collections 📚

Organize models, datasets, Spaces, and papers with collections.
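
A minimal sketch using create_collection() and add_collection_item():

from huggingface_hub import create_collection, add_collection_item

collection = create_collection(title="My favorite summarization models")
add_collection_item(
    collection.slug,
    item_id="google/pegasus-xsum",
    item_type="model"
)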

Cache Management 🧹

The huggingface_hub library caches downloaded files to speed up future access.
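
You can inspect the cache with scan_cache_dir():

from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
print(cache_info.size_on_disk_str)  # total cache size, e.g. "4.2G"
for repo in cache_info.repos:
    print(repo.repo_id, repo.size_on_disk_str)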

Model Cards 📝

Create informative Model Cards to describe your models.
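
A minimal sketch: load an existing card, or author one and push it (repo name hypothetical):

from huggingface_hub import ModelCard

# Load and inspect an existing model card
card = ModelCard.load("google/pegasus-xsum")
print(card.data.to_dict())  # metadata: license, tags, ...

# Author a card and push it to your own repo
card = ModelCard("---\nlicense: mit\n---\n# My Model\nA short description.")
card.push_to_hub("my-username/my-new-model")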


Login/Logout & Authentication 🔐

Environment Variables 🌳

HfApi Client 🌐


Hugging Face Hub - 🧠 Inference



Inference Services

Inference Client

The InferenceClient object connects to inference services. 🔌

Async Inference Client

An asynchronous version using asyncio and aiohttp. 💫
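
A minimal sketch of the async client (no model specified, so the task's default is used):

import asyncio
from huggingface_hub import AsyncInferenceClient

async def main():
    client = AsyncInferenceClient()
    text = await client.text_generation(
        "The capital of France is",
        max_new_tokens=10
    )
    print(text)

asyncio.run(main())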


Key Methods - More Details

Complete Example: Text-to-Image 🖼️

from huggingface_hub import InferenceClient
# Initialize the InferenceClient
client = InferenceClient(token="YOUR_HUGGING_FACE_TOKEN")

# Generate an image
image = client.text_to_image(
    prompt="A cat wearing a top hat riding a unicycle on a tightrope",
    model="stabilityai/stable-diffusion-2-1",  # Specify the text-to-image model
    height=512,  # Optional: Set image height
    width=512,   # Optional: Set image width
)

# Save the image
image.save("cat_unicycle.png")

Complete Example: Text Generation ✍️

from huggingface_hub import InferenceClient
# Initialize the InferenceClient
client = InferenceClient(token="YOUR_HUGGING_FACE_TOKEN")

# Generate text
generated_text = client.text_generation(
    prompt="Once upon a time, in a land far away, ",
    model="gpt2",  # Specify the text generation model
    max_new_tokens=50,  # Limit the length of generated text
    temperature=0.7,  # Control the randomness (higher = more random)
)

# Print the generated text
print(generated_text)

Tokenizers 🤗

🌟 Main Features

🚀 Quick Tour

🔧 Installation

🧰 Pre-Tokenizers

⚙️ Models

🔄 Post-Processors

🛠️ Normalizers

🧪 Training from Memory

📚 Components
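
The headings above mirror the tokenizers library's building blocks. As one concrete example of "Training from Memory", here is a minimal sketch that trains a BPE tokenizer from in-memory strings:

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# A BPE model with whitespace pre-tokenization
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train directly from an iterator of strings, no files needed
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]"])
tokenizer.train_from_iterator(["Hello world", "Tokenizers are fast"], trainer=trainer)

print(tokenizer.encode("Hello tokenizers").tokens)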


🤗 Transformers Cheatsheet

Overview

Tasks Supported

Framework Interoperability

Getting Started

Key Pipelines
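
As a quick illustration, a pipeline bundles a model, tokenizer, and pre/post-processing behind one call (model choice here is illustrative):

from transformers import pipeline

# Sentiment analysis with the task's default model
classifier = pipeline("sentiment-analysis")
print(classifier("I love the Hugging Face ecosystem!"))

# Or pin a specific model, e.g. for summarization
summarizer = pipeline("summarization", model="google/pegasus-xsum")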

Auto Classes

Model Training

Installation Tips

Saving and Loading Models
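
A minimal sketch of the save_pretrained()/from_pretrained() round trip (the local path is a placeholder):

from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Save to a local directory...
model.save_pretrained("./my-model")
tokenizer.save_pretrained("./my-model")

# ...and load back from it later
model = AutoModel.from_pretrained("./my-model")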