Transformers.js

Run 🤗 Transformers in your browser!

Notes:
  • Running a pipeline for the first time will download the corresponding model from the HuggingFace Hub. All subsequent requests will use the cached model.
  • For more information about the different generation parameters, check out HuggingFace's guide to text generation.

Getting Started

Installation
If you use npm, you can install it using:
npm i @xenova/transformers
Alternatively, you can use it in a <script> tag from a CDN, for example:
<!-- Using jsDelivr -->
<script src="https://cdn.jsdelivr.net/npm/@xenova/transformers/dist/transformers.min.js"></script>

<!-- or UNPKG -->
<script src="https://www.unpkg.com/@xenova/transformers/dist/transformers.min.js"></script>
Basic example
It's super easy to translate from existing code!
Python (original):

from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')

out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]

JavaScript (ours):

import { pipeline } from "@xenova/transformers";

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');

let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]


As with the Python library, you can use a different model by passing its name as the second argument to the pipeline function. For example:
// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'nlptown/bert-base-multilingual-uncased-sentiment');
Custom setup
By default, Transformers.js uses hosted models and precompiled WASM binaries, which should work out-of-the-box. You can override this behaviour as follows:
import { env } from "@xenova/transformers";

// Use a different host for models.
// - `remoteURL` defaults to use the HuggingFace Hub
// - `localURL` defaults to '/models/onnx/quantized/'
env.remoteURL = 'https://www.example.com/';
env.localURL = '/path/to/models/';

// Set whether to use remote or local models. Defaults to true.
//  - If true, use the path specified by `env.remoteURL`.
//  - If false, use the path specified by `env.localURL`.
env.remoteModels = false;

// Set parent path of .wasm files. Defaults to use a CDN.
env.onnx.wasm.wasmPaths = '/path/to/files/';
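To make the interplay of these settings concrete, here is an illustrative sketch of how a model location could be derived from them. The `resolveModelPath` helper is hypothetical (it is not part of the library's API); it only mirrors the behaviour described in the comments above.

```javascript
// Illustrative only: a hypothetical helper mirroring how the `env`
// settings above select between remote and local model hosting.
// The library's actual internals may differ.
function resolveModelPath(env, modelName) {
  // When remote models are enabled, fetch from the configured remote host...
  if (env.remoteModels) {
    return env.remoteURL + modelName;
  }
  // ...otherwise load from the local path.
  return env.localURL + modelName;
}

// With the settings shown above, models resolve to the local path:
const env = {
  remoteURL: 'https://www.example.com/',
  localURL: '/path/to/models/',
  remoteModels: false,
};
console.log(resolveModelPath(env, 'distilgpt2')); // '/path/to/models/distilgpt2'
```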

Usage

We currently support the following tasks and models, which can be used with the pipeline function.

  1. sentiment-analysis (a.k.a. text-classification)
    Supported models: distilbert-base-uncased-finetuned-sst-2-english, nlptown/bert-base-multilingual-uncased-sentiment, distilgpt2. For more information, check out the Text Classification docs.
  2. question-answering
    Supported models: distilbert-base-cased-distilled-squad, distilbert-base-uncased-distilled-squad. For more information, check out the Question Answering docs.
  3. fill-mask
    Supported models: xlm-roberta-base, albert-large-v2, albert-base-v2, distilroberta-base, roberta-base, bert-base-cased, bert-base-uncased, bert-base-multilingual-uncased, bert-base-multilingual-cased, distilbert-base-cased, distilbert-base-uncased. For more information, check out the Language Modelling docs.
  4. summarization
    Supported models: t5-small, t5-base, t5-v1_1-small, t5-v1_1-base, facebook/bart-large-cnn, sshleifer/distilbart-cnn-6-6, sshleifer/distilbart-cnn-12-6. For more information, check out the Summarization docs.
  5. translation (translation_en_to_de, translation_en_to_fr, translation_en_to_ro)
    Supported models: t5-small, t5-base, t5-v1_1-small, t5-v1_1-base. For more information, check out the Translation docs.
  6. text2text-generation
    Supported models: google/flan-t5-small, google/flan-t5-base, t5-small, t5-base, google/t5-v1_1-small, google/t5-v1_1-base, google/mt5-small, facebook/bart-large-cnn, sshleifer/distilbart-cnn-6-6, sshleifer/distilbart-cnn-12-6. For more information, check out the Text Generation docs.
  7. text-generation
    Supported models: gpt2, distilgpt2, EleutherAI/gpt-neo-125M, Salesforce/codegen-350M-mono, Salesforce/codegen-350M-multi, Salesforce/codegen-350M-nl. For more information, check out the Text Generation docs.
  8. automatic-speech-recognition
    Supported models: openai/whisper-tiny.en, openai/whisper-tiny, openai/whisper-small.en, openai/whisper-small, openai/whisper-base.en, openai/whisper-base. For more information, check out the Automatic Speech Recognition docs.
  9. image-to-text
    Supported models: nlpconnect/vit-gpt2-image-captioning. For more information, check out the Image-to-Text docs.
  10. image-classification
    Supported models: google/vit-base-patch16-224. For more information, check out the Image Classification docs.
  11. zero-shot-image-classification
    Supported models: openai/clip-vit-base-patch16, openai/clip-vit-base-patch32. For more information, check out the Zero-Shot Image Classification docs.
  12. object-detection
    Supported models: facebook/detr-resnet-50, facebook/detr-resnet-101. For more information, check out the Object Detection docs.
  13. embeddings
    Supported models: sentence-transformers/all-MiniLM-L6-v2, sentence-transformers/all-MiniLM-L12-v2, sentence-transformers/all-distilroberta-v1, sentence-transformers/paraphrase-albert-base-v2, sentence-transformers/paraphrase-albert-small-v2. For more information, check out the Embeddings docs.
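The embeddings task above returns a fixed-length vector per input, and a common way to compare two such vectors is cosine similarity. A minimal, self-contained sketch in plain JavaScript (independent of the library itself):

```javascript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical direction).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```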

The following model types are supported:

  1. BERT
    Tasks: Masked language modelling (AutoModelForMaskedLM), question answering (AutoModelForQuestionAnswering), and sequence classification (AutoModelForSequenceClassification). For more information, check out the BERT docs.
  2. ALBERT
    Tasks: Masked language modelling (AutoModelForMaskedLM). For more information, check out the ALBERT docs.
  3. DistilBERT
    Tasks: Masked language modelling (AutoModelForMaskedLM), question answering (AutoModelForQuestionAnswering), and sequence classification (AutoModelForSequenceClassification). For more information, check out the DistilBERT docs.
  4. T5
    Tasks: Sequence-to-sequence for translation/summarization (AutoModelForSeq2SeqLM). For more information, check out the T5 docs.
  5. T5v1.1
    Tasks: Sequence-to-sequence (AutoModelForSeq2SeqLM). For more information, check out the T5v1.1 docs.
  6. FLAN-T5
    Tasks: Sequence-to-sequence for over 1000 tasks (AutoModelForSeq2SeqLM). For more information, check out the FLAN-T5 docs.
  7. mT5
    Tasks: Sequence-to-sequence (AutoModelForSeq2SeqLM). For more information, check out the mT5 docs.
  8. GPT2/DistilGPT2
    Tasks: Text generation (AutoModelForCausalLM). For more information, check out the GPT2 docs or DistilGPT2 docs.
  9. GPT Neo
    Tasks: Text generation (AutoModelForCausalLM). For more information, check out the GPT Neo docs.
  10. BART
    Tasks: Sequence-to-sequence for summarization (AutoModelForSeq2SeqLM). For more information, check out the BART docs.
  11. CodeGen
    Tasks: Text generation (AutoModelForCausalLM). For more information, check out the CodeGen docs.
  12. Whisper
    Tasks: Automatic speech recognition (AutoModelForSeq2SeqLM). For more information, check out the Whisper docs.
  13. CLIP
    Tasks: Zero-shot Image classification (AutoModel). For more information, check out the CLIP docs.
  14. Vision Transformer (ViT)
    Tasks: Image classification (AutoModelForImageClassification). For more information, check out the Vision Transformer docs.
  15. VisionEncoderDecoderModel
    Tasks: Image to text (AutoModelForVision2Seq). For more information, check out the Vision Encoder Decoder Models docs.
  16. DETR
    Tasks: Object detection (AutoModelForObjectDetection). For more information, check out the DETR docs.
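As a quick reference, the task-to-`Auto*`-class pairings from the list above can be summarized in one lookup table (plain JavaScript, drawn directly from the entries above):

```javascript
// Task families and the Auto* class used to load models for them,
// as listed in the supported model types above.
const autoClassForTask = {
  'masked language modelling': 'AutoModelForMaskedLM',
  'question answering': 'AutoModelForQuestionAnswering',
  'sequence classification': 'AutoModelForSequenceClassification',
  'sequence-to-sequence': 'AutoModelForSeq2SeqLM',
  'text generation': 'AutoModelForCausalLM',
  'image classification': 'AutoModelForImageClassification',
  'image to text': 'AutoModelForVision2Seq',
  'object detection': 'AutoModelForObjectDetection',
};

console.log(autoClassForTask['text generation']); // AutoModelForCausalLM
```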

Don't see your model type or task supported? Raise an issue on GitHub, and if there's enough demand, we will add it!

We use ONNX Runtime to run the models in the browser, so you must first convert your PyTorch model to ONNX (which can be done using our conversion script).

Examples

Coming soon... In the meantime, check out the source code for the demo here.