
Whisper

[ Blog ]
[ Paper ]
[ Model card ]
[ Colab example ]

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.

Approach


A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
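
As a rough sketch of that format, the package's tokenizer module (tokenizer.py, referenced later in this document) exposes those special tokens; the snippet below simply prints the start-of-transcript sequence for a Japanese transcription task. This is illustrative only, and the printed token ids depend on the tokenizer.

from whisper.tokenizer import get_tokenizer

# build the multilingual tokenizer for a Japanese transcription task
tokenizer = get_tokenizer(multilingual=True, language="ja", task="transcribe")

# sot_sequence holds the ids of <|startoftranscript|>, the language token,
# and the task token that prefix the decoder input
print(tokenizer.sot_sequence)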

Setup

We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.11 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably OpenAI’s tiktoken for their fast tokenizer implementation. You can download and install (or update to) the latest release of Whisper with the following command:

pip install -U openai-whisper

Alternatively, the following command will pull and install the latest commit from this repository, along with its Python dependencies:

pip install git+https://github.com/openai/whisper.git

To update the package to the latest version of this repository, please run:

pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git

It also requires the command-line tool ffmpeg to be installed on your system, which is available from most package managers:

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg

You may need rust installed as well, in case tiktoken does not provide a pre-built wheel for your platform. If you see installation errors during the pip install command above, please follow the Getting started page to install the Rust development environment. Additionally, you may need to configure the PATH environment variable, e.g. export PATH="$HOME/.cargo/bin:$PATH". If the installation fails with No module named 'setuptools_rust', you need to install setuptools_rust, e.g. by running:

pip install setuptools-rust

Available models and languages

There are six model sizes, four with English-only versions, offering speed and accuracy tradeoffs.
Below are the names of the available models and their approximate memory requirements and inference speed relative to the large model.
The relative speeds below are measured by transcribing English speech on an A100, and the real-world speed may vary significantly depending on many factors including the language, the speaking speed, and the available hardware.

Size    Parameters  English-only model  Multilingual model  Required VRAM  Relative speed
tiny    39 M        tiny.en             tiny                ~1 GB          ~10x
base    74 M        base.en             base                ~1 GB          ~7x
small   244 M       small.en            small               ~2 GB          ~4x
medium  769 M       medium.en           medium              ~5 GB          ~2x
large   1550 M      N/A                 large               ~10 GB         1x
turbo   809 M       N/A                 turbo               ~6 GB          ~8x

The .en models for English-only applications tend to perform better, especially for the tiny.en and base.en models. We observed that the difference becomes less significant for the small.en and medium.en models.
Additionally, the turbo model is an optimized version of large-v3 that offers faster transcription speed with a minimal degradation in accuracy.
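
For example, to use one of the English-only models from the table above, pass its name with the --model option (the audio filename here is just a placeholder):

whisper lecture.wav --model medium.en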

Whisper’s performance varies widely depending on the language. The figure below shows a performance breakdown of large-v3 and large-v2 models by language, using WERs (word error rates) or CER (character error rates, shown in italic) evaluated on the Common Voice 15 and Fleurs datasets. Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.

[Figure: WER/CER breakdown by language for the large-v3 and large-v2 models on Common Voice 15 and Fleurs]

Command-line usage

The following command will transcribe speech in audio files, using the turbo model:

whisper audio.flac audio.mp3 audio.wav --model turbo

The default setting (which selects the small model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the --language option:

whisper japanese.wav --language Japanese

Adding --task translate will translate the speech into English:

whisper japanese.wav --language Japanese --task translate 

Run the following to view all available options:

whisper --help 
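
For instance, several of those options can be combined in one command; the sketch below writes the transcript as an SRT subtitle file into a transcripts/ directory (filenames are illustrative):

whisper audio.wav --model turbo --output_format srt --output_dir transcripts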

See tokenizer.py for the list of all available languages.

Python usage

Transcription can also be performed within Python:

import whisper

model = whisper.load_model("turbo")
result = model.transcribe("audio.mp3")
print(result["text"])

Internally, the transcribe() method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.
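
In addition to the full text, the result dictionary also contains per-segment entries with start and end times in seconds; the short sketch below (not part of the original example) prints them:

import whisper

model = whisper.load_model("turbo")
result = model.transcribe("audio.mp3")

# each segment covers a stretch of the audio and carries its own timestamps
for segment in result["segments"]:
    print(f"[{segment['start']:.2f}s -> {segment['end']:.2f}s] {segment['text'].strip()}")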

Below is an example usage of whisper.detect_language() and whisper.decode(), which provide lower-level access to the model.

import whisper

model = whisper.load_model("turbo")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

# detect the spoken language
_, prob = model.detect_language(mel)
print(f"Detected language: {max(prob, key=prob.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)
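
whisper.DecodingOptions also accepts decoding parameters; as a brief, self-contained sketch (parameter names as defined in the package, filenames illustrative), the language can be pinned explicitly and half precision disabled for CPU-only machines:

import whisper

model = whisper.load_model("turbo")
audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))
mel = whisper.log_mel_spectrogram(audio, n_mels=model.dims.n_mels).to(model.device)

# skip language detection by fixing the language, and run in full precision
options = whisper.DecodingOptions(language="ja", fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)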

More examples

Please use the 🙌 Show and tell category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.

License

Whisper’s code and model weights are released under the MIT License. See LICENSE for further details.