Compare commits

...

10 Commits

Author SHA1 Message Date
guillaume-be
d037b0f190
Merge branch 'main' into main 2024-08-18 10:11:15 +01:00
oiwn
eabebc61a8
add serde for Keyword (#465)
Co-authored-by: guillaume-be <guillaume.becquin@gmail.com>
2024-08-18 10:10:58 +01:00
guillaume-be
3df3816219
Fix clippy warnings (#466) 2024-08-18 09:54:32 +01:00
Ibrahim Ahmad (feyroozecode)
8802997c5f
Update Docs (#448)
the output is in French, not Spanish

Co-authored-by: guillaume-be <guillaume.becquin@gmail.com>
2024-08-18 09:34:54 +01:00
guillaume-be
e38ddaabb7
Merge branch 'main' into main 2024-08-18 08:34:17 +01:00
Abdulrhman Alkhodiry
33b2944298
Update dependencies and fix convert_model (#458)
* feat: Update dependencies in Cargo.toml

Update the dependencies in Cargo.toml to their latest versions:
- rust_tokenizers: 8.1.1
- tch: 0.16.0 (with features = ["download-libtorch"])
- serde_json: 1
- serde: 1 (with features = ["derive"])
- ordered-float: 4.2.0
- uuid: 1 (with features = ["v4"])
- thiserror: 1
- half: 2
- regex: 1.6
- cached-path: 0.6 (with default-features = false and optional = true)
- dirs: 5 (optional = true)
- lazy_static: 1 (optional = true)
- ort: 1.16.3 (optional = true, default-features = false, features = ["half"])
- ndarray: 0.15 (optional = true)
- tokenizers: 0.19.1 (optional = true, default-features = false, features = ["onig"])

* chore: Update .gitignore and requirements.txt, and improve convert_model.py

Update .gitignore to exclude the /models/ and /.venv/ directories, and the convert_model.log file.

Remove the requirements.txt file.

In convert_model.py:
- Add a new function, `zipfile_factory`, to handle zip file creation.
- Update the logger configuration to log debug messages to a file named `convert_model.log`.

* delete duplicate requirements file

* update CI req file path

* missing requests dependency

---------

Co-authored-by: Abdulrhman Alkhodiry <aalkhodiry@jahez.net>
Co-authored-by: Guillaume Becquin <guillaume.becquin@gmail.com>
2024-06-30 08:41:10 +01:00
dependabot[bot]
f99bf51f53
--- (#457)
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-22 16:08:07 +01:00
Jeong, Heon
29f9a7a0ff
Update tch to 0.15 (#443)
* Update tch to 0.15

It allows running on systems with libtorch v2.2.0, upgrading from v2.1.0

* remove torch-sys dependency from benches

* Updated readmes

---------

Co-authored-by: Guillaume Becquin <guillaume.becquin@gmail.com>
2024-02-11 17:29:50 +00:00
Jeong, Heon
b68f7dcac8
Specify required features for examples (#442)
This prevents `cargo test` from breaking: the examples are not compilable without
those features enabled.
2024-02-10 08:36:15 +00:00
guillaume-be
c3a3f39468
0.22.0 Release (#440)
* Fix Clippy warnings

* bump version, updated dependencies and changelog
2024-01-20 09:42:49 +00:00
54 changed files with 448 additions and 222 deletions

View File

@@ -174,7 +174,7 @@ jobs:
with:
python-version: '3.10'
- run: |
pip install -r requirements.txt --progress-bar off
pip install -r ./utils/requirements.txt --progress-bar off
python ./utils/download-dependencies_distilbert.py
fmt:

4
.gitignore vendored
View File

@@ -17,4 +17,6 @@ Cargo.lock
/target
#**/*.rs.bk
/resources/
/models/
/.venv/
convert_model.log

View File

@@ -2,13 +2,18 @@
All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [Unreleased]
## Changed
- (BREAKING) Upgraded to `torch` 2.2 (via `tch` 0.15.0).
## [0.22.0] - 2024-01-20
## Added
- Addition of `new_with_tokenizer` constructor for `SentenceEmbeddingsModel` allowing passing custom tokenizers for sentence embeddings pipelines.
- Support for [Tokenizers](https://github.com/huggingface/tokenizers) in pipelines, allowing loading `tokenizer.json` and `special_tokens_map.json` tokenizer files.
- (BREAKING) Most model configurations can now take an optional `kind` parameter to specify the model weight precision. If not provided, it will default to full precision on CPU, or the serialized weights precision otherwise.
## Fixed
- (BREAKING) Fixed the keyword extraction pipeline for n-gram sizes > 2. Add new configuration option `tokenizer_forbidden_ngram_chars` to specify characters that should be excluded from n-grams (allows filtering m-grams spanning multiple sentences).
- (BREAKING) Fixed the keyword extraction pipeline for n-gram sizes > 2. Add new configuration option `tokenizer_forbidden_ngram_chars` to specify characters that should be excluded from n-grams (allows filtering n-grams spanning multiple sentences).
- Improved MPS device compatibility by setting the `sparse_grad` flag to false for `gather` operations
- Updated ONNX runtime backend version to 1.15.x
- Issue with incorrect results for QA models with a tokenizer not using segment ids
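The n-gram filtering referenced in the fix above can be illustrated with a self-contained sketch; the helper name and the forbidden character set below are hypothetical illustrations, not the pipeline's actual configuration:

```rust
/// Build word-level n-grams and drop any that contain forbidden characters
/// (e.g. sentence-ending punctuation), so that no n-gram spans multiple
/// sentences. Hypothetical sketch of the behaviour described in the changelog.
fn filtered_ngrams(words: &[&str], n: usize, forbidden: &[char]) -> Vec<String> {
    words
        .windows(n)
        .map(|w| w.join(" "))
        .filter(|ngram| !ngram.contains(|c: char| forbidden.contains(&c)))
        .collect()
}

fn main() {
    let words = ["deep", "learning.", "rust", "native", "models"];
    // Trigrams containing '.' would span a sentence boundary and are dropped.
    let trigrams = filtered_ngrams(&words, 3, &['.', '?', '!']);
    assert_eq!(trigrams, vec!["rust native models".to_string()]);
    println!("{:?}", trigrams);
}
```

In the actual pipeline, the forbidden characters would come from the new `tokenizer_forbidden_ngram_chars` configuration option.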
@@ -447,4 +452,4 @@ All notable changes to this project will be documented in this file. The format
- Tensor conversion tools from Pytorch to Libtorch format
- DistilBERT model architecture
- Ready-to-use `SentimentClassifier` using a DistilBERT model fine-tuned on SST2

View File

@@ -1,6 +1,6 @@
[package]
name = "rust-bert"
version = "0.21.0"
version = "0.22.0"
authors = ["Guillaume Becquin <guillaume.becquin@gmail.com>"]
edition = "2018"
description = "Ready-to-use NLP pipelines and language models"
@@ -76,29 +76,63 @@ features = ["doc-only"]
[dependencies]
rust_tokenizers = "8.1.1"
tch = "0.14.0"
tch = { version = "0.16.0", features = ["download-libtorch"] }
serde_json = "1"
serde = { version = "1", features = ["derive"] }
ordered-float = "3"
ordered-float = "4.2.0"
uuid = { version = "1", features = ["v4"] }
thiserror = "1"
half = "2"
regex = "1.6"
cached-path = { version = "0.6", default-features = false, optional = true }
dirs = { version = "4", optional = true }
dirs = { version = "5", optional = true }
lazy_static = { version = "1", optional = true }
ort = {version="~1.15.2", optional = true, default-features = false, features = ["half"]}
ndarray = {version="0.15", optional = true}
tokenizers = {version="0.13.3", optional=true, default-features = false, features = ["onig"]}
ort = { version = "1.16.3", optional = true, default-features = false, features = [
"half",
] }
ndarray = { version = "0.15", optional = true }
tokenizers = { version = "0.19.1", optional = true, default-features = false, features = [
"onig",
] }
[dev-dependencies]
anyhow = "1"
csv = "1"
criterion = "0.4"
tokio = { version = "1.24", features = ["sync", "rt-multi-thread", "macros"] }
torch-sys = "0.14.0"
criterion = "0.5"
tokio = { version = "1.35", features = ["sync", "rt-multi-thread", "macros"] }
tempfile = "3"
itertools = "0.10"
tracing-subscriber = { version = "0.3", default-features = false, features = [ "env-filter", "fmt" ] }
ort = {version="~1.15.2", features = ["load-dynamic"]}
itertools = "0.13.0"
tracing-subscriber = { version = "0.3", default-features = false, features = [
"env-filter",
"fmt",
] }
ort = { version = "1.16.3", features = ["load-dynamic"] }
[[example]]
name = "onnx-masked-lm"
required-features = ["onnx"]
[[example]]
name = "onnx-question-answering"
required-features = ["onnx"]
[[example]]
name = "onnx-sequence-classification"
required-features = ["onnx"]
[[example]]
name = "onnx-text-generation"
required-features = ["onnx"]
[[example]]
name = "onnx-token-classification"
required-features = ["onnx"]
[[example]]
name = "onnx-translation"
required-features = ["onnx"]
[[example]]
name = "generation_gpt2_hf_tokenizers"
required-features = ["hf-tokenizers"]

434
README.md
View File

@@ -5,10 +5,21 @@
[![Documentation](https://docs.rs/rust-bert/badge.svg)](https://docs.rs/rust-bert)
![License](https://img.shields.io/crates/l/rust_bert.svg)
Rust-native state-of-the-art Natural Language Processing models and pipelines. Port of Hugging Face's [Transformers library](https://github.com/huggingface/transformers), using [tch-rs](https://github.com/LaurentMazare/tch-rs) or [onnxruntime bindings](https://github.com/pykeio/ort) and pre-processing from [rust-tokenizers](https://github.com/guillaume-be/rust-tokenizers). Supports multi-threaded tokenization and GPU inference. This repository exposes the model base architecture, task-specific heads (see below) and [ready-to-use pipelines](#ready-to-use-pipelines). [Benchmarks](#benchmarks) are available at the end of this document.

Get started with tasks including question answering, named entity recognition, translation, summarization, text generation, conversational agents and more in just a few lines of code:
```rust
let qa_model = QuestionAnsweringModel::new(Default::default())?;
@@ -19,84 +30,104 @@ Get started with tasks including question answering, named entity recognition, t
```
Output:
```
[Answer { score: 0.9976, start: 13, end: 21, answer: "Amsterdam" }]
```
The tasks currently supported include:
- Translation
- Summarization
- Multi-turn dialogue
- Zero-shot classification
- Sentiment Analysis
- Named Entity Recognition
- Part of Speech tagging
- Question-Answering
- Language Generation
- Masked Language Model
- Sentence Embeddings
- Keywords extraction
<details>
<summary> <b>Expand to display the supported models/tasks matrix </b> </summary>
| | **Sequence classification** | **Token classification** | **Question answering** | **Text Generation** | **Summarization** | **Translation** | **Masked LM** | **Sentence Embeddings** |
| :----------: | :-------------------------: | :----------------------: | :--------------------: | :-----------------: | :---------------: | :-------------: | :-----------: | :---------------------: |
| DistilBERT | ✅ | ✅ | ✅ | | | | ✅ | ✅ |
| MobileBERT | ✅ | ✅ | ✅ | | | | ✅ | |
| DeBERTa | ✅ | ✅ | ✅ | | | | ✅ | |
| DeBERTa (v2) | ✅ | ✅ | ✅ | | | | ✅ | |
| FNet | ✅ | ✅ | ✅ | | | | ✅ | |
| BERT | ✅ | ✅ | ✅ | | | | ✅ | ✅ |
| RoBERTa | ✅ | ✅ | ✅ | | | | ✅ | ✅ |
| GPT | | | | ✅ | | | | |
| GPT2 | | | | ✅ | | | | |
| GPT-Neo | | | | ✅ | | | | |
| GPT-J | | | | ✅ | | | | |
| BART | ✅ | | | ✅ | ✅ | | | |
| Marian | | | | | | ✅ | | |
| MBart | ✅ | | | ✅ | | | | |
| M2M100 | | | | ✅ | | | | |
| NLLB | | | | ✅ | | | | |
| Electra | | ✅ | | | | | ✅ | |
| ALBERT | ✅ | ✅ | ✅ | | | | ✅ | ✅ |
| T5 | | | | ✅ | ✅ | ✅ | | ✅ |
| LongT5 | | | | ✅ | ✅ | | | |
| XLNet | ✅ | ✅ | ✅ | ✅ | | | ✅ | |
| Reformer | ✅ | | ✅ | ✅ | | | ✅ | |
| ProphetNet | | | | ✅ | ✅ | | | |
| Longformer | ✅ | ✅ | ✅ | | | | ✅ | |
| Pegasus | | | | | ✅ | | | |
</details>
## Getting started
This library relies on the [tch](https://github.com/LaurentMazare/tch-rs) crate for bindings to the C++ Libtorch API. The libtorch library is required and can be downloaded either automatically or manually. The following provides a reference on how to set up your environment to use these bindings; please refer to the [tch](https://github.com/LaurentMazare/tch-rs) repository for detailed information or support.
Furthermore, this library relies on a cache folder for downloading pre-trained
models. This cache location defaults to `~/.cache/.rustbert`, but can be changed
by setting the `RUSTBERT_CACHE` environment variable. Note that the language
models used by this library are in the order of the 100s of MBs to GBs.
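The cache resolution described above can be sketched in plain Rust (a minimal illustration assuming a Unix-like `HOME`; rust-bert's actual implementation may differ):

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the model cache directory: honour an override (e.g. the value of
/// the RUSTBERT_CACHE environment variable) if present, otherwise fall back
/// to ~/.cache/.rustbert. Hypothetical sketch, not rust-bert's actual code.
fn resolve_cache_dir(override_dir: Option<&str>) -> PathBuf {
    match override_dir {
        Some(dir) => PathBuf::from(dir),
        None => {
            // Assumes HOME is set, as on most Unix-like systems.
            let home = env::var("HOME").unwrap_or_else(|_| ".".to_string());
            PathBuf::from(home).join(".cache").join(".rustbert")
        }
    }
}

fn main() {
    // The override typically comes from env::var("RUSTBERT_CACHE").ok().
    let cache = resolve_cache_dir(env::var("RUSTBERT_CACHE").ok().as_deref());
    println!("cache directory: {}", cache.display());
    assert!(resolve_cache_dir(Some("/tmp/models")).ends_with("models"));
}
```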
### Manual installation (recommended)
1. Download `libtorch` from https://pytorch.org/get-started/locally/. This package requires `v2.1`: if this version is no longer available on the "get started" page,
the file should be accessible by modifying the target link, for example `https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.1.1%2Bcu118.zip` for a Linux version with CUDA11. **NOTE:** When using `rust-bert` as dependency from [crates.io](https://crates.io), please check the required `LIBTORCH` on the published package [readme](https://crates.io/crates/rust-bert) as it may differ from the version documented here (applying to the current repository version).
1. Download `libtorch` from https://pytorch.org/get-started/locally/. This
package requires `v2.2`: if this version is no longer available on the "get
started" page, the file should be accessible by modifying the target link,
for example
`https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.2.0%2Bcu121.zip`
for a Linux version with CUDA12. **NOTE:** When using `rust-bert` as
dependency from [crates.io](https://crates.io), please check the required
`LIBTORCH` on the published package
[readme](https://crates.io/crates/rust-bert) as it may differ from the
version documented here (applying to the current repository version).
2. Extract the library to a location of your choice
3. Set the following environment variables
##### Linux:
```bash
export LIBTORCH=/path/to/libtorch
export LD_LIBRARY_PATH=${LIBTORCH}/lib:$LD_LIBRARY_PATH
```
##### Windows
```powershell
$Env:LIBTORCH = "X:\path\to\libtorch"
$Env:Path += ";X:\path\to\libtorch\lib"
```
#### macOS + Homebrew
```bash
brew install pytorch jq
export LIBTORCH=$(brew --cellar pytorch)/$(brew info --json pytorch | jq -r '.[0].installed[0].version')
@@ -105,13 +136,19 @@ export LD_LIBRARY_PATH=${LIBTORCH}/lib:$LD_LIBRARY_PATH
### Automatic installation
Alternatively, you can let the `build` script automatically download the
`libtorch` library for you. The `download-libtorch` feature flag needs to be
enabled. The CPU version of libtorch will be downloaded by default. To download
a CUDA version, please set the environment variable `TORCH_CUDA_VERSION` to
`cu118`. Note that the libtorch library is large (order of several GBs for the
CUDA-enabled version) and the first build may therefore take several minutes to
complete.
### Verifying installation
Verify your installation (and linking with libtorch) by adding the `rust-bert`
dependency to your `Cargo.toml` or by cloning the rust-bert source and running
an example:
```bash
git clone git@github.com:guillaume-be/rust-bert.git
@@ -121,41 +158,73 @@ cargo run --example sentence_embeddings
## ONNX Support (Optional)
ONNX support can be enabled via the optional `onnx` feature. This crate then leverages the [ort](https://github.com/pykeio/ort) crate with bindings to the onnxruntime C++ library. We refer the user to this project's page for further installation instructions/support.

1. Enable the optional `onnx` feature. The `rust-bert` crate does not include any optional dependencies for `ort`; the end user should select the set of features that would be adequate for pulling the required `onnxruntime` C++ library.
2. The current recommended installation is to use dynamic linking by pointing to an existing library location. Use the `load-dynamic` cargo feature for `ort`.
3. Set the `ORT_DYLIB_PATH` to point to the location of the downloaded onnxruntime library (`onnxruntime.dll`/`libonnxruntime.so`/`libonnxruntime.dylib` depending on the operating system). These can be downloaded from the [release page](https://github.com/microsoft/onnxruntime/releases) of the onnxruntime project.

Most architectures (including encoders, decoders and encoder-decoders) are supported. The library aims at keeping compatibility with models exported using the [optimum](https://github.com/huggingface/optimum) library. A detailed guide on how to export a Transformer model to ONNX using optimum is available at https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model

The resources used to create ONNX models are similar to those based on Pytorch, replacing the pytorch model by the ONNX model. Since ONNX models are less flexible than their Pytorch counterparts in the handling of optional arguments, exporting a decoder or encoder-decoder model to ONNX will usually result in multiple files. These files are expected (but not all are necessary) for use in this library as per the table below:

| Architecture | Encoder file | Decoder without past file | Decoder with past file |
| --------------------------- | ------------ | ------------------------- | ---------------------- |
| Encoder (e.g. BERT) | required | not used | not used |
| Decoder (e.g. GPT2) | not used | required | optional |
| Encoder-decoder (e.g. BART) | required | required | optional |

Note that the computational efficiency will drop when the `decoder with past` file is optional but not provided, since the model will not use cached past keys and values for the attention mechanism, leading to a high number of redundant computations. The Optimum library offers export options to ensure such a `decoder with past` model file is created. The base encoder and decoder model architectures are available (and exposed for convenience) in the `encoder` and `decoder` modules, respectively.

Generation models (pure decoder or encoder/decoder architectures) are available in the `models` module. Most pipelines are available for ONNX model checkpoints, including sequence classification, zero-shot classification, token classification (including named entity recognition and part-of-speech tagging), question answering, text generation, summarization and translation. These models use the same configuration and tokenizer files as their Pytorch counterparts when used in a pipeline. Examples leveraging ONNX models are given in the `./examples` directory.
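The file-requirement rules summarized in the table can be expressed as a small sketch (illustrative types only, not rust-bert's internal representation):

```rust
#[derive(Clone, Copy)]
enum Architecture {
    Encoder,        // e.g. BERT
    Decoder,        // e.g. GPT2
    EncoderDecoder, // e.g. BART
}

#[derive(Debug, PartialEq)]
enum Requirement {
    Required,
    Optional,
    NotUsed,
}

/// Returns the (encoder file, decoder-without-past file, decoder-with-past
/// file) requirements, mirroring the table above. Illustrative sketch only.
fn onnx_file_requirements(arch: Architecture) -> (Requirement, Requirement, Requirement) {
    use Requirement::*;
    match arch {
        Architecture::Encoder => (Required, NotUsed, NotUsed),
        Architecture::Decoder => (NotUsed, Required, Optional),
        Architecture::EncoderDecoder => (Required, Required, Optional),
    }
}

fn main() {
    let (enc, dec, dec_past) = onnx_file_requirements(Architecture::EncoderDecoder);
    assert_eq!(enc, Requirement::Required);
    assert_eq!(dec, Requirement::Required);
    assert_eq!(dec_past, Requirement::Optional);
}
```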
## Ready-to-use pipelines
Based on Hugging Face's pipelines, ready to use end-to-end NLP pipelines are
available as part of this crate. The following capabilities are currently
available:
**Disclaimer** The contributors of this repository are not responsible for any
generation from the 3rd party utilization of the pretrained systems proposed
herein.
<details>
<summary> <b>1. Question Answering</b> </summary>
Extractive question answering from a given question and context. DistilBERT
model fine-tuned on SQuAD (Stanford Question Answering Dataset)
```rust
let qa_model = QuestionAnsweringModel::new(Default::default())?;
@@ -167,20 +236,27 @@ Extractive question answering from a given question and context. DistilBERT mode
```
Output:
```
[Answer { score: 0.9976, start: 13, end: 21, answer: "Amsterdam" }]
```
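The `start` and `end` fields are byte offsets into the context, so the answer text can be recovered by slicing (a sketch with a hypothetical context string; the offsets below are computed for that string, not taken from the example output):

```rust
/// Recover an answer span from the context using the byte offsets reported
/// in an Answer { start, end, .. } result. Hypothetical context and offsets.
fn answer_span(context: &str, start: usize, end: usize) -> &str {
    &context[start..end]
}

fn main() {
    let context = "Amy lives in Amsterdam";
    // "Amsterdam" occupies byte offsets 13..22 in this context.
    assert_eq!(answer_span(context, 13, 22), "Amsterdam");
    println!("{}", answer_span(context, 13, 22));
}
```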
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>2. Translation </b> </summary>
Translation pipeline supporting a broad range of source and target languages. Leverages two main architectures for translation tasks:

- Marian-based models, for specific source/target combinations
- M2M100 models allowing for direct translation between 100 languages (at a higher computational cost and lower performance for some selected languages)

Marian-based pretrained models for the following language pairs are readily available in the library - but the user can import any Pytorch-based model for predictions
- English <-> French
- English <-> Spanish
- English <-> Portuguese
@@ -196,30 +272,36 @@ model for predictions
- English <-> Hindi
- French <-> German
For languages not supported by the proposed pretrained Marian models, the user can leverage a M2M100 model supporting direct translation between 100 languages (without intermediate English translation). The full list of supported languages is available in the [crate documentation](https://docs.rs/rust-bert/latest/rust_bert/pipelines/translation/enum.Language.html)
```rust
use rust_bert::pipelines::translation::{Language, TranslationModelBuilder};
fn main() -> anyhow::Result<()> {
let model = TranslationModelBuilder::new()
.with_source_languages(vec![Language::English])
.with_target_languages(vec![Language::Spanish, Language::French, Language::Italian])
.create_model()?;
let input_text = "This is a sentence to be translated";
let output = model.translate(&[input_text], None, Language::French)?;
for sentence in output {
println!("{}", sentence);
}
Ok(())
}
```
```rust
use rust_bert::pipelines::translation::{Language, TranslationModelBuilder};
fn main() -> anyhow::Result<()> {
let model = TranslationModelBuilder::new()
.with_source_languages(vec![Language::English])
.with_target_languages(vec![Language::Spanish, Language::French, Language::Italian])
.create_model()?;
let input_text = "This is a sentence to be translated";
let output = model.translate(&[input_text], None, Language::Spanish)?;
for sentence in output {
println!("{}", sentence);
}
Ok(())
}
```
Output:
```
Il s'agit d'une phrase à traduire
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>3. Summarization </b> </summary>
@@ -252,26 +334,35 @@ about exoplanets like K2-18b."];
let output = summarization_model.summarize(&input);
```
(example from:
[WikiNews](https://en.wikinews.org/wiki/Astronomers_find_water_vapour_in_atmosphere_of_exoplanet_K2-18b))
Output:
```
"Scientists have found water vapour on K2-18b, a planet 110 light-years from Earth.
This is the first such discovery in a planet in its star's habitable zone.
The planet is not too hot and not too cold for liquid water to exist."
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>4. Dialogue Model </b> </summary>
Conversation model based on Microsoft's
[DialoGPT](https://github.com/microsoft/DialoGPT). This pipeline allows the
generation of single or multi-turn conversations between a human and a model.
DialoGPT's page states that

> The human evaluation results indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test. ([DialoGPT repository](https://github.com/microsoft/DialoGPT))

The model uses a `ConversationManager` to keep track of active conversations and generate responses to them.
```rust
use rust_bert::pipelines::conversation::{ConversationModel, ConversationManager};
@@ -282,19 +373,24 @@ let mut conversation_manager = ConversationManager::new();
let conversation_id = conversation_manager.create("Going to the movies tonight - any suggestions?");
let output = conversation_model.generate_responses(&mut conversation_manager);
```
Example output:
```
"The Big Lebowski."
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>5. Natural Language Generation </b> </summary>
Generate language based on a prompt. GPT2 and GPT available as base models.
Include techniques such as beam search, top-k and nucleus sampling, temperature setting and repetition penalty. Supports batch generation of sentences from several prompts. Sequences will be left-padded with the model's padding token if present, the unknown token otherwise. This may impact the results; it is recommended to submit prompts of similar length for best results.
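The left-padding behaviour described above can be sketched as follows (the pad token value and token ids are illustrative):

```rust
/// Left-pad token sequences to the length of the longest prompt, as the
/// generation pipeline does before batching. The pad token is illustrative.
fn left_pad(sequences: &[Vec<i64>], pad_token: i64) -> Vec<Vec<i64>> {
    let max_len = sequences.iter().map(|s| s.len()).max().unwrap_or(0);
    sequences
        .iter()
        .map(|s| {
            // Prepend pad tokens so all sequences share the same length.
            let mut padded = vec![pad_token; max_len - s.len()];
            padded.extend_from_slice(s);
            padded
        })
        .collect()
}

fn main() {
    let batch = vec![vec![5, 6, 7], vec![9]];
    let padded = left_pad(&batch, 0);
    assert_eq!(padded, vec![vec![5, 6, 7], vec![0, 0, 9]]);
}
```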
```rust
let model = GPT2Generator::new(Default::default())?;
@@ -309,7 +405,9 @@ This may impact the results, it is recommended to submit prompts of similar leng
let output = model.generate(Some(&[input_context_1, input_context_2]), generate_options);
```
Example output:
```
[
"The dog's owners, however, did not want to be named. According to the lawsuit, the animal's owner, a 29-year"
@@ -320,12 +418,15 @@ Example output:
"The cat was attacked by two stray dogs and was taken to a hospital. Two other cats were also injured in the attack and are being treated."
]
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>6. Zero-shot classification </b> </summary>
Performs zero-shot classification on input sentences with provided labels using
a model fine-tuned for Natural Language Inference.
```rust
let sequence_classification_model = ZeroShotClassificationModel::new(Default::default())?;
@ -342,18 +443,22 @@ Performs zero-shot classification on input sentences with provided labels using
```
Output:
```
[
[ Label { "politics", score: 0.972 }, Label { "public health", score: 0.032 }, Label {"economics", score: 0.006 }, Label {"sports", score: 0.004 } ],
[ Label { "politics", score: 0.975 }, Label { "public health", score: 0.0818 }, Label {"economics", score: 0.852 }, Label {"sports", score: 0.001 } ],
]
```
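When exactly one label is expected to apply, per-label scores are typically obtained by normalizing the model's logits with a softmax so that they form a probability distribution. The helper below is a hypothetical sketch for illustration, not part of the rust-bert API.

```rust
// Hypothetical helper: numerically stable softmax over raw per-label
// logits, producing scores that sum to one.
fn softmax(logits: &[f64]) -> Vec<f64> {
    // Subtract the maximum logit before exponentiating for stability.
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    let probs = softmax(&[3.5, 0.1, -1.6, -2.0]);
    // Scores sum to 1.0, with the dominant logit taking most of the mass.
    println!("{:?}", probs);
}
```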
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>7. Sentiment analysis </b> </summary>
Predicts the binary sentiment for a sentence. DistilBERT model fine-tuned on
SST-2.
```rust
let sentiment_classifier = SentimentModel::new(Default::default())?;
@ -365,9 +470,11 @@ Predicts the binary sentiment for a sentence. DistilBERT model fine-tuned on SST
let output = sentiment_classifier.predict(&input);
```
(Example courtesy of [IMDb](http://www.imdb.com))
Output:
```
[
Sentiment { polarity: Positive, score: 0.9981985493795946 },
@ -375,13 +482,17 @@ Output:
Sentiment { polarity: Positive, score: 0.9997248985164333 }
]
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>8. Named Entity Recognition </b> </summary>
Extracts entities (Person, Location, Organization, Miscellaneous) from text.
BERT cased large model fine-tuned on CoNLL03, contributed by the
[MDZ Digital Library team at the Bavarian State Library](https://github.com/dbmdz).
Models are currently available for English, German, Spanish and Dutch.
```rust
let ner_model = NERModel::new(Default::default())?;
@ -392,7 +503,9 @@ Models are currently available for English, German, Spanish and Dutch.
let output = ner_model.predict(&input);
```
Output:
```
[
[
@ -405,8 +518,9 @@ Output:
]
]
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>9. Keywords/keyphrases extraction</b> </summary>
@ -427,7 +541,9 @@ fn main() -> anyhow::Result<()> {
let output = keyword_extraction_model.predict(&[input])?;
}
```
Output:
```
"rust" - 0.50910604
"programming" - 0.35731024
@ -435,12 +551,14 @@ Output:
"concurrent" - 0.31229728
"program" - 0.29115444
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>10. Part of Speech tagging </b> </summary>
Extracts Part of Speech tags (Noun, Verb, Adjective...) from text.
```rust
let pos_model = POSModel::new(Default::default())?;
@ -448,7 +566,9 @@ Extracts Part of Speech tags (Noun, Verb, Adjective...) from text.
let output = pos_model.predict(&input);
```
Output:
```
[
Entity { word: "My", score: 0.1560, label: "PRP" }
@ -457,12 +577,15 @@ Output:
Entity { word: "Bob", score: 0.7460, label: "NNP" }
]
```
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>11. Sentence embeddings </b> </summary>
Generate sentence embeddings (vector representation). These can be used for
applications including dense information retrieval.
```rust
let model = SentenceEmbeddingsBuilder::remote(
SentenceEmbeddingsModelType::AllMiniLmL12V2
@ -475,19 +598,23 @@ Generate sentence embeddings (vector representation). These can be used for appl
let output = model.encode(&sentences)?;
```
Output:
```
[
[-0.000202666, 0.08148022, 0.03136178, 0.002920636 ...],
[0.064757116, 0.048519745, -0.01786038, -0.0479775 ...]
]
```
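For dense retrieval, encoded sentences are usually compared with cosine similarity. The helper below is a self-contained sketch for illustration (an assumption, not part of the rust-bert API); the toy vectors stand in for model outputs.

```rust
// Illustrative helper: cosine similarity between two embedding vectors,
// the usual scoring function for dense retrieval over sentence embeddings.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy vectors standing in for sentence embeddings.
    let query = [0.1_f32, 0.8, 0.3];
    let doc = [0.1_f32, 0.7, 0.4];
    println!("similarity: {:.3}", cosine_similarity(&query, &doc));
}
```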
</details>
&nbsp;
&nbsp;
<details>
<summary> <b>12. Masked Language Model </b> </summary>
Predict masked words in input sentences.
```rust
let model = MaskedLanguageModel::new(Default::default())?;
@ -498,7 +625,9 @@ Predict masked words in input sentences.
let output = model.predict(&sentences);
```
Output:
```
[
[MaskedToken { text: "college", id: 2267, score: 8.091}],
@ -508,29 +637,61 @@ Output:
]
]
```
</details>
## Benchmarks
For simple pipelines (sequence classification, token classification, question
answering), the performance between Python and Rust is expected to be comparable.
This is because the most expensive part of these pipelines is the language model
itself, sharing a common implementation in the Torch backend. The
[End-to-end NLP Pipelines in Rust](https://www.aclweb.org/anthology/2020.nlposs-1.4/)
provides a benchmarks section covering all pipelines.
For text generation tasks (summarization, translation, conversation, free text
generation), significant benefits can be expected (up to 2 to 4 times faster
processing depending on the input and application). The article
[Accelerating text generation with Rust](https://guillaume-be.github.io/2020-11-21/generation_benchmarks)
focuses on these text generation applications and provides more details on the
performance comparison to Python.
## Loading pretrained and custom model weights
The base model and task-specific heads are also available for users looking to
expose their own transformer-based models. Examples on how to prepare the data
using a native Rust tokenizers library are available in `./examples` for BERT,
DistilBERT, RoBERTa, GPT, GPT2 and BART. Note that when importing models from
PyTorch, the parameter naming convention needs to be aligned with the Rust
schema. Loading of the pre-trained weights will fail if any of the model
parameter weights cannot be found in the weight files. If this quality check is
to be skipped, the alternative method `load_partial` can be invoked from the
variables store.
Pretrained models are available on Hugging Face's
[model hub](https://huggingface.co/models?filter=rust) and can be loaded using
`RemoteResources` defined in this library.
A conversion utility script is included in `./utils` to convert PyTorch weights
to a set of weights compatible with this library. This script requires Python
and `torch` to be set up, and can be used as follows:
`python ./utils/convert_model.py path/to/pytorch_model.bin`, where
`path/to/pytorch_model.bin` is the location of the original PyTorch weights.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python utils/convert_model.py path/to/pytorch_model.bin
```
## Citation
If you use `rust-bert` for your work, please cite
[End-to-end NLP Pipelines in Rust](https://www.aclweb.org/anthology/2020.nlposs-1.4/):
```bibtex
@inproceedings{becquin-2020-end,
title = "End-to-end {NLP} Pipelines in Rust",
@ -545,6 +706,7 @@ If you use `rust-bert` for your work, please cite [End-to-end NLP Pipelines in R
## Acknowledgements
Thank you to [Hugging Face](https://huggingface.co) for hosting a set of weights
compatible with this Rust library. The list of ready-to-use pretrained models is
available at
[https://huggingface.co/models?filter=rust](https://huggingface.co/models?filter=rust).

View File

@ -53,10 +53,6 @@ fn generation_forward_pass(iters: u64, model: &TextGenerationModel, data: &[&str
}
fn bench_generation(c: &mut Criterion) {
// Set-up summarization model
unsafe {
torch_sys::dummy_cuda_dependency();
}
let model = create_text_generation_model();
// Define input

View File

@ -73,9 +73,7 @@ fn qa_load_model(iters: u64) -> Duration {
fn bench_squad(c: &mut Criterion) {
// Set-up QA model
let model = create_qa_model();
unsafe {
torch_sys::dummy_cuda_dependency();
}
// Define input
let mut squad_path = PathBuf::from(env::var("squad_dataset")
.expect("Please set the \"squad_dataset\" environment variable pointing to the SQuAD dataset folder"));

View File

@ -79,9 +79,7 @@ fn sst2_load_model(iters: u64) -> Duration {
fn bench_sst2(c: &mut Criterion) {
// Set-up classifier
let model = create_sentiment_model();
unsafe {
torch_sys::dummy_cuda_dependency();
}
// Define input
let mut sst2_path = PathBuf::from(env::var("SST2_PATH").expect(
"Please set the \"SST2_PATH\" environment variable pointing to the SST2 dataset folder",

View File

@ -40,9 +40,6 @@ fn summarization_load_model(iters: u64) -> Duration {
fn bench_squad(c: &mut Criterion) {
// Set-up summarization model
unsafe {
torch_sys::dummy_cuda_dependency();
}
let model = create_summarization_model();
// Define input

View File

@ -17,10 +17,6 @@ fn matrix_multiply(iters: u64, input: &Tensor, weights: &Tensor) -> Duration {
}
fn bench_tensor_ops(c: &mut Criterion) {
// Set-up summarization model
unsafe {
torch_sys::dummy_cuda_dependency();
}
let input = Tensor::rand([32, 128, 512], (Kind::Float, Device::cuda_if_available()));
let weights = Tensor::rand([512, 512], (Kind::Float, Device::cuda_if_available()));

View File

@ -14,9 +14,6 @@ fn create_model() -> TokenClassificationModel {
fn bench_token_classification_predict(c: &mut Criterion) {
// Set-up model
unsafe {
torch_sys::dummy_cuda_dependency();
}
let model = create_model();
// Define input

View File

@ -73,9 +73,6 @@ fn translation_load_model(iters: u64) -> Duration {
fn bench_squad(c: &mut Criterion) {
// Set-up translation model
unsafe {
torch_sys::dummy_cuda_dependency();
}
let model = create_translation_model();
// Define input

View File

@ -1,3 +0,0 @@
torch == 1.13.1
requests == 2.31.0
numpy == 1.23.4

View File

@ -90,8 +90,8 @@
//!
//! ### Manual installation (recommended)
//!
//! 1. Download `libtorch` from <https://pytorch.org/get-started/locally/>. This package requires `v2.1`: if this version is no longer available on the "get started" page,
//! the file should be accessible by modifying the target link, for example `https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.1.1%2Bcu118.zip` for a Linux version with CUDA11.
//! 1. Download `libtorch` from <https://pytorch.org/get-started/locally/>. This package requires `v2.2`: if this version is no longer available on the "get started" page,
//! the file should be accessible by modifying the target link, for example `https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.2.0%2Bcu121.zip` for a Linux version with CUDA12.
//! 2. Extract the library to a location of your choice
//! 3. Set the following environment variables
//! ##### Linux:

View File

@ -16,6 +16,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `BertTokenizer` using a `vocab.txt` vocabulary
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -369,7 +369,7 @@ fn _shift_tokens_right(input_ids: &Tensor, pad_token_id: i64) -> Tensor {
/// It is made of the following blocks:
/// - `encoder`: `BartEncoder` (transformer) made of a vector of encoding layers
/// - `decoder`: `BartDecoder` (transformer) made of a vector of decoding layers with self attention and encoder cross-attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
/// - `pad_token_id`: padding token id
pub struct BartModel {
pub(crate) encoder: BartEncoder,
@ -437,7 +437,7 @@ impl BartModel {
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
///
@ -597,7 +597,7 @@ impl BartForConditionalGeneration {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
@ -798,7 +798,7 @@ impl BartForSequenceClassification {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.

View File

@ -340,6 +340,7 @@ impl BartDecoder {
}
}
#[allow(dead_code)]
///Container holding a BART decoder output
pub struct BartDecoderOutput {
/// last decoder layer hidden state

View File

@ -11,6 +11,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `RobertaTokenizer` using a `vocab.txt` vocabulary and `merges.txt` 2-gram merges
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -16,6 +16,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `BertTokenizer` using a `vocab.txt` vocabulary
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -12,6 +12,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `DebertaTokenizer` using a `vocab.json` vocabulary and `merges.txt` merges file
//!
//! Pretrained models for a number of language pairs are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -12,6 +12,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `DebertaV2Tokenizer` using a `spiece.model` SentencePiece model file
//!
//! Pretrained models for a number of language pairs are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -14,6 +14,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `BertTokenizer` using a `vocab.txt` vocabulary
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -19,6 +19,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `BertTokenizer` using a `vocab.txt` vocabulary
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -14,6 +14,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `FNetTokenizer` using a `spiece.model` SentencePiece (BPE) model file
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -11,6 +11,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `Gpt2Tokenizer` using a `vocab.txt` vocabulary and `merges.txt` 2-gram merges
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -174,7 +174,7 @@ impl From<&LongT5Config> for T5Config {
/// It is made of the following blocks:
/// - `encoder`: `T5Stack` (transformer) made of a vector of encoding layers
/// - `decoder`: `T5Stack` (transformer) made of a vector of decoding layers with self attention and encoder cross-attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
/// - `embeddings`: `nn::Embedding` Shared embeddings for the encoder and decoder.
pub struct LongT5Model {
pub(crate) encoder: LongT5Stack,
@ -248,7 +248,7 @@ impl LongT5Model {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). This or `input_embeds` must be provided.
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). This or `decoder_input_embeds` must be provided.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `input_embeds` - Optional input tensor of shape (*batch size*, *source_sequence_length*, *embeddings dimension*). This or `input_ids` must be provided.
@ -436,7 +436,7 @@ impl LongT5ForConditionalGeneration {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). This or `input_embeds` must be provided.
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). This or `decoder_input_embeds` must be provided.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `input_embeds` - Optional input tensor of shape (*batch size*, *source_sequence_length*, *embeddings dimension*). This or `input_ids` must be provided.

View File

@ -126,7 +126,7 @@ fn _shift_tokens_right(
/// It is made of the following blocks:
/// - `encoder`: `M2M100Encoder` (transformer) made of a vector of encoding layers
/// - `decoder`: `M2M100Decoder` (transformer) made of a vector of decoding layers with self attention and encoder cross-attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
/// - `pad_token_id`: padding token id
pub struct M2M100Model {
pub(crate) encoder: M2M100Encoder,
@ -197,7 +197,7 @@ impl M2M100Model {
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
///
@ -365,7 +365,7 @@ impl M2M100ForConditionalGeneration {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.

View File

@ -12,6 +12,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `M2M100Tokenizer` using a `config.json` vocabulary and a `spiece.model` SentencePiece BPE model
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@ -579,7 +579,7 @@ impl MarianForConditionalGeneration {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.

View File

@@ -229,7 +229,7 @@ impl MBartClassificationHead {
/// It is made of the following blocks:
/// - `encoder`: `MBartEncoder` (transformer) made of a vector of encoding layers
/// - `decoder`: `MBartDecoder` (transformer) made of a vector of decoding layers with self attention and encoder cross-attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
/// - `pad_token_id`: padding token id
pub struct MBartModel {
pub(crate) encoder: MBartEncoder,
@@ -297,7 +297,7 @@ impl MBartModel {
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
///
@@ -470,7 +470,7 @@ impl MBartForConditionalGeneration {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
@@ -621,7 +621,7 @@ impl MBartForSequenceClassification {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.

View File

@@ -11,6 +11,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `MBart50Tokenizer` using a `spiece.model` SentencePiece model
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@@ -13,6 +13,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `BertTokenizer` using a `vocab.txt` vocabulary
//!
//! Pretrained models for a number of language pairs are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@@ -10,6 +10,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `GptTokenizer` using a `vocab.txt` vocabulary and `merges.txt` 2-gram merges
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@@ -11,6 +11,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `PegasusTokenizer` using a `spiece.model` vocabulary and unigram model.
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@@ -87,7 +87,7 @@ fn _shift_tokens_right(
/// It is made of the following blocks:
/// - `encoder`: `PegasusEncoder` (transformer) made of a vector of encoding layers
/// - `decoder`: `PegasusDecoder` (transformer) made of a vector of decoding layers with self attention and encoder cross-attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
pub struct PegasusModel {
pub(crate) encoder: PegasusEncoder,
decoder: PegasusDecoder,
@@ -152,7 +152,7 @@ impl PegasusModel {
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
///
@@ -322,7 +322,7 @@ impl PegasusForConditionalGeneration {
/// * `input_ids` - Optional input tensor of shape (*batch size*, *source_sequence_length*). Must be provided when not running in generation mode
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.

View File

@@ -210,6 +210,7 @@ impl ProphetNetEncoder {
}
}
#[allow(dead_code)]
/// Container for the ProphetNet encoder output.
pub struct ProphetNetEncoderOutput {
/// Last hidden states from the model

View File

@@ -224,7 +224,7 @@ impl ProphetNetModel {
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_hidden_states` - Optional tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) corresponding to pre-calculated encoder hidden states (useful for conditional generation)
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `old_layer_states` - Optional Vector `Option<Vec<Option<&LayerState>, Option<&LayerState>>>` of length *n_layer* containing tuples with the past keys and values for both the self attention and the encoder cross attention of each layer of the decoder.
/// * `decoder_input_embeds` - Optional input tensor of shape (*batch size*, *target_sequence_length*, *embeddings dimension*). This or `decoder_input_ids` must be provided.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
@@ -431,7 +431,7 @@ impl ProphetNetForConditionalGeneration {
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). Must be provided when running in generation mode (e.g. initialized with a BOS token)
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `encoder_hidden_states` - Optional tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) corresponding to pre-calculated encoder hidden states (useful for conditional generation)
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `old_layer_states` - Optional Vector `Option<Vec<Option<&LayerState>, Option<&LayerState>>>` of length *n_layer* containing tuples with the past keys and values for both the self attention and the encoder cross attention of each layer of the decoder.
/// * `decoder_input_embeds` - Optional input tensor of shape (*batch size*, *target_sequence_length*, *embeddings dimension*). This or `decoder_input_ids` must be provided.
/// * `train` - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.

View File

@@ -143,6 +143,7 @@ impl ChunkReformerFeedForward {
}
}
#[allow(dead_code)]
pub struct ReformerLayerOutput {
pub attention_output: Tensor,
pub hidden_states: Tensor,

View File

@@ -11,6 +11,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `ReformerTokenizer` using a `spiece.model` BPE model
//!
//! Pretrained models on "Crime and Punishment" (Dostoevsky) are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@@ -207,6 +207,7 @@ impl ReformerLMHead {
}
}
#[allow(dead_code)]
pub struct PaddedReformerInput {
pub input_ids: Option<Tensor>,
pub input_embeds: Option<Tensor>,
@@ -220,7 +221,7 @@ pub struct PaddedReformerInput {
/// It is made of the following blocks:
/// - `embeddings`: `ReformerEmbeddings` Reformer embeddings, combining word and position embeddings
/// - `encoder`: `ReformerEncoder` (transformer) made of a vector of Reformer layer with local or LSH attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
/// - `least_common_mult_chunk_length`: least common chunk length for all attention layers
/// - `min_chunk_length`: minimum chunk length for all attention layers
/// - `pad_token_id`: padding token id used to pad to chunk length multiple if input is long enough to be chunked.

View File

@@ -15,6 +15,7 @@
//! - Configuration file expected to have a structure following the [Transformers library](https://github.com/huggingface/transformers)
//! - Model weights are expected to have a structure and parameter names following the [Transformers library](https://github.com/huggingface/transformers). A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format.
//! - `RobertaTokenizer` using a `vocab.txt` vocabulary and `merges.txt` 2-gram merges
//!
//! Pretrained models are available and can be downloaded using RemoteResources.
//!
//! ```no_run

View File

@@ -541,6 +541,7 @@ impl T5Stack {
}
}
#[allow(dead_code)]
pub struct T5BlockOutput {
pub hidden_states: Tensor,
pub self_attention_weights: Option<Tensor>,

View File

@@ -237,7 +237,7 @@ impl Default for T5Config {
/// It is made of the following blocks:
/// - `encoder`: `T5Stack` (transformer) made of a vector of encoding layers
/// - `decoder`: `T5Stack` (transformer) made of a vector of decoding layers with self attention and encoder cross-attention.
/// caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
/// - `embeddings`: `nn::Embedding` Shared embeddings for the encoder and decoder.
pub struct T5Model {
pub(crate) encoder: T5Stack,
@@ -312,7 +312,7 @@ impl T5Model {
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). This or `decoder_input_embeds` must be provided.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `input_embeds` - Optional input tensor of shape (*batch size*, *source_sequence_length*, *embeddings dimension*). This or `input_ids` must be provided.
/// * `decoder_input_embeds` - Optional input tensor of shape (*batch size*, *target_sequence_length*, *embeddings dimension*). This or `decoder_input_ids` must be provided.
@@ -509,7 +509,7 @@ impl T5ForConditionalGeneration {
/// * `attention_mask` - Optional attention mask of shape (*batch size*, *source_sequence_length*) for the encoder positions. Positions with a mask with value 0 will be masked.
/// * `decoder_input_ids` - Optional input tensor of shape (*batch size*, *target_sequence_length*). This or `decoder_input_embeds` must be provided.
/// * `encoder_outputs` - Optional tuple made of a tensor of shape (*batch size*, *source_sequence_length*, *encoder_hidden_dim*) and optional vectors of tensors of length *num_encoder_layers* with shape (*batch size*, *source_sequence_length*, *hidden_size*).
/// These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
/// * `decoder_attention_mask` - Optional attention mask of shape (*batch size*, *target_sequence_length*) for the decoder positions. Positions with a mask with value 0 will be masked.
/// * `input_embeds` - Optional input tensor of shape (*batch size*, *source_sequence_length*, *embeddings dimension*). This or `input_ids` must be provided.
/// * `decoder_input_embeds` - Optional input tensor of shape (*batch size*, *target_sequence_length*, *embeddings dimension*). This or `decoder_input_ids` must be provided.

View File

@@ -421,6 +421,7 @@ impl Conversation {
/// # Arguments
/// - texts: sequence of strings, alternating between past user inputs and past generated responses.
/// - ids: sequence of sequence of ids, alternating between past user inputs and past generated responses.
///
/// These can be generated via a `ConversationModel`'s `encode_prompts`.
///
/// # Example:

View File

@@ -31,12 +31,13 @@ use crate::pipelines::sentence_embeddings::{
use crate::{Config, RustBertError};
use regex::Regex;
use rust_tokenizers::Offset;
use serde::{Deserialize, Serialize};
use std::borrow::Cow;
use std::cmp::min;
use std::collections::{HashMap, HashSet};
/// # Keyword generated by a `KeywordExtractionModel`
#[derive(Debug, Clone)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Keyword {
/// String representation of the keyword
pub text: String,

View File

@@ -1,3 +1,4 @@
#[allow(clippy::doc_lazy_continuation)]
/// BSD 3-Clause License
///
/// Copyright (c) 2007-2022 The scikit-learn developers.

View File

@@ -23,7 +23,7 @@
//! All resources for this model can be downloaded using the Python utility script included in this repository.
//! 1. Set up a Python virtual environment and install dependencies (in ./requirements.txt)
//! 2. Run the conversion script python /utils/download-dependencies_bert_ner.py.
//! The dependencies will be downloaded to the user's home directory, under ~/rustbert/bert-ner
//!
//! The example below illustrates how to run the model for the default English NER model.
//! ```no_run

View File

@@ -7,7 +7,7 @@
//! installation is to use dynamic linking by pointing to an existing library location:
//! - Use the `load-dynamic` cargo feature for `ort`
//! - set the `ORT_DYLIB_PATH` to point to the location of downloaded onnxruntime library (`onnxruntime.dll`/`libonnxruntime.so`/`libonnxruntime.dylib`
//! depending on the operating system). These can be downloaded from the [release page](https://github.com/microsoft/onnxruntime/releases) of the onnxruntime project
//!
//! For troubleshooting issues when using an ONNX model, it is recommended to add the `tracing-subscriber = { version = "0.3", default-features = false, features = [ "env-filter", "fmt" ] }`
//! dependency, and use the `tracing_subscriber::fmt::init();` instruction in the `main` binary.

View File

@@ -309,7 +309,7 @@ impl Config for SentenceEmbeddingsModulesConfig {}
impl SentenceEmbeddingsModulesConfig {
pub fn validate(self) -> Result<Self, RustBertError> {
match self.get(0) {
match self.first() {
Some(SentenceEmbeddingsModuleConfig {
module_type: SentenceEmbeddingsModuleType::Transformer,
..
@@ -347,7 +347,7 @@ impl SentenceEmbeddingsModulesConfig {
}
pub fn transformer_module(&self) -> &SentenceEmbeddingsModuleConfig {
self.get(0).as_ref().unwrap()
self.first().as_ref().unwrap()
}
pub fn pooling_module(&self) -> &SentenceEmbeddingsModuleConfig {

View File

@@ -25,8 +25,8 @@
//! Two APIs exist to build text generation models:
//! - `TextGenerationModel` is a high-level module that exposes text generation capabilities with a set of reasonable defaults
//! - the `LanguageGenerator` trait exposes lower-level text generation capabilities allowing the user to provide additional
//! generation options when building the model (via `GenerateConfig`) and at each query (via `GenerateOptions`). Please check the
//! [`generation_utils` module](../generation_utils/index.html) for more details
//!
//!
//! Customized text generation models can be loaded by overwriting the resources in the configuration.

View File

@@ -30,12 +30,12 @@ enum ModelSize {
/// The logic for selecting the most appropriate model is as follows:
/// - If not specified, the model will be executed on a CUDA device if available, otherwise on the CPU
/// - If the model type is specified (e.g. `Marian`), a model with this architecture will be created. The compatibility of the model
/// with the source and target languages will be verified, and the builder will error if the settings provided are not supported.
/// - If the model size is specified, a model of the corresponding size class (computational budget) will be created. The compatibility of the model
/// with the source and target languages will be verified, and the builder will error if the settings provided are not supported.
/// - If no source or target languages are provided, a multilingual M2M100 model will be returned
/// - If no model type is provided, an average-sized model (Marian) will be returned if a pretrained model exists that covers the requested source/target languages.
/// Otherwise a M2M100 multi-lingual model will be returned.
///
/// The options for the builder are provided with dedicated "builder function", the call to `create_model()` creates a model
/// from the builder.
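The fallback order described above can be sketched as a plain decision function; `Choice` and `select` are hypothetical names for illustration, not part of rust-bert's API:

```rust
// Hedged sketch of the documented model-selection order; the real builder
// also verifies language compatibility and may error instead of falling back.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Choice {
    Marian,
    M2M100,
}

fn select(requested: Option<Choice>, langs_given: bool, marian_covers: bool) -> Choice {
    match requested {
        // An explicitly requested model type wins (compatibility checked separately).
        Some(t) => t,
        // No source/target languages: multilingual M2M100.
        None if !langs_given => Choice::M2M100,
        // A pretrained Marian pair covers the requested languages: prefer it.
        None if marian_covers => Choice::Marian,
        // Otherwise fall back to the multilingual model.
        None => Choice::M2M100,
    }
}

fn main() {
    assert_eq!(select(None, false, false), Choice::M2M100);
    assert_eq!(select(None, true, true), Choice::Marian);
    assert_eq!(select(None, true, false), Choice::M2M100);
    assert_eq!(select(Some(Choice::M2M100), true, true), Choice::M2M100);
    println!("ok");
}
```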

View File

@@ -7,7 +7,6 @@ use rust_bert::resources::{load_weights, RemoteResource, ResourceProvider};
use rust_bert::Config;
use rust_tokenizers::tokenizer::{Gpt2Tokenizer, Tokenizer};
use rust_tokenizers::vocab::Vocab;
use std::convert::TryFrom;
use tch::{nn, Device, Kind, Tensor};
/// Equivalent Python code:
@@ -107,7 +106,7 @@ fn gpt_j_correctness() -> anyhow::Result<()> {
Tensor::from_slice(
&input
.iter()
.map(|&e| i64::try_from(e != pad_token).unwrap())
.map(|&e| i64::from(e != pad_token))
.collect::<Vec<_>>(),
)
.to(device)
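The `i64::try_from(...).unwrap()` → `i64::from(...)` change works because `From<bool>` is implemented for `i64` and is infallible, so the `unwrap()` disappears. A std-only check of the mask construction (the pad token id here is an assumed value for illustration, and the tch tensor types are left out):

```rust
fn main() {
    // bool -> i64 is infallible: true maps to 1, false to 0.
    assert_eq!(i64::from(true), 1);
    assert_eq!(i64::from(false), 0);

    // Attention-mask construction as in the test above, minus the tch types.
    let pad_token = 50256i64; // assumed GPT-2 style pad/eos id
    let input = [15496i64, 11, 50256, 50256];
    let mask: Vec<i64> = input.iter().map(|&e| i64::from(e != pad_token)).collect();
    assert_eq!(mask, vec![1, 1, 0, 0]);
    println!("{mask:?}");
}
```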

View File

@@ -50,13 +50,20 @@ import sys
import zipfile
from pathlib import Path
from typing import Dict
import os
import numpy as np
import torch
from numpy.lib.format import write_array
from numpy.lib.npyio import zipfile_factory
# from numpy.lib.npyio import zipfile_factory
from torch import Tensor
def zipfile_factory(file, *args, **kwargs):
    if not hasattr(file, 'read'):
        file = os.fspath(file)
    import zipfile
    kwargs['allowZip64'] = True
    kwargs['compresslevel'] = 4
    return zipfile.ZipFile(file, *args, **kwargs)
def get_bf16_repr(input_tensor: torch.Tensor) -> np.ndarray:
"""Convert a bfloat16 tensor to an equivalent byte representation in Numpy.
@@ -125,6 +132,12 @@ if __name__ == "__main__":
        help="Use this flag to enable automatic download of the libtorch library.",
    )
    args = parser.parse_args()
    logger = logging.getLogger('convert_model')
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler('convert_model.log')
    fh.setLevel(logging.DEBUG)
    logger.addHandler(fh)
    target_folder = Path(args.source_file[0]).parent
    with zipfile_factory(
@@ -133,7 +146,7 @@ if __name__ == "__main__":
for source_file_or_pattern in args.source_file:
source_files = glob.glob(source_file_or_pattern)
for source_file in source_files:
logging.info(f"Processing source file {source_file}...")
logger.info(f"Processing source file {source_file}")
nps = {}
source_file = Path(source_file)
weights = torch.load(str(source_file), map_location="cpu")
@@ -168,11 +181,11 @@
)
else:
nps[k] = np.ascontiguousarray(tensor)
logging.info(
logger.info(
f"converted {k} - {str(sys.getsizeof(nps[k]))} bytes"
)
else:
logging.info(f"skipped non-tensor object: {k}")
logger.info(f"skipped non-tensor object: {k}")
append_to_zipf(nps, output_zipfile)
source = str(target_folder / "model.npz")

utils/requirements.txt Normal file
View File

@@ -0,0 +1,11 @@
filelock==3.15.3
fsspec==2024.6.0
Jinja2==3.1.4
MarkupSafe==2.1.5
mpmath==1.3.0
networkx==3.3
numpy==2.0.0
sympy==1.12.1
torch==2.3.1
typing_extensions==4.12.2
requests==2.32.0