Updated readme and documentation (#147)

* Addition of collapsible sections in README

* Addition of setup section in Readme

* Updated documentation

* Reverted requirements update

* Updated download script for DistilBERT
guillaume-be 2021-05-11 22:00:51 +02:00 committed by GitHub
parent aa2b88f4f1
commit 14d41860d5
10 changed files with 699 additions and 125 deletions

README.md

@ -5,10 +5,37 @@
[![Documentation](https://docs.rs/rust-bert/badge.svg)](https://docs.rs/rust-bert)
![License](https://img.shields.io/crates/l/rust_bert.svg)
Rust-native state-of-the-art Natural Language Processing models and pipelines. Port of Hugging Face's [Transformers library](https://github.com/huggingface/transformers), using the [tch-rs](https://github.com/LaurentMazare/tch-rs) crate and pre-processing from [rust-tokenizers](https://github.com/guillaume-be/rust-tokenizers). Supports multi-threaded tokenization and GPU inference.
This repository exposes the model base architecture, task-specific heads (see below) and [ready-to-use pipelines](#ready-to-use-pipelines). [Benchmarks](#benchmarks) are available at the end of this document.
Get started with tasks including question answering, named entity recognition, translation, summarization, text generation, conversational agents and more in just a few lines of code:
```rust
let qa_model = QuestionAnsweringModel::new(Default::default())?;
let question = String::from("Where does Amy live ?");
let context = String::from("Amy lives in Amsterdam");
let answers = qa_model.predict(&[QaInput { question, context }], 1, 32);
```
Output:
```
[Answer { score: 0.9976, start: 13, end: 21, answer: "Amsterdam" }]
```
The tasks currently supported include:
- Translation
- Summarization
- Multi-turn dialogue
- Zero-shot classification
- Sentiment Analysis
- Named Entity Recognition
- Part of Speech tagging
- Question-Answering
- Language Generation.
<details>
<summary> <b>Expand to display the supported models/tasks matrix </b> </summary>
| |**Sequence classification**|**Token classification**|**Question answering**|**Text Generation**|**Summarization**|**Translation**|**Masked LM**|
:-----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:
@ -29,6 +56,40 @@ Reformer|✅| |✅|✅ | | |✅|
ProphetNet| | | |✅ |✅ | | |
Longformer|✅|✅|✅| | | |✅|
Pegasus| | | | |✅| | |
</details>
## Getting started
This library relies on the [tch](https://github.com/LaurentMazare/tch-rs) crate for bindings to the C++ Libtorch API.
The libtorch library is required and can be downloaded either automatically or manually. The following provides a reference on how to set up your environment
to use these bindings; please refer to the [tch](https://github.com/LaurentMazare/tch-rs) repository for detailed information or support.
Furthermore, this library relies on a cache folder for downloading pre-trained models.
This cache location defaults to `~/.cache/.rustbert`, but can be changed by setting the `RUSTBERT_CACHE` environment variable. Note that the language models used by this library are on the order of hundreds of MBs to several GBs.
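For example, to point the cache at a larger drive (the path below is illustrative):
```bash
export RUSTBERT_CACHE=/mnt/data/rustbert_cache
```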
### Manual installation (recommended)
1. Download `libtorch` from https://pytorch.org/get-started/locally/. This package requires `v1.8.1`: if this version is no longer available on the "get started" page,
the file should be accessible by modifying the target link, for example `https://download.pytorch.org/libtorch/cu111/libtorch-shared-with-deps-1.8.1%2Bcu111.zip` for a Linux version with CUDA11.
2. Extract the library to a location of your choice
3. Set the following environment variables
##### Linux:
```bash
export LIBTORCH=/path/to/libtorch
export LD_LIBRARY_PATH=${LIBTORCH}/lib:$LD_LIBRARY_PATH
```
##### Windows
```powershell
$Env:LIBTORCH = "X:\path\to\libtorch"
$Env:Path += ";X:\path\to\libtorch\lib"
```
### Automatic installation
Alternatively, you can let the `build` script automatically download the `libtorch` library for you.
The CPU version of libtorch will be downloaded by default. To download a CUDA version, please set the environment variable `TORCH_CUDA_VERSION` to `cu111`.
Note that the libtorch library is large (order of several GBs for the CUDA-enabled version) and the first build may therefore take several minutes to complete.
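As a minimal sketch, a CUDA-enabled automatic setup could look as follows (assuming a CUDA 11.1 target, matching the `cu111` value above):
```bash
# Request the CUDA 11.1 build of libtorch; the CPU build is the default
export TORCH_CUDA_VERSION=cu111
cargo build --release
```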
## Ready-to-use pipelines
@ -37,8 +98,9 @@ Based on Hugging Face's pipelines, ready to use end-to-end NLP pipelines are ava
**Disclaimer**
The contributors of this repository are not responsible for any output generated by third-party use of the pretrained systems proposed herein.
<details>
<summary> <b>1. Question Answering</b> </summary>
#### 1. Question Answering
Extractive question answering from a given question and context. DistilBERT model fine-tuned on SQuAD (Stanford Question Answering Dataset).
```rust
@ -52,10 +114,13 @@ Extractive question answering from a given question and context. DistilBERT mode
Output:
```
[Answer { score: 0.9976, start: 13, end: 21, answer: "Amsterdam" }]
```
</details>
&nbsp;
<details>
<summary> <b>2. Translation </b> </summary>
#### 2. Translation
Translation using the MarianMT architecture and pre-trained models from the Opus-MT team from Language Technology at the University of Helsinki.
Currently supported languages are:
- English <-> French
@ -86,8 +151,11 @@ Output:
```
Il s'agit d'une phrase à traduire
```
</details>
&nbsp;
<details>
<summary> <b>3. Summarization </b> </summary>
#### 3. Summarization
Abstractive summarization using a pretrained BART model.
```rust
@ -125,8 +193,11 @@ Output:
This is the first such discovery in a planet in its star's habitable zone.
The planet is not too hot and not too cold for liquid water to exist."
```
</details>
&nbsp;
<details>
<summary> <b>4. Dialogue Model </b> </summary>
#### 4. Dialogue Model
Conversation model based on Microsoft's [DialoGPT](https://github.com/microsoft/DialoGPT).
This pipeline allows the generation of single or multi-turn conversations between a human and a model.
The DialoGPT's page states that
@ -148,8 +219,11 @@ Example output:
```
"The Big Lebowski."
```
</details>
&nbsp;
<details>
<summary> <b>5. Natural Language Generation </b> </summary>
#### 5. Natural Language Generation
Generate language based on a prompt. GPT2 and GPT are available as base models.
Includes techniques such as beam search, top-k and nucleus sampling, temperature setting and repetition penalty.
Supports batch generation of sentences from several prompts. Sequences will be left-padded with the model's padding token if present, or with the unknown token otherwise.
@ -175,8 +249,11 @@ Example output:
"The cat was attacked by two stray dogs and was taken to a hospital. Two other cats were also injured in the attack and are being treated."
]
```
</details>
&nbsp;
<details>
<summary> <b>6. Zero-shot classification </b> </summary>
#### 6. Zero-shot classification
Performs zero-shot classification on input sentences with provided labels using a model fine-tuned for Natural Language Inference.
```rust
let sequence_classification_model = ZeroShotClassificationModel::new(Default::default())?;
@ -200,8 +277,11 @@ Output:
[ Label { "politics", score: 0.975 }, Label { "public health", score: 0.0818 }, Label { "economics", score: 0.852 }, Label { "sports", score: 0.001 } ],
]
```
</details>
&nbsp;
<details>
<summary> <b>7. Sentiment analysis </b> </summary>
#### 7. Sentiment analysis
Predicts the binary sentiment for a sentence. DistilBERT model fine-tuned on SST-2.
```rust
let sentiment_classifier = SentimentModel::new(Default::default())?;
@ -224,8 +304,11 @@ Output:
Sentiment { polarity: Positive, score: 0.9997248985164333 }
]
```
</details>
&nbsp;
<details>
<summary> <b>8. Named Entity Recognition </b> </summary>
#### 8. Named Entity Recognition
Extracts entities (Person, Location, Organization, Miscellaneous) from text. BERT cased large model fine-tuned on CoNLL03, contributed by the [MDZ Digital Library team at the Bavarian State Library](https://github.com/dbmdz).
Models are currently available for English, German, Spanish and Dutch.
```rust
@ -251,13 +334,16 @@ Output:
]
]
```
</details>
&nbsp;
<details>
<summary> <b>9. Part of Speech tagging </b> </summary>
#### 9. Part of Speech tagging
Extracts Part of Speech tags (Noun, Verb, Adjective...) from text.
```rust
let pos_model = POSModel::new(Default::default())?;
let input = ["My name is Bob];
let input = ["My name is Bob"];
let output = pos_model.predict(&input);
```
@ -270,6 +356,7 @@ Output:
Entity { word: "Bob", score: 0.7460, label: "NNP" }
]
```
</details>
## Benchmarks
@ -277,29 +364,18 @@ For simple pipelines (sequence classification, tokens classification, question a
For text generation tasks (summarization, translation, conversation, free text generation), significant benefits can be expected (up to 2 to 4 times faster processing depending on the input and application). The article [Accelerating text generation with Rust](https://guillaume-be.github.io/2020-11-21/generation_benchmarks) focuses on these text generation applications and provides more details on the performance comparison to Python.
## Loading pretrained and custom model weights
The base model and task-specific heads are also available for users looking to expose their own transformer-based models.
Examples on how to prepare the data using the native Rust tokenizers library are available in `./examples` for BERT, DistilBERT, RoBERTa, GPT, GPT2 and BART.
Note that when importing models from Pytorch, the parameter naming convention needs to be aligned with the Rust schema. Loading of the pre-trained weights will fail if any of the model parameter weights cannot be found in the weight files.
If this quality check is to be skipped, an alternative method `load_partial` can be invoked from the variables store.
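A minimal sketch of the two loading paths, assuming a model built against a `tch` variable store (the weight path is hypothetical):
```rust
use tch::{nn, Device};

fn load_weights() -> Result<(), tch::TchError> {
    let mut vs = nn::VarStore::new(Device::Cpu);
    // ... build the model against &vs.root() so the expected parameter names are registered ...

    // Strict loading: fails if any model parameter is missing from the weight file
    vs.load("path/to/model.ot")?;

    // Lenient alternative: skips the quality check and returns the names of the
    // variables missing from the weight file
    let _missing = vs.load_partial("path/to/model.ot")?;
    Ok(())
}
```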
Pretrained models are available on Hugging Face's [model hub](https://huggingface.co/models?filter=rust) and can be loaded using `RemoteResources` defined in this library.
A conversion utility script is included in `./utils` to convert Pytorch weights to a set of weights compatible with this library. This script requires Python and `torch` to be set up, and can be used as follows:
`python ./utils/convert_model.py path/to/pytorch_model.bin` where `path/to/pytorch_model.bin` is the location of the original Pytorch weights.
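Putting it together, a typical conversion run looks like the following sketch (the virtual environment name and weight path are illustrative):
```bash
# Set up a Python environment with the conversion dependencies
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Download pytorch_model.bin for the model of interest from the Hugging Face
# model hub, then convert it to a libtorch-compatible weight file
python ./utils/convert_model.py path/to/pytorch_model.bin
```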
## Citation
If you use `rust-bert` for your work, please cite [End-to-end NLP Pipelines in Rust](https://www.aclweb.org/anthology/2020.nlposs-1.4/):


@ -1,2 +1,2 @@
torch == 1.8.1
transformers == 4.2.2


@ -178,10 +178,10 @@ impl GptNeoModel {
/// # Example
///
/// ```no_run
/// use rust_bert::gpt_neo::{GptNeoConfig, GptNeoModel};
/// use rust_bert::Config;
/// use std::path::Path;
/// use tch::{nn, Device};
///
/// let config_path = Path::new("path/to/config.json");
/// let device = Device::Cpu;
@ -291,7 +291,7 @@ impl GptNeoModel {
/// None,
/// None,
/// None,
/// false,
/// )
/// });
/// ```
@ -464,10 +464,10 @@ impl GptNeoForCausalLM {
/// # Example
///
/// ```no_run
/// use rust_bert::gpt_neo::{GptNeoConfig, GptNeoForCausalLM};
/// use rust_bert::Config;
/// use std::path::Path;
/// use tch::{nn, Device};
///
/// let config_path = Path::new("path/to/config.json");
/// let device = Device::Cpu;
@ -532,7 +532,7 @@ impl GptNeoForCausalLM {
/// None,
/// None,
/// None,
/// false,
/// )
/// });
/// ```


@ -16,10 +16,12 @@
//! - 2.7B parameters model (GptNeoModelResources::GPT_NEO_2_7B)
//!
//! ```no_run
//! use rust_bert::gpt_neo::{
//! GptNeoConfigResources, GptNeoMergesResources, GptNeoModelResources, GptNeoVocabResources,
//! };
//! use rust_bert::pipelines::common::ModelType;
//! use rust_bert::pipelines::text_generation::{TextGenerationConfig, TextGenerationModel};
//! use rust_bert::resources::{RemoteResource, Resource};
//! use tch::Device;
//!
//! fn main() -> anyhow::Result<()> {


@ -1,23 +1,9 @@
//! # Ready-to-use NLP pipelines and Transformer-based models
//!
//! Rust-native state-of-the-art Natural Language Processing models and pipelines. Port of Hugging Face's [Transformers library](https://github.com/huggingface/transformers), using the [tch-rs](https://github.com/LaurentMazare/tch-rs) crate and pre-processing from [rust-tokenizers](https://github.com/guillaume-be/rust-tokenizers). Supports multi-threaded tokenization and GPU inference.
//! This repository exposes the model base architecture, task-specific heads (see below) and [ready-to-use pipelines](#ready-to-use-pipelines). [Benchmarks](#benchmarks) are available at the end of this document.
//!
//! Get started with tasks including question answering, named entity recognition, translation, summarization, text generation, conversational agents and more in just a few lines of code:
//! ```no_run
//! use rust_bert::pipelines::question_answering::{QaInput, QuestionAnsweringModel};
//!
@ -30,8 +16,37 @@
//! # Ok(())
//! # }
//! ```
//!
//! Output:
//! ```no_run
//! # use rust_bert::pipelines::question_answering::Answer;
//! # let output =
//! [Answer {
//! score: 0.9976,
//! start: 13,
//! end: 21,
//! answer: String::from("Amsterdam"),
//! }]
//! # ;
//! ```
//!
//! The tasks currently supported include:
//! - Translation
//! - Summarization
//! - Multi-turn dialogue
//! - Zero-shot classification
//! - Sentiment Analysis
//! - Named Entity Recognition
//! - Part of Speech tagging
//! - Question-Answering
//! - Language Generation.
//!
//! More information on these can be found in the [`pipelines` module](./pipelines/index.html)
//! - Transformer models base architectures with customized heads. These allow loading pre-trained models for customized inference in Rust
//!
//! <details>
//! <summary> <b> Click to expand to display the supported models/tasks matrix </b> </summary>
//!
//! | |**Sequence classification**|**Token classification**|**Question answering**|**Text Generation**|**Summarization**|**Translation**|**Masked LM**|
//! :-----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:
//! DistilBERT|✅|✅|✅| | | |✅|
@ -51,21 +66,508 @@
//! ProphetNet| | | |✅ |✅ | | |
//! Longformer|✅|✅|✅| | | |✅|
//! Pegasus| | | | |✅| | |
//! </details>
//!
//! # Getting started
//!
//! This library relies on the [tch](https://github.com/LaurentMazare/tch-rs) crate for bindings to the C++ Libtorch API.
//! The libtorch library is required and can be downloaded either automatically or manually. The following provides a reference on how to set up your environment
//! to use these bindings; please refer to the [tch](https://github.com/LaurentMazare/tch-rs) repository for detailed information or support.
//!
//! Furthermore, this library relies on a cache folder for downloading pre-trained models.
//! This cache location defaults to `~/.cache/.rustbert`, but can be changed by setting the `RUSTBERT_CACHE` environment variable. Note that the language models used by this library are on the order of hundreds of MBs to several GBs.
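//!
//! For example, to point the cache at a larger drive (the path below is illustrative):
//! ```bash
//! export RUSTBERT_CACHE=/mnt/data/rustbert_cache
//! ```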
//!
//! ### Manual installation (recommended)
//!
//! 1. Download `libtorch` from https://pytorch.org/get-started/locally/. This package requires `v1.8.1`: if this version is no longer available on the "get started" page,
//! the file should be accessible by modifying the target link, for example `https://download.pytorch.org/libtorch/cu111/libtorch-shared-with-deps-1.8.1%2Bcu111.zip` for a Linux version with CUDA11.
//! 2. Extract the library to a location of your choice
//! 3. Set the following environment variables
//! ##### Linux:
//! ```bash
//! export LIBTORCH=/path/to/libtorch
//! export LD_LIBRARY_PATH=${LIBTORCH}/lib:$LD_LIBRARY_PATH
//! ```
//!
//! ##### Windows
//! ```powershell
//! $Env:LIBTORCH = "X:\path\to\libtorch"
//! $Env:Path += ";X:\path\to\libtorch\lib"
//! ```
//!
//! ### Automatic installation
//!
//! Alternatively, you can let the `build` script automatically download the `libtorch` library for you.
//! The CPU version of libtorch will be downloaded by default. To download a CUDA version, please set the environment variable `TORCH_CUDA_VERSION` to `cu111`.
//! Note that the libtorch library is large (order of several GBs for the CUDA-enabled version) and the first build may therefore take several minutes to complete.
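//!
//! As a minimal sketch, a CUDA-enabled automatic setup could look as follows (assuming a CUDA 11.1 target, matching the `cu111` value above):
//! ```bash
//! # Request the CUDA 11.1 build of libtorch; the CPU build is the default
//! export TORCH_CUDA_VERSION=cu111
//! cargo build --release
//! ```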
//!
//! # Ready-to-use pipelines
//!
//! Based on Hugging Face's pipelines, ready to use end-to-end NLP pipelines are available as part of this crate. More information on these can be found in the [`pipelines` module](./pipelines/index.html)
//! The following capabilities are currently available:
//!
//! **Disclaimer**
//! The contributors of this repository are not responsible for any output generated by third-party use of the pretrained systems proposed herein.
//!
//! <details>
//! <summary> <b>1. Question Answering</b> </summary>
//!
//! Extractive question answering from a given question and context. DistilBERT model fine-tuned on SQuAD (Stanford Question Answering Dataset).
//!
//! ```no_run
//! use rust_bert::pipelines::question_answering::{QaInput, QuestionAnsweringModel};
//! # fn main() -> anyhow::Result<()> {
//! let qa_model = QuestionAnsweringModel::new(Default::default())?;
//!
//! let question = String::from("Where does Amy live ?");
//! let context = String::from("Amy lives in Amsterdam");
//!
//! let answers = qa_model.predict(&[QaInput { question, context }], 1, 32);
//! # Ok(())
//! # }
//! ```
//!
//! Output: \
//! ```no_run
//! # use rust_bert::pipelines::question_answering::Answer;
//! # let output =
//! [Answer {
//! score: 0.9976,
//! start: 13,
//! end: 21,
//! answer: String::from("Amsterdam"),
//! }]
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>2. Translation </b> </summary>
//!
//! Translation using the MarianMT architecture and pre-trained models from the Opus-MT team from Language Technology at the University of Helsinki.
//! Currently supported languages are:
//! - English <-> French
//! - English <-> Spanish
//! - English <-> Portuguese
//! - English <-> Italian
//! - English <-> Catalan
//! - English <-> German
//! - English <-> Russian
//! - English <-> Chinese (Simplified)
//! - English <-> Chinese (Traditional)
//! - English <-> Dutch
//! - English <-> Swedish
//! - English <-> Arabic
//! - English <-> Hebrew
//! - English <-> Hindi
//! - French <-> German
//!
//! ```no_run
//! # fn main() -> anyhow::Result<()> {
//! # use rust_bert::pipelines::generation_utils::LanguageGenerator;
//! use rust_bert::pipelines::translation::{Language, TranslationConfig, TranslationModel};
//! use tch::Device;
//! let translation_config =
//! TranslationConfig::new(Language::EnglishToFrench, Device::cuda_if_available());
//! let mut model = TranslationModel::new(translation_config)?;
//!
//! let input = ["This is a sentence to be translated"];
//!
//! let output = model.translate(&input);
//! # Ok(())
//! # }
//! ```
//!
//! Output: \
//! ```no_run
//! # let output =
//! "Il s'agit d'une phrase à traduire"
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>3. Summarization </b> </summary>
//!
//! Abstractive summarization using a pretrained BART model.
//!
//! ```no_run
//! # fn main() -> anyhow::Result<()> {
//! # use rust_bert::pipelines::generation_utils::LanguageGenerator;
//! use rust_bert::pipelines::summarization::SummarizationModel;
//!
//! let mut model = SummarizationModel::new(Default::default())?;
//!
//! let input = ["In findings published Tuesday in Cornell University's arXiv by a team of scientists
//! from the University of Montreal and a separate report published Wednesday in Nature Astronomy by a team
//! from University College London (UCL), the presence of water vapour was confirmed in the atmosphere of K2-18b,
//! a planet circling a star in the constellation Leo. This is the first such discovery in a planet in its star's
//! habitable zone — not too hot and not too cold for liquid water to exist. The Montreal team, led by Björn Benneke,
//! used data from the NASA's Hubble telescope to assess changes in the light coming from K2-18b's star as the planet
//! passed between it and Earth. They found that certain wavelengths of light, which are usually absorbed by water,
//! weakened when the planet was in the way, indicating not only does K2-18b have an atmosphere, but the atmosphere
//! contains water in vapour form. The team from UCL then analyzed the Montreal team's data using their own software
//! and confirmed their conclusion. This was not the first time scientists have found signs of water on an exoplanet,
//! but previous discoveries were made on planets with high temperatures or other pronounced differences from Earth.
//! \"This is the first potentially habitable planet where the temperature is right and where we now know there is water,\"
//! said UCL astronomer Angelos Tsiaras. \"It's the best candidate for habitability right now.\" \"It's a good sign\",
//! said Ryan Cloutier of the Harvard-Smithsonian Center for Astrophysics, who was not one of either study's authors.
//! \"Overall,\" he continued, \"the presence of water in its atmosphere certainly improves the prospect of K2-18b being
//! a potentially habitable planet, but further observations will be required to say for sure. \"
//! K2-18b was first identified in 2015 by the Kepler space telescope. It is about 110 light-years from Earth and larger
//! but less dense. Its star, a red dwarf, is cooler than the Sun, but the planet's orbit is much closer, such that a year
//! on K2-18b lasts 33 Earth days. According to The Guardian, astronomers were optimistic that NASA's James Webb space
//! telescope — scheduled for launch in 2021 — and the European Space Agency's 2028 ARIEL program, could reveal more
//! about exoplanets like K2-18b."];
//!
//! let output = model.summarize(&input);
//! # Ok(())
//! # }
//! ```
//! (example from: [WikiNews](https://en.wikinews.org/wiki/Astronomers_find_water_vapour_in_atmosphere_of_exoplanet_K2-18b))
//!
//! Example output: \
//! ```no_run
//! # let output =
//! "Scientists have found water vapour on K2-18b, a planet 110 light-years from Earth.
//! This is the first such discovery in a planet in its star's habitable zone.
//! The planet is not too hot and not too cold for liquid water to exist."
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>4. Dialogue Model </b> </summary>
//!
//! Conversation model based on Microsoft's [DialoGPT](https://github.com/microsoft/DialoGPT).
//! This pipeline allows the generation of single or multi-turn conversations between a human and a model.
//! The DialoGPT's page states that
//! > The human evaluation results indicate that the response generated from DialoGPT is comparable to human response quality
//! > under a single-turn conversation Turing test. ([DialoGPT repository](https://github.com/microsoft/DialoGPT))
//!
//! The model uses a `ConversationManager` to keep track of active conversations and generate responses to them.
//!
//! ```no_run
//! # fn main() -> anyhow::Result<()> {
//! use rust_bert::pipelines::conversation::{ConversationManager, ConversationModel};
//! let conversation_model = ConversationModel::new(Default::default())?;
//! let mut conversation_manager = ConversationManager::new();
//!
//! let conversation_id =
//! conversation_manager.create("Going to the movies tonight - any suggestions?");
//! let output = conversation_model.generate_responses(&mut conversation_manager);
//! # Ok(())
//! # }
//! ```
//! Example output: \
//! ```no_run
//! # let output =
//! "The Big Lebowski."
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>5. Natural Language Generation </b> </summary>
//!
//! Generate language based on a prompt. GPT2 and GPT are available as base models.
//! Includes techniques such as beam search, top-k and nucleus sampling, temperature setting and repetition penalty.
//! Supports batch generation of sentences from several prompts. Sequences will be left-padded with the model's padding token if present, or with the unknown token otherwise.
//! This may impact the results; it is recommended to submit prompts of similar length for best results
//!
//! ```no_run
//! # fn main() -> anyhow::Result<()> {
//! use rust_bert::pipelines::text_generation::TextGenerationModel;
//! use rust_bert::pipelines::common::ModelType;
//! let mut model = TextGenerationModel::new(Default::default())?;
//! let input_context_1 = "The dog";
//! let input_context_2 = "The cat was";
//!
//! let prefix = None; // Optional prefix to prepend to the prompts; it is excluded from the generated output
//!
//! let output = model.generate(&[input_context_1, input_context_2], prefix);
//! # Ok(())
//! # }
//! ```
//! Example output: \
//! ```no_run
//! # let output =
//! [
//! "The dog's owners, however, did not want to be named. According to the lawsuit, the animal's owner, a 29-year",
//! "The dog has always been part of the family. \"He was always going to be my dog and he was always looking out for me",
//! "The dog has been able to stay in the home for more than three months now. \"It's a very good dog. She's",
//! "The cat was discovered earlier this month in the home of a relative of the deceased. The cat\'s owner, who wished to remain anonymous,",
//! "The cat was pulled from the street by two-year-old Jazmine.\"I didn't know what to do,\" she said",
//! "The cat was attacked by two stray dogs and was taken to a hospital. Two other cats were also injured in the attack and are being treated."
//! ]
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>6. Zero-shot classification </b> </summary>
//!
//! Performs zero-shot classification on input sentences with provided labels using a model fine-tuned for Natural Language Inference.
//! ```no_run
//! # use rust_bert::pipelines::zero_shot_classification::ZeroShotClassificationModel;
//! # fn main() -> anyhow::Result<()> {
//! let sequence_classification_model = ZeroShotClassificationModel::new(Default::default())?;
//! let input_sentence = "Who are you voting for in 2020?";
//! let input_sequence_2 = "The prime minister has announced a stimulus package which was widely criticized by the opposition.";
//! let candidate_labels = &["politics", "public health", "economics", "sports"];
//! let output = sequence_classification_model.predict_multilabel(
//! &[input_sentence, input_sequence_2],
//! candidate_labels,
//! None,
//! 128,
//! );
//! # Ok(())
//! # }
//! ```
//!
//! outputs:
//! ```no_run
//! # use rust_bert::pipelines::sequence_classification::Label;
//! let output = [
//! [
//! Label {
//! text: "politics".to_string(),
//! score: 0.972,
//! id: 0,
//! sentence: 0,
//! },
//! Label {
//! text: "public health".to_string(),
//! score: 0.032,
//! id: 1,
//! sentence: 0,
//! },
//! Label {
//! text: "economics".to_string(),
//! score: 0.006,
//! id: 2,
//! sentence: 0,
//! },
//! Label {
//! text: "sports".to_string(),
//! score: 0.004,
//! id: 3,
//! sentence: 0,
//! },
//! ],
//! [
//! Label {
//! text: "politics".to_string(),
//! score: 0.975,
//! id: 0,
//! sentence: 1,
//! },
//! Label {
//! text: "economics".to_string(),
//! score: 0.852,
//! id: 2,
//! sentence: 1,
//! },
//! Label {
//! text: "public health".to_string(),
//! score: 0.0818,
//! id: 1,
//! sentence: 1,
//! },
//! Label {
//! text: "sports".to_string(),
//! score: 0.001,
//! id: 3,
//! sentence: 1,
//! },
//! ],
//! ]
//! .to_vec();
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>7. Sentiment analysis </b> </summary>
//!
//! Predicts the binary sentiment for a sentence. DistilBERT model fine-tuned on SST-2.
//! ```no_run
//! use rust_bert::pipelines::sentiment::SentimentModel;
//! # fn main() -> anyhow::Result<()> {
//! let sentiment_model = SentimentModel::new(Default::default())?;
//! let input = [
//! "Probably my all-time favorite movie, a story of selflessness, sacrifice and dedication to a noble cause, but it's not preachy or boring.",
//! "This film tried to be too many things all at once: stinging political satire, Hollywood blockbuster, sappy romantic comedy, family values promo...",
//! "If you like original gut wrenching laughter you will like this movie. If you are young or old then you will love this movie, hell even my mom liked it.",
//! ];
//! let output = sentiment_model.predict(&input);
//! # Ok(())
//! # }
//! ```
//! (Example courtesy of [IMDb](http://www.imdb.com))
//!
//! Output: \
//! ```no_run
//! # use rust_bert::pipelines::sentiment::Sentiment;
//! # use rust_bert::pipelines::sentiment::SentimentPolarity::{Positive, Negative};
//! # let output =
//! [
//! Sentiment {
//! polarity: Positive,
//! score: 0.998,
//! },
//! Sentiment {
//! polarity: Negative,
//! score: 0.992,
//! },
//! Sentiment {
//! polarity: Positive,
//! score: 0.999,
//! },
//! ]
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>8. Named Entity Recognition </b> </summary>
//!
//! Extracts entities (Person, Location, Organization, Miscellaneous) from text. BERT cased large model fine-tuned on CoNLL03, contributed by the [MDZ Digital Library team at the Bavarian State Library](https://github.com/dbmdz).
//! Models are currently available for English, German, Spanish and Dutch.
//! ```no_run
//! use rust_bert::pipelines::ner::NERModel;
//! # fn main() -> anyhow::Result<()> {
//! let ner_model = NERModel::new(Default::default())?;
//! let input = [
//! "My name is Amy. I live in Paris.",
//! "Paris is a city in France.",
//! ];
//! let output = ner_model.predict(&input);
//! # Ok(())
//! # }
//! ```
//! Output: \
//! ```no_run
//! # use rust_bert::pipelines::ner::Entity;
//! # let output =
//! [
//! [
//! Entity {
//! word: String::from("Amy"),
//! score: 0.9986,
//! label: String::from("I-PER"),
//! },
//! Entity {
//! word: String::from("Paris"),
//! score: 0.9985,
//! label: String::from("I-LOC"),
//! },
//! ],
//! [
//! Entity {
//! word: String::from("Paris"),
//! score: 0.9988,
//! label: String::from("I-LOC"),
//! },
//! Entity {
//! word: String::from("France"),
//! score: 0.9993,
//! label: String::from("I-LOC"),
//! },
//! ],
//! ]
//! # ;
//! ```
//!
//! </details>
//! &nbsp;
//! <details>
//! <summary> <b>9. Part of Speech tagging </b> </summary>
//!
//! Extracts Part of Speech tags (Noun, Verb, Adjective...) from text.
//! ```no_run
//! use rust_bert::pipelines::pos_tagging::POSModel;
//! # fn main() -> anyhow::Result<()> {
//! let pos_model = POSModel::new(Default::default())?;
//! let input = ["My name is Bob"];
//! let output = pos_model.predict(&input);
//! # Ok(())
//! # }
//! ```
//! Output: \
//! ```no_run
//! # use rust_bert::pipelines::pos_tagging::POSTag;
//! # let output =
//! [
//! POSTag {
//! word: String::from("My"),
//! score: 0.1560,
//! label: String::from("PRP"),
//! },
//! POSTag {
//! word: String::from("name"),
//! score: 0.6565,
//! label: String::from("NN"),
//! },
//! POSTag {
//! word: String::from("is"),
//! score: 0.3697,
//! label: String::from("VBZ"),
//! },
//! POSTag {
//! word: String::from("Bob"),
//! score: 0.7460,
//! label: String::from("NNP"),
//! },
//! ]
//! # ;
//! ```
//!
//! </details>
//!
//! ## Benchmarks
//!
//! For simple pipelines (sequence classification, token classification, question answering) the performance between Python and Rust is expected to be comparable. This is because the most expensive part of these pipelines is the language model itself, which shares a common implementation in the Torch backend. The [End-to-end NLP Pipelines in Rust](https://www.aclweb.org/anthology/2020.nlposs-1.4/) paper provides a benchmark section covering all pipelines.
//!
//! For text generation tasks (summarization, translation, conversation, free text generation), significant benefits can be expected (up to 2 to 4 times faster processing depending on the input and application). The article [Accelerating text generation with Rust](https://guillaume-be.github.io/2020-11-21/generation_benchmarks) focuses on these text generation applications and provides more details on the performance comparison to Python.
//!
//! ## Loading pretrained and custom model weights
//!
//! The base model and task-specific heads are also available for users looking to expose their own transformer-based models.
//! Examples on how to prepare the data using the native Rust tokenizers library are available in `./examples` for BERT, DistilBERT, RoBERTa, GPT, GPT2 and BART.
//! Note that when importing models from Pytorch, the parameter naming convention needs to be aligned with the Rust schema. Loading of the pre-trained weights will fail if any of the model parameter weights cannot be found in the weight files.
//! If this quality check is to be skipped, an alternative method `load_partial` can be invoked from the variables store.
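//!
//! A minimal sketch of the two loading paths, assuming a model built against a `tch` variable store (the weight path is hypothetical):
//! ```no_run
//! # fn main() -> Result<(), tch::TchError> {
//! use tch::{nn, Device};
//! let mut vs = nn::VarStore::new(Device::Cpu);
//! // ... build the model against &vs.root() so the expected parameter names are registered ...
//! // Strict loading: fails if any model parameter is missing from the weight file
//! vs.load("path/to/model.ot")?;
//! // Lenient alternative: skips the quality check and returns the missing variable names
//! let _missing = vs.load_partial("path/to/model.ot")?;
//! # Ok(())
//! # }
//! ```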
//!
//! Pretrained models are available on Hugging Face's [model hub](https://huggingface.co/models?filter=rust) and can be loaded using `RemoteResources` defined in this library.
//! A conversion utility script is included in `./utils` to convert Pytorch weights to a set of weights compatible with this library. This script requires Python and `torch` to be set up, and can be used as follows:
//! `python ./utils/convert_model.py path/to/pytorch_model.bin` where `path/to/pytorch_model.bin` is the location of the original Pytorch weights.
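//!
//! Putting it together, a typical conversion run looks like the following sketch (the virtual environment name and weight path are illustrative):
//! ```bash
//! # Set up a Python environment with the conversion dependencies
//! python -m venv .venv && source .venv/bin/activate
//! pip install -r requirements.txt
//! # Download pytorch_model.bin from the model hub, then convert it
//! python ./utils/convert_model.py path/to/pytorch_model.bin
//! ```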
//!
//!
//! ## Citation
//!
//! If you use `rust-bert` for your work, please cite [End-to-end NLP Pipelines in Rust](https://www.aclweb.org/anthology/2020.nlposs-1.4/):
//! ```bibtex
//! @inproceedings{becquin-2020-end,
//! title = "End-to-end {NLP} Pipelines in Rust",
//! author = "Becquin, Guillaume",
//! booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
//! year = "2020",
//! publisher = "Association for Computational Linguistics",
//! url = "https://www.aclweb.org/anthology/2020.nlposs-1.4",
//! pages = "20--25",
//! }
//! ```
//!
//! ## Acknowledgements
//!
//! Thank you to [Hugging Face](https://huggingface.co) for hosting a set of weights compatible with this Rust library.
//! The list of ready-to-use pretrained models is listed at [https://huggingface.co/models?filter=rust](https://huggingface.co/models?filter=rust).
pub mod albert;
pub mod bart;


@ -38,10 +38,8 @@
//!
//! let device = Device::cuda_if_available();
//! let mut vs = nn::VarStore::new(device);
//! let tokenizer: PegasusTokenizer =
//! PegasusTokenizer::from_file(vocab_path.to_str().unwrap(), false)?;
//! let config = PegasusConfig::from_file(config_path);
//! let pegasus_model = PegasusModel::new(&vs.root(), &config);
//! vs.load(weights_path)?;


@ -325,30 +325,30 @@
//! # use rust_bert::pipelines::ner::Entity;
//! # let output =
//! [
//! [
//! Entity {
//! word: String::from("Amy"),
//! score: 0.9986,
//! label: String::from("I-PER"),
//! },
//! Entity {
//! word: String::from("Paris"),
//! score: 0.9985,
//! label: String::from("I-LOC"),
//! },
//! ],
//! [
//! Entity {
//! word: String::from("Paris"),
//! score: 0.9988,
//! label: String::from("I-LOC"),
//! },
//! Entity {
//! word: String::from("France"),
//! score: 0.9993,
//! label: String::from("I-LOC"),
//! },
//! ],
//! ]
//! # ;
//! ```
@ -359,9 +359,7 @@
//! use rust_bert::pipelines::pos_tagging::POSModel;
//! # fn main() -> anyhow::Result<()> {
//! let pos_model = POSModel::new(Default::default())?;
//! let input = [
//! "My name is Bob",
//! ];
//! let input = ["My name is Bob"];
//! let output = pos_model.predict(&input);
//! # Ok(())
//! # }


@ -43,30 +43,30 @@
//! # use rust_bert::pipelines::ner::Entity;
//! # let output =
//! [
//! [
//! Entity {
//! word: String::from("Amy"),
//! score: 0.9986,
//! label: String::from("I-PER"),
//! },
//! Entity {
//! word: String::from("Paris"),
//! score: 0.9985,
//! label: String::from("I-LOC"),
//! },
//! ],
//! [
//! Entity {
//! word: String::from("Paris"),
//! score: 0.9988,
//! label: String::from("I-LOC"),
//! },
//! Entity {
//! word: String::from("France"),
//! score: 0.9993,
//! label: String::from("I-LOC"),
//! },
//! ],
//! ]
//! # ;
//! ```


@ -29,8 +29,7 @@
//! ```no_run
//! # use rust_bert::pipelines::pos_tagging::POSTag;
//! # let output =
//! [[
//! POSTag {
//! word: String::from("My"),
//! score: 0.2465,
@ -76,8 +75,7 @@
//! score: 1.0,
//! label: String::from("."),
//! },
//! ]]
//! # ;
//! ```
//!


@ -1,5 +1,5 @@
from transformers import DISTILBERT_PRETRAINED_CONFIG_ARCHIVE_MAP
from transformers.models.distilbert.tokenization_distilbert import PRETRAINED_VOCAB_FILES_MAP
from transformers.file_utils import get_from_cache, hf_bucket_url
from pathlib import Path
import shutil
@ -16,7 +16,7 @@ target_path = Path.home() / 'rustbert' / 'distilbert'
temp_config = get_from_cache(config_path)
temp_vocab = get_from_cache(vocab_path)
temp_weights = get_from_cache(hf_bucket_url(weights_path, filename="pytorch_model.bin", use_cdn=True))
temp_weights = get_from_cache(hf_bucket_url(weights_path, filename="pytorch_model.bin"))
os.makedirs(str(target_path), exist_ok=True)