fairseq/.gitmodules
Naman Goyal 3822db3300 adding model parallel multihead attention module (#1088)
Summary:
# Before submitting

- [ ] Was this discussed/approved via a GitHub issue? (not required for typos or doc improvements)
- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/master/CONTRIBUTING.md)?
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?

## What does this PR do?
Fixes # (issue).

## PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

## Did you have fun?
Make sure you had fun coding!
Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/1088

Reviewed By: myleott

Differential Revision: D20456534

fbshipit-source-id: e48afe41df210be26e0d5c1628c24cf7f9e81d4b
2020-03-17 17:50:51 -07:00

[submodule "fairseq/models/huggingface/transformers"]
path = fairseq/models/huggingface/transformers
url = https://github.com/myleott/transformers.git
branch = fairseq
[submodule "fairseq/model_parallel/megatron"]
path = fairseq/model_parallel/megatron
url = https://github.com/ngoyal2707/Megatron-LM
branch = fairseq
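
Because these entries point at the myleott/transformers and ngoyal2707/Megatron-LM forks, the submodules are not fetched by a plain clone. A minimal sketch of how they would typically be initialized with standard git commands (paths are taken from the entries above; the exact workflow may differ):

    # clone fairseq together with both submodules in one step
    git clone --recursive https://github.com/pytorch/fairseq.git

    # or, in an existing checkout, fetch the pinned submodule commits
    git submodule update --init fairseq/models/huggingface/transformers
    git submodule update --init fairseq/model_parallel/megatron

Note that the branch = fairseq lines only matter for git submodule update --remote, which moves each submodule to the tip of that branch instead of the commit recorded in the superproject.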