
feat: complete Prettier setup

louistiti 2022-09-03 19:12:41 +08:00
parent ba9f3f22f6
commit 019c462bfa
No known key found for this signature in database
GPG Key ID: 7ECA3DD523793FE6
138 changed files with 3967 additions and 3269 deletions


@@ -1,23 +1,29 @@
 {
   "presets": [
-    ["@babel/preset-env", {
-      "targets": {
-        "node": "current"
-      }
-    }]
-  ],
-  "plugins": [
-    ["module-resolver", {
-      "alias": {
-        "@@": ".",
-        "@": "./server/src"
-      },
-      "compilerOptions": {
-        "paths": {
-          "@@": ".",
-          "@": "./server/src"
+    [
+      "@babel/preset-env",
+      {
+        "targets": {
+          "node": "current"
         }
       }
-    }]
+    ]
+  ],
+  "plugins": [
+    [
+      "module-resolver",
+      {
+        "alias": {
+          "@@": ".",
+          "@": "./server/src"
+        },
+        "compilerOptions": {
+          "paths": {
+            "@@": ".",
+            "@": "./server/src"
+          }
+        }
+      }
+    ]
   ]
 }
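The `module-resolver` aliases above are left unchanged by this commit (only the formatting differs): `@@` maps to the repository root and `@` to `./server/src`, so server code can use absolute-style imports. A small illustration of how such specifiers resolve; the paths in it are hypothetical and only show the mapping:

```js
// With the aliases above, import specifiers resolve like this:
//   '@@/some/file'   -> './some/file'              (repository root)
//   '@/some/module'  -> './server/src/some/module'
// Both paths are made up purely for the example.
import someModule from '@/some/module'
```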


@@ -1,7 +1,8 @@
 {
   "extends": [
     "eslint:recommended",
-    "plugin:@typescript-eslint/recommended"
+    "plugin:@typescript-eslint/recommended",
+    "prettier"
   ],
   "parser": "@typescript-eslint/parser",
   "env": {
@@ -14,15 +15,14 @@
   "globals": {
     "io": true
   },
-  "plugins": [
-    "@typescript-eslint"
-  ],
+  "plugins": ["@typescript-eslint"],
   "ignorePatterns": "*.spec.js",
   "rules": {
     "no-async-promise-executor": ["off"],
     "no-underscore-dangle": ["error", { "allowAfterThis": true }],
     "prefer-destructuring": ["error"],
     "comma-dangle": ["error", "never"],
-    "semi": ["error", "never"]
+    "semi": ["error", "never"],
+    "object-curly-spacing": ["error", "always"]
   }
 }
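The `"prettier"` entry added to `extends` is the config typically provided by `eslint-config-prettier`; it switches off the ESLint formatting rules that would conflict with Prettier so Prettier alone owns formatting. The Prettier configuration file itself is not shown in this excerpt, but a minimal `.prettierrc.json` consistent with the ESLint rules kept above (no semicolons, no trailing commas, spaces inside object braces) might look like the following sketch; treat the exact options as an assumption, not the repository's actual settings:

```json
{
  "semi": false,
  "trailingComma": "none",
  "bracketSpacing": true
}
```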


@@ -14,21 +14,21 @@ appearance, race, religion, or sexual identity and orientation.
Examples of behavior that contributes to creating a positive environment
include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or
advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic
address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities


@@ -19,10 +19,9 @@ Here are few examples about how you could help on Leon, by:
- [Improving the documentation](https://github.com/leon-ai/docs.getleon.ai) (translations, typos, better writing, etc.).
- [Sponsoring Leon](http://sponsor.getleon.ai).
## Pull Requests
**Working on your first Pull Request?** You can learn how from this _free_ series [How to Contribute to an Open Source Project on GitHub](https://egghead.io/courses/how-to-contribute-to-an-open-source-project-on-github).
- **Please first discuss** the change you wish to make via [issue](https://github.com/leon-ai/leon/issues),
email, or any other method with the owners of this repository before making a change.
@@ -32,18 +31,18 @@ Here are few examples about how you could help on Leon, by:
against the `master` branch**.
- Ensure your code **respect our coding standards** (cf. [.eslintrc.json](https://github.com/leon-ai/leon/blob/develop/.eslintrc.json)).
To do so, you can run:
```sh
npm run lint
```
- Make sure your **code passes the tests**. You can run the tests via the following command:
```sh
npm test
```
If you're adding new features to Leon, please include tests.
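Since this commit wires Prettier into the ESLint setup, formatting can also be verified directly with the Prettier CLI. The commands below are only an illustration of the standard CLI, not npm scripts defined in this excerpt:

```sh
# report files whose formatting differs from Prettier's output
npx prettier --check .

# or rewrite them in place
npx prettier --write .
```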
## Development Setup
@@ -106,19 +105,19 @@ The commit message guideline is adapted from the [AngularJS Git Commit Guideline
Types define which kind of changes you made to the project.
| Types | Description |
| -------- | -------------------------------------------------------------------------------------------------------- |
| BREAKING | Changes including breaking changes. |
| build | New build version. |
| chore | Changes to the build process or auxiliary tools such as changelog generation. No production code change. |
| ci | Changes related to continuous integration only (GitHub Actions, CircleCI, etc.). |
| docs | Documentation only changes. |
| feat | A new feature. |
| fix | A bug fix. |
| perf | A code change that improves performance. |
| refactor | A code change that neither fixes a bug nor adds a feature. |
| style | Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc.). |
| test | Adding missing or correcting existing tests. |
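For illustration, a commit subject built from this convention combines one of the types above with an optional scope (see Scopes below). The first example is this very commit's subject; the second is adapted from a server bug-fix entry in the project changelog:

```sh
git commit -m "feat: complete Prettier setup"
git commit -m "fix(server): correctly extract all spaCy entities"
```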
### Scopes
@@ -150,4 +149,4 @@ The focus is not only limited to the activity you see on GitHub but also a lot o
## Spread the Word
Use [#LeonAI](<https://twitter.com/search?f=live&q=%23LeonAI%20(from%3Agrenlouis%20OR%20from%3Alouistiti_fr)&src=typed_query>) if you tweet about Leon and/or mention [@grenlouis](https://twitter.com/grenlouis).


@@ -12,6 +12,7 @@ Please place an x (no spaces - [x]) in all [ ] that apply.
-->
### Documentation Is:
- [ ] Missing
- [ ] Needed
- [ ] Confusing


@@ -14,6 +14,7 @@ Please place an x (no spaces - [x]) in all [ ] that apply.
-->
### What type of change does this PR introduce?
- [ ] Bugfix
- [ ] Feature
- [ ] Refactor
@@ -21,10 +22,10 @@ Please place an x (no spaces - [x]) in all [ ] that apply.
- [ ] Not Sure?
### Does this PR introduce breaking changes?
- [ ] Yes
- [ ] No
### List any relevant issue numbers:
### Description:


@@ -1,327 +1,358 @@
# [1.0.0-beta.7](https://github.com/leon-ai/leon/compare/v1.0.0-beta.6...v1.0.0-beta.7) (2022-08-24) / A Much Better NLP
_Please [read this blog post](https://blog.getleon.ai/a-much-better-nlp-and-future-1-0-0-beta-7/) to know more about all the new features and the exciting future of Leon._
### BREAKING CHANGES
- remove legacy packages [07743657](https://github.com/leon-ai/leon/commit/07743657cd2954e7f850c08eea7c032c24b28a96)
### Features
- create new NLP skills resolvers model + NLP global resolvers model [602604e4](https://github.com/leon-ai/leon/commit/602604e43788c6b6be8c402d54fe54342d0cd5d6)
- better isolate skill resolvers from global resolvers + finish up Akinator skill [905d248e](https://github.com/leon-ai/leon/commit/905d248ebf7e84b1ccc74450520228aef9a8804a)
- transfer language from core to skills + support thematics on Akinator skill [b35a249b](https://github.com/leon-ai/leon/commit/b35a249bf68000d6708aaee4abc4cd97f5b80035)
- actions on slot level + akinator skill progress [7101b8b4](https://github.com/leon-ai/leon/commit/7101b8b4b828b49e009da2fcdac7c5ed2e48c8f8)
- add Cartesian sample training on resolvers + enum entities [6ed88a59](https://github.com/leon-ai/leon/commit/6ed88a5946c77b356e49fe8b9cbe890b8dd1f037)
- map skills resolvers intents [eb5ade76](https://github.com/leon-ai/leon/commit/eb5ade76844dd14f5d5a5c5eeb434eed70fe62f4)
- train skills resolvers and remap as per changes [82df0a3c](https://github.com/leon-ai/leon/commit/82df0a3c235fbd50ad0cfe12e23a51f777dcd658)
- achieve Cartesian training [a1e9011d](https://github.com/leon-ai/leon/commit/a1e9011d5db48ed8e9f49cef2d813ee7e2400ec2)
- introduce suggestions [dcddacca](https://github.com/leon-ai/leon/commit/dcddacca2956529de0aea8ff98e1e6f16104966a)
- communicate suggestions to the client [4b5a8835](https://github.com/leon-ai/leon/commit/4b5a883510fd4421a491f999cc21d8f7dd369a03)
- shared skills memory [795acc5b](https://github.com/leon-ai/leon/commit/795acc5bdd29e9a27d1cf3b4407453648d573973)
- support dynamic variables on skill NLU settings for logic type [10d10a16](https://github.com/leon-ai/leon/commit/10d10a1690cb65970932ee7230e3f324ec67dbce)
- tmp resolvers mapping [b1a332ba](https://github.com/leon-ai/leon/commit/b1a332bab6af8b74a8c58c07bac3ef3a1cebad89)
- start to map resolvers between the core and skills [e88495a9](https://github.com/leon-ai/leon/commit/e88495a9a94e86026fd0c7c4c44f3ff06edb2e80)
- train affirmation and denial resolver [993d52e8](https://github.com/leon-ai/leon/commit/993d52e8686f335039ff3d5e2a82c1a37efb1825)
- Python TCP server and Node.js TCP client for IPC (wip) [5970ec9e](https://github.com/leon-ai/leon/commit/5970ec9e8e4c2784c50e2ddc76b34b71aa4310e6)
- introduce spaCy for complete NER (wip) [caa86fc8](https://github.com/leon-ai/leon/commit/caa86fc8a6850b18f67ba7bedb423be693a88d17)
- slot filling (wip) [76547d94](https://github.com/leon-ai/leon/commit/76547d9411c32e0eb2ccfdac3a4901d2d2fb37f6)
- share data across domains [f4f9fff9](https://github.com/leon-ai/leon/commit/f4f9fff9783861be183990d7869973c7a30c8104)
- dynamic variable binding on NLG [0367b44f](https://github.com/leon-ai/leon/commit/0367b44f211c1629fffe6981a730f171707bf0c0)
- context and slot filling preparation (wip) [975b8ebc](https://github.com/leon-ai/leon/commit/975b8ebcf00db91b44dd067be6dde5c1bf32fff1)
- annotate entities on the fly + prepare for dialog skill type and cross-domains data [4107932d](https://github.com/leon-ai/leon/commit/4107932d000086188d6f44ef67b73cc322fc58e5)
- new NLP training [d8023308](https://github.com/leon-ai/leon/commit/d8023308d0ef1f3eede37f21f45daa2f893031b0)
- **server:**
- trigger next action suggestions or current ones [244d08c0](https://github.com/leon-ai/leon/commit/244d08c0bd0fea315269f52ab899f9b7fe083f51)
- introduce main NLP model and resolvers NLP model [e37526d9](https://github.com/leon-ai/leon/commit/e37526d9056d858ebcf17b81f6714f47b67c77cb)
- change log emojis [843bc428](https://github.com/leon-ai/leon/commit/843bc428b8deb397e2d051a8e0bfaf1b82b459a2)
- provide nextAction even when no slot is set and clean up NLU object on context switch [8377c63d](https://github.com/leon-ai/leon/commit/8377c63db4e4e42ed929171cd8b9abdb13c44b2a)
- report full traceback from skills execution [b69b1fea](https://github.com/leon-ai/leon/commit/b69b1fea16250421bc7d5def1c973dd43e453071)
- support on-the-fly entity annotation for built-in entities [567b030c](https://github.com/leon-ai/leon/commit/567b030c4fcf8df266c39cca61a146fb33b9e0fc)
- save slots within conversation context [fce47cdb](https://github.com/leon-ai/leon/commit/fce47cdbd570993ac5cca2b4ff5bc97969df4e40)
- resolve resolvers tmp [ceea47ff](https://github.com/leon-ai/leon/commit/ceea47ff7dd536bfd3adf3cc355e90e3e94b1cbd)
- prepare the next action on non-slot-filled skills [0acb31a9](https://github.com/leon-ai/leon/commit/0acb31a9c61c1c094b29f3d0ff2647d625eab0be)
- add more affirmative utterance samples [870ab2e8](https://github.com/leon-ai/leon/commit/870ab2e87eba2c548d38dc90d30553e7fa380c1e)
- restart a skill with the original utterance saved in context [f4446ef1](https://github.com/leon-ai/leon/commit/f4446ef17796d38d0f98d5b7e889503622a1a998)
- clean up context if the action loop does not meet the expected items [035c9d52](https://github.com/leon-ai/leon/commit/035c9d5240472ac19a84ae8c1a87844fa0d0af5d)
- add handsigns custom entity [1529c720](https://github.com/leon-ai/leon/commit/1529c72039092c7b8f37304d6064e04f2dc7b795)
- reprocess NLU in case of slot filling interruption [9e242d77](https://github.com/leon-ai/leon/commit/9e242d77d32109e9355eec422790a5a66fd18f9c)
- handle action loop when slots have all been filled at once [f8830502](https://github.com/leon-ai/leon/commit/f88305020a5bc79056b7ff9c1a31f8d3c3a7cdce)
- break the action loop from the skill [27dc801c](https://github.com/leon-ai/leon/commit/27dc801cf53de5af3d54b95f42d2b9e627090867)
- stop action loop from skill to core [99681e25](https://github.com/leon-ai/leon/commit/99681e257795a18361be379b93244088401f640b)
- introduce basic concept of action loop [c5b38400](https://github.com/leon-ai/leon/commit/c5b38400821e5bc5edc4402d007f815f24319d44)
- prepare action loop feature [19e1aa22](https://github.com/leon-ai/leon/commit/19e1aa22f6e989e90eb745e3a7b7ccb8ff5adbfa)
- add current utterance entities to differentiate from the whole context [8b56a185](https://github.com/leon-ai/leon/commit/8b56a1850c9d76e335f1bad1b4395d73ddc5ea19)
- when a context is activated, pick up the most probable classification [8e186879](https://github.com/leon-ai/leon/commit/8e1868798c8750c19b1719a44dc6fb8bca68b250)
- persist entities into contexts [87575773](https://github.com/leon-ai/leon/commit/875757739f6701f54805eeff2c7c350cff36c4ac)
- forward slots to skill + add original utterance [68e40f65](https://github.com/leon-ai/leon/commit/68e40f65df0d1fe29ccad991868a2408c6e1015e)
- handle case when all slots have been filled in one utterance [22e9234b](https://github.com/leon-ai/leon/commit/22e9234b3d2c97e83eaafaeeb5aa9d27c351c95a)
- trigger next action once all slots have been filled [9b870010](https://github.com/leon-ai/leon/commit/9b870010dd929bc1aed6d87696f1cc4e9f177c0b)
- complete slot filling before triggering the next action [9124687e](https://github.com/leon-ai/leon/commit/9124687eb0e17295a30f860752ee622ba44d1440)
- from modules to skills with type at the actions level [77ebaf4a](https://github.com/leon-ai/leon/commit/77ebaf4a9c78b2e471d39872e361ea05b163580d)
- verify if all slots are filled [e27c1b9c](https://github.com/leon-ai/leon/commit/e27c1b9c8f5c2f668f464f152ad227d65ba5ef6b)
- context and slot filling, keep context and await for entities [25adf406](https://github.com/leon-ai/leon/commit/25adf406c810e48b1277105dd6c269a2ed601d28)
- unstack oldest context [1ece25a4](https://github.com/leon-ai/leon/commit/1ece25a497acc9f9876fe158ace5da38beec31e6)
- context setup with slot for each conversation (wip) [8257eb87](https://github.com/leon-ai/leon/commit/8257eb8792c9f4fc90bcc1b393d3fddf8ff541dc)
- resolve slots from slot filling [960a6dc7](https://github.com/leon-ai/leon/commit/960a6dc71c2efb50ad6a8448d447ebd79c559c41)
- pickup questions for slot filling [3bbc2f8a](https://github.com/leon-ai/leon/commit/3bbc2f8a254d10f0c37cdb7abf016b3e418f594a)
- main slots structure (wip) [1d9b1809](https://github.com/leon-ai/leon/commit/1d9b18093b6e042ae49f557149a7822b4420cdb8)
- introduce resolvers for slot filling (wip) [334bf393](https://github.com/leon-ai/leon/commit/334bf393f2c43edd326d9de2e93c037ffeebeab5)
- slot filling PoC (tmp wip) [95bfcfe4](https://github.com/leon-ai/leon/commit/95bfcfe422f21a2946e50031a3623675dfe81b9d)
- slot filling (wip) [969a83e6](https://github.com/leon-ai/leon/commit/969a83e6081de20ec5e2bdd0329a21a3fe448f13)
- trigger unsupported language [1845eed7](https://github.com/leon-ai/leon/commit/1845eed71dadd5f693d76abd7633864014bf8af1)
- context (wip) [d1c2a11d](https://github.com/leon-ai/leon/commit/d1c2a11d8284ca4e1d4563b871c50c006e8ef8a0)
- context (wip) [a9a43ac4](https://github.com/leon-ai/leon/commit/a9a43ac478c46f3832d2af49c287bb574a70cc14)
- differenciate cities from countries for location entities [bf9bf231](https://github.com/leon-ai/leon/commit/bf9bf231f714e1edc1417e43af12fa54c00ba064)
- auto restart the TCP server when language is switching [9be7c700](https://github.com/leon-ai/leon/commit/9be7c700767672ac6e0c875d3b5ae7fa6414e4fa)
- support multi languages on TCP server [a808742c](https://github.com/leon-ai/leon/commit/a808742c927d45c18df45af133e67c98d4a0415a)
- add auto reconnect on TCP client [cbe89ed6](https://github.com/leon-ai/leon/commit/cbe89ed6ccfd727356eb34078a8a4348b2fd696f)
- make TCP client global [006e9fb0](https://github.com/leon-ai/leon/commit/006e9fb01148c2107f6acc6a562ace4809da92be)
- fully implement low-level networking for IPC [8acb82da](https://github.com/leon-ai/leon/commit/8acb82da9bacdb9b7952c4a4d130d094e07def5e)
- more accurate NLG [d5577b1e](https://github.com/leon-ai/leon/commit/d5577b1ef5cf1b8b4a924636ba4425b8b4ae133d)
- unknown_answers fallback on dialog type [28efe6e7](https://github.com/leon-ai/leon/commit/28efe6e7d542f19bf12ddede1815f7fa8cf01036)
- deep data mapping on enum NER [3ca48265](https://github.com/leon-ai/leon/commit/3ca48265e7115c8e0f02c65ba92d90412325ad76)
- NLG and entities mapping [8f2f935b](https://github.com/leon-ai/leon/commit/8f2f935b949ceb965941460d4ff1ed0084b72442)
- bootstrap skill structure [fe90c68e](https://github.com/leon-ai/leon/commit/fe90c68ea0e9b0e857b62aa9f3b0a42ba1ffed6b)
- on-the-fly language switching [f24513a2](https://github.com/leon-ai/leon/commit/f24513a22395d1903e485883f4813cdceccdbd18)
- new NLP containers [34b2aa56](https://github.com/leon-ai/leon/commit/34b2aa5655e55284d59db4569960c49965a0483c)
- (WIP) NLU refactoring [ca3f5f42](https://github.com/leon-ai/leon/commit/ca3f5f42da26eb634e10b56e9b84bd45b5543024)
- add skills domains [cf2a28aa](https://github.com/leon-ai/leon/commit/cf2a28aac2d936cc15e6aa9aa13747015d952053)
- **skill/akinator:**
- finish up [79e7df02](https://github.com/leon-ai/leon/commit/79e7df022f7daedf43db7f892e049a31924ce985)
- finished main business logic [76cae42f](https://github.com/leon-ai/leon/commit/76cae42fdeac0edcd3ebd6aa7718728617687b1b)
- backbone [02a2f714](https://github.com/leon-ai/leon/commit/02a2f71470bb4c0c6ca04526e89461d863d17145)
- **skill/birthday:**
remove birthday skill [be0b345d](https://github.com/leon-ai/leon/commit/be0b345d3f7fea562548e3fbed62b65c32eff4c0)
- **skill/color:**
introduce color skill [ce00989b](https://github.com/leon-ai/leon/commit/ce00989b01f65c5cbb5a2e13f454207c1ba7741c)
- **skill/guess_the_number:**
introduce the Guess the Number skill [fba80966](https://github.com/leon-ai/leon/commit/fba80966c937a32182e48670c47358babb539d64)
- **skill/introduction:**
- add one utterance sample [af0fdd1e](https://github.com/leon-ai/leon/commit/af0fdd1e18975bf8b60abb2957ddf79831281817)
- ask about owner info if necessary [c5cc9bdd](https://github.com/leon-ai/leon/commit/c5cc9bdd52afaaa710f9476d1e9918f3d168e243)
- **skill/mbti:**
- complete form resolver [aad9f3f1](https://github.com/leon-ai/leon/commit/aad9f3f1ef61499d438ea40c9d2d95764667678d)
- finish business logic [99a3f103](https://github.com/leon-ai/leon/commit/99a3f103e00b5a58745ee851d2fa95c61871f75a)
- questions mapping [ae4f69f7](https://github.com/leon-ai/leon/commit/ae4f69f7c7189ff75e004f68c9a2a8b6bb37b6bd)
- complete questionnaire [7f1f8871](https://github.com/leon-ai/leon/commit/7f1f8871598746c5475b24e086ea6e581f2a988e)
- main logic backbone [33109a4c](https://github.com/leon-ai/leon/commit/33109a4c8b5df82e7b98e48e66f8d53f0cc114fb)
- main NLU structure [skip ci] [86d5040a](https://github.com/leon-ai/leon/commit/86d5040a7dc2006036c7e67a2cf54a4c992e64aa)
- **skill/rochambeau:**
- add start answers [192dd0a8](https://github.com/leon-ai/leon/commit/192dd0a87ab5dc025bb90b20b187e36a58be54ea)
- introduce paper scissors rock [57370470](https://github.com/leon-ai/leon/commit/573704706c843d870f2498146bc3cd659bab4f06)
- init [7f5e30ac](https://github.com/leon-ai/leon/commit/7f5e30ac82f2a2d7579e361229a4044348915867)
- **web app:**
- join us on Discord [141c89ec](https://github.com/leon-ai/leon/commit/141c89ecbfd329a8e63d5a603d0ae6b42f9abf38)
- wait for TCP client to be connected first [bc228a68](https://github.com/leon-ai/leon/commit/bc228a68600c07871c489d6624bbc837971079a6)
### Bug Fixes
- check script with new intent-object format [fdf0a389](https://github.com/leon-ai/leon/commit/fdf0a389b76caba5dd47996a43a34c0c7821c70a)
- check new resolvers paths [cfd8f7cb](https://github.com/leon-ai/leon/commit/cfd8f7cbe5e8fd9ce3d1659c725d7af261db8d71)
- use ports.ubuntu.com mirror for the offline TTS [skip ci] [3dd90396](https://github.com/leon-ai/leon/commit/3dd9039678820fceb7ccbb1c96358c8d2f188ede)
- set skill config only when a bridge is set [7513aa7d](https://github.com/leon-ai/leon/commit/7513aa7d20fee1fe9ca5442a7909d22fd1c3b39e)
- only set skill config when it is a logic type [9ce9a8bc](https://github.com/leon-ai/leon/commit/9ce9a8bc4fe0864730a08d8e9a436982f1365aa5)
- **docker:**
- usage of Ubuntu base image with pyenv and nvm (#408) [f507f6f7](https://github.com/leon-ai/leon/commit/f507f6f7e499f56768b3e624164cbcd58193b153)
- check should not allocate a pseudo-TTY (#359) [4372b45f](https://github.com/leon-ai/leon/commit/4372b45fc605893d4130cf7110dd87519b934345)
- **server:**
- make leon handle multiple socket.io-client instances [6e7c0aac](https://github.com/leon-ai/leon/commit/6e7c0aac57008b152b45f1b0f3886ae38777467b)
- fallback on global resolver during resolver classification [ec77dd0f](https://github.com/leon-ai/leon/commit/ec77dd0f02a8ae94fb3f02c7b7847b5509d71406)
- make use of current entities to match global entities [a8d82050](https://github.com/leon-ai/leon/commit/a8d82050c86b5c24c4c898c06e5ffc3882524c0b)
- multiple slots filling [2ac1bc63](https://github.com/leon-ai/leon/commit/2ac1bc63ccd11757d586adfb2e75ce04e3ffbcb5)
- context switching on action loop [6712ae55](https://github.com/leon-ai/leon/commit/6712ae5539ef44ed33e360cfcad71c760c4b13b1)
- check one-shot slot filling case causing infinite loop [782a3aaa](https://github.com/leon-ai/leon/commit/782a3aaa0a07dda667557bc84db906b3fa9b237c)
- clean up active context after all slots have been filled [faabc2c7](https://github.com/leon-ai/leon/commit/faabc2c7b0992fcea035eedf66103d84b101e1a7)
- correctly extract all spaCy entities [6aa60bfb](https://github.com/leon-ai/leon/commit/6aa60bfbd8c72e678fe3faf5e7f9dbd37dfd209f)
- intent not found [8280c658](https://github.com/leon-ai/leon/commit/8280c65897dba0fe470a3589d151b391c51e344e)
- fallback due to modules to skills refactoring [ef0c54b2](https://github.com/leon-ai/leon/commit/ef0c54b22667ef2bd1d2c07003f6b4beb5fa25c0)
- NER due to modules to skills refactoring [e4d3904c](https://github.com/leon-ai/leon/commit/e4d3904ceeb2a3ee2c0187a1817331fac916e1a7)
- **skill/akinator:**
remove direct end on guess action [f6461f73](https://github.com/leon-ai/leon/commit/f6461f733b4a5d944dfa4a987dd1109628c6cbca)
- **skill/color:**
more appropriate answer [cb18ed63](https://github.com/leon-ai/leon/commit/cb18ed6397cb0e0ad8fbea30c57d7d40137441ee)
- **skill/rochambeau:**
final logic [0ebc0518](https://github.com/leon-ai/leon/commit/0ebc0518e61b899c35dd13df65a43f69399e784d)
### Performance Improvements
- check Pipfile instead of Pipfile.lock to judge whether Python packages must be installed [afdb71f7](https://github.com/leon-ai/leon/commit/afdb71f766f2956c5cb4a5e0be9025340d1a89db)
### Documentation Changes
- change newsletter link [4bf2a9af](https://github.com/leon-ai/leon/commit/4bf2a9af963f75aeff96f4a43da8ec1024ac583a)
- README - Edited sentence for clarity (#389) [e83a1c42](https://github.com/leon-ai/leon/commit/e83a1c4230897e8b63251ef86225cf773148c38e)
- edit newsletter link [fa558a44](https://github.com/leon-ai/leon/commit/fa558a447ade4071f352d56f14602690ed90f521)
- update sponsor [skip ci] [f30ddb6b](https://github.com/leon-ai/leon/commit/f30ddb6be5f531df2b0042be0ed5ffbe79f73b07)
- remove sponsor [skip ci] [5dbc010f](https://github.com/leon-ai/leon/commit/5dbc010fa643279a24081f3148022e2211af63f4)
- remove sponsor [skip ci] [f36dd20f](https://github.com/leon-ai/leon/commit/f36dd20f822cd33c9e8a03efc2849c8d8d1fc75e)
- remove sponsor [skip ci] [5ee57ddf](https://github.com/leon-ai/leon/commit/5ee57ddf2a9f7817ec35b2e70d49e5bb422d8f78)
- add @ant-media sponsor [skip ci] [b47cbc3a](https://github.com/leon-ai/leon/commit/b47cbc3a5ecb6591f7abb4f62feae8102b9a6468)
- add long dev notice to README [skip ci] [499be77d](https://github.com/leon-ai/leon/commit/499be77d509231b853f591e27f726381da5a50d8)
- move sponsor to new section [skip ci] [8825d687](https://github.com/leon-ai/leon/commit/8825d6877c19d86495e89a858b859b7ab1f9ae37)
- change Twitter handle [skip ci] [c1afc11c](https://github.com/leon-ai/leon/commit/c1afc11cdb283526540d0fecdf83efddf3f3a9f7)
- remove sponsor [skip ci] [99b401a6](https://github.com/leon-ai/leon/commit/99b401a668a6fb248e33c22782940402be7c9b17)
- add new sponsor self-hosted img [skip ci] [238d928c](https://github.com/leon-ai/leon/commit/238d928cace13d4ecd174ca14b136967d8845e0f)
- remove new sponsor link (broken) [skip ci] [254f2848](https://github.com/leon-ai/leon/commit/254f2848aab622b79cce16d10c58d53ff6db9a8f)
- in GitHub BUG.md from modules to skills [4a5480a3](https://github.com/leon-ai/leon/commit/4a5480a3ccc54ee34d42f6edcec2a40224dee7ed)
- change @FluxIndustries sponsorship [skip ci] [1a118b71](https://github.com/leon-ai/leon/commit/1a118b718e5d4ade123756ac94758a01c50b12ae)
- add @FluxIndustries sponsor [skip ci] [9a604d7c](https://github.com/leon-ai/leon/commit/9a604d7ccc0c6aaec257299078141dd0c3077933)
- new #LeonAI link [skip ci] [a0107d62](https://github.com/leon-ai/leon/commit/a0107d629473f7fd057d367926e83822d46f1227)
- changelog new version diff link fix [skip ci] [e14c2498](https://github.com/leon-ai/leon/commit/e14c249826db92af7b85422e566be6aa834a7fb7)
# [1.0.0-beta.6](https://github.com/leon-ai/leon/compare/v1.0.0-beta.5...v1.0.0-beta.6) (2022-02-07) / Leon Over HTTP + Making Friends with Coqui STT
### Features
- simple coqui-ai stt integration [86a4816b](https://github.com/leon-ai/leon/commit/86a4816b777fee8ec9c89648c5866a75de56c017)
- HTTP API key generator [d10a7fa7](https://github.com/leon-ai/leon/commit/d10a7fa7880a0bf2fb1cae7904d1ef4257f05257)
- avoid unnecessary routes generation
- **server:**
- make Coqui STT the default STT solution [70399187](https://github.com/leon-ai/leon/commit/7039918760c0ef7ba93bf45820e3cae774c42d8c)
- add HTTP API key middleware [cdf41499](https://github.com/leon-ai/leon/commit/cdf4149939cbe3f3ae81039957dba3377a78f5a6)
- expose queries over HTTP [b6428d03](https://github.com/leon-ai/leon/commit/b6428d038452619f1682c863892cd8f376efca84)
- add timeout action over HTTP [115f9c16](https://github.com/leon-ai/leon/commit/115f9c164559d761625cc6f362749f7d2417d300)
- handle built-in and trim entities over HTTP + add "disabled" HTTP API action option [82fb967a](https://github.com/leon-ai/leon/commit/82fb967af8f49421e3b2474184da3d34fb17294f)
- execute modules over HTTP [2e5b2c59](https://github.com/leon-ai/leon/commit/2e5b2c59da0bafe3acd966773c6fac3611b3bd0c)
- generate Fastify routes on the file to expose packages over HTTP [5b41713a](https://github.com/leon-ai/leon/commit/5b41713a68ee628e695212dbebc88f6b9a94b461)
### Bug Fixes
- do not ask to regenerate the HTTP API key if this one isn't available yet [d265377a](https://github.com/leon-ai/leon/commit/d265377a43fd4506cf12db46f261b891f2054ed2)
- Python deps tree check [c6c01291](https://github.com/leon-ai/leon/commit/c6c012915824227efdf0c50df6a8f1cd8d70ed42)
- hotword offline (#342) [f563d01d](https://github.com/leon-ai/leon/commit/f563d01d077499c836e94c86f85cedc2ad4d56e6)
- addressed comments by @JRMeyer [b1c6f5c8](https://github.com/leon-ai/leon/commit/b1c6f5c883103d57d4fe566af640fc3ac5ce713d)
- allow to detect STT offline capabilities [04d62288](https://github.com/leon-ai/leon/commit/04d622884165e0bde65785569a659f59cf9e8582)
- Amazon Polly is always configured on check script due to new structure [e6246d1f](https://github.com/leon-ai/leon/commit/e6246d1f8f9ec15a4ebe9600764afffbaa7e62d9)
### Performance Improvements
- check if Python deps tree has been updated before going through deps install [2d0b0f13](https://github.com/leon-ai/leon/commit/2d0b0f1365d8e4d6eadf9f7cc0a16b7b4b4306f4)
# [1.0.0-beta.5](https://github.com/leon-ai/leon/compare/v1.0.0-beta.4...v1.0.0-beta.5) (2021-12-28) / Refocus
_This release marks a major turn in the future versions of the Leon core. Please [read this blog post](https://blog.getleon.ai/i-ran-away-from-open-source/) to know more._
### BREAKING CHANGES
- Node.js 16+ and npm 8+ minimum requirements [2f66f1c1](https://github.com/leon-ai/leon/commit/2f66f1c17bb2e4a1c18b4251d49de252b8d87344)
### Features
- **server:** support arrays on NER between conditions [7cf7f979](https://github.com/leon-ai/leon/commit/7cf7f9791254e1950fe9128ce1b3a58079cc2ada)
### Bug Fixes
- jest-extended new setup due to latest update [02f766d6](https://github.com/leon-ai/leon/commit/02f766d6a8453609ebaec78356aa6e6d4df0967b)
### Performance Improvements
- Windows setup on DeepSpeech dep removal [13f5a49f](https://github.com/leon-ai/leon/commit/13f5a49f678f8f67a93b67d4f558cddcf237e204)
### Documentation Changes
- URL redirect managed by registrar [c16d5b28](https://github.com/leon-ai/leon/commit/c16d5b280b758f7e18305e30678adec79f0a0716)
# [1.0.0-beta.4](https://github.com/leon-ai/leon/compare/1.0.0-beta.2...v1.0.0-beta.4) (2021-05-01) / Getting Rid of Dust
_This release includes a lot of changes that are made under the hood and are not displayed here, please **[read the blog post](https://blog.getleon.ai/getting-rid-of-dust-1-0-0-beta-4/)** to know more._
### BREAKING CHANGES
- **package/checker:** introduce Have I Been Pwned v3 API with API key ([0ca89fe3](https://github.com/leon-ai/leon/commit/0ca89fe32d51c80cec5f9446acf14990390a5917))
- **server:**
- AWS SDK new structure due to v3 and adapt Amazon Polly changes ([f15f2db7](https://github.com/leon-ai/leon/commit/f15f2db78e5781d05e5e2bcb186645966d17debf))
- IBM Watson TTS and STT new structure ([f41ea0e9](https://github.com/leon-ai/leon/commit/f41ea0e9a1479bfd6a1cb2e8d1f70aec744c685b) | [2668c295](https://github.com/leon-ai/leon/commit/2668c295880ee753ef7ca26a91dbc7e0901febff))
### Features
- **package/calendar:** introduce To-Do list module ([0cdd73d6](https://github.com/leon-ai/leon/commit/0cdd73d6c24a287915f691e3b12edacd75fd383a) | [857be947](https://github.com/leon-ai/leon/commit/857be947792c650ac35847e14fc41064008cef24) | [2041be14](https://github.com/leon-ai/leon/commit/2041be14dbc01640a61de96d1982cc20cd05a8b3) | [12e8f5c3](https://github.com/leon-ai/leon/commit/12e8f5c3bfb436aa212557cd99d9926aa431ab4f) | [8575e9e3](https://github.com/leon-ai/leon/commit/8575e9e3ef01499d9f7be6d313a85d48549e9107) | [5e128df0](https://github.com/leon-ai/leon/commit/5e128df023977525de3e66ce2826aace87569308) | [602aa694](https://github.com/leon-ai/leon/commit/602aa694ac49333f48c119cf2ca2aa7f54b8ae44) | [b9693df9](https://github.com/leon-ai/leon/commit/b9693df90cbc01067e18e64db4d377e41b3fd1d4) | [581da8cd](https://github.com/leon-ai/leon/commit/581da8cd9806323aabb0e85778d645df3c0948b9) | [53f7db55](https://github.com/leon-ai/leon/commit/53f7db55c6e916751f1d59c239628d5ea8914009) | [ae073971](https://github.com/leon-ai/leon/commit/ae0739717b6a17373d8f9bc69571c67c1c571b4a))
- **package/checker:** introduce Have I Been Pwned module ([61c1b55a](https://github.com/leon-ai/leon/commit/61c1b55af5691c03f6a6dae0cf3f236a374f1fe7) | [5a999bc6](https://github.com/leon-ai/leon/commit/5a999bc63aa0c667c4e3092daac6a05a6c4b4499) | [36368664](https://github.com/leon-ai/leon/commit/36368664fce8bcf0c17c4c83818aeb418f1e2f23) | [a7a6d885](https://github.com/leon-ai/leon/commit/a7a6d885a83455163eeca74a355177d65db156b8) | [c73ba52b](https://github.com/leon-ai/leon/commit/c73ba52ba8575a64b3329e59a50050d15281d0ec) | [8374e548](https://github.com/leon-ai/leon/commit/8374e5481022de9b134f49180a8dfe28db136261) | [a476fd0f](https://github.com/leon-ai/leon/commit/a476fd0f38f18bf8035db213be2c55f83871038d))
- **package/network:** add speedtest module ([09ad4340](https://github.com/leon-ai/leon/commit/09ad43406d3df8ca65f385a91c159def51f91811))
- **server:**
- add regex entity type [3fda3526](https://github.com/leon-ai/leon/commit/3fda3526c7425bdea4b669474fa77efd61c06a8e)
- catch unsupported action entity type [5bc6c3f1](https://github.com/leon-ai/leon/commit/5bc6c3f116d6b9ece2cc3bebdbdb08f019ee90b9)
- NER backbone [24cf3c9a](https://github.com/leon-ai/leon/commit/24cf3c9a4facd05a4c626ff9d2e7c83a5ae15298)
- introduce actions module [b449376f](https://github.com/leon-ai/leon/commit/b449376f61dc995e2e264c6a14ba123926f5cc58)
- **package/checker:** introduce Have I Been Pwned v3 API with API key ([0ca89fe3](https://github.com/leon-ai/leon/commit/0ca89fe32d51c80cec5f9446acf14990390a5917))
- **server:**
- AWS SDK new structure due to v3 and adapt Amazon Polly changes ([f15f2db7](https://github.com/leon-ai/leon/commit/f15f2db78e5781d05e5e2bcb186645966d17debf))
- IBM Watson TTS and STT new structure ([f41ea0e9](https://github.com/leon-ai/leon/commit/f41ea0e9a1479bfd6a1cb2e8d1f70aec744c685b) | [2668c295](https://github.com/leon-ai/leon/commit/2668c295880ee753ef7ca26a91dbc7e0901febff))
### Features
- **package/calendar:** introduce To-Do list module ([0cdd73d6](https://github.com/leon-ai/leon/commit/0cdd73d6c24a287915f691e3b12edacd75fd383a) | [857be947](https://github.com/leon-ai/leon/commit/857be947792c650ac35847e14fc41064008cef24) | [2041be14](https://github.com/leon-ai/leon/commit/2041be14dbc01640a61de96d1982cc20cd05a8b3) | [12e8f5c3](https://github.com/leon-ai/leon/commit/12e8f5c3bfb436aa212557cd99d9926aa431ab4f) | [8575e9e3](https://github.com/leon-ai/leon/commit/8575e9e3ef01499d9f7be6d313a85d48549e9107) | [5e128df0](https://github.com/leon-ai/leon/commit/5e128df023977525de3e66ce2826aace87569308) | [602aa694](https://github.com/leon-ai/leon/commit/602aa694ac49333f48c119cf2ca2aa7f54b8ae44) | [b9693df9](https://github.com/leon-ai/leon/commit/b9693df90cbc01067e18e64db4d377e41b3fd1d4) | [581da8cd](https://github.com/leon-ai/leon/commit/581da8cd9806323aabb0e85778d645df3c0948b9) | [53f7db55](https://github.com/leon-ai/leon/commit/53f7db55c6e916751f1d59c239628d5ea8914009) | [ae073971](https://github.com/leon-ai/leon/commit/ae0739717b6a17373d8f9bc69571c67c1c571b4a))
- **package/checker:** introduce Have I Been Pwned module ([61c1b55a](https://github.com/leon-ai/leon/commit/61c1b55af5691c03f6a6dae0cf3f236a374f1fe7) | [5a999bc6](https://github.com/leon-ai/leon/commit/5a999bc63aa0c667c4e3092daac6a05a6c4b4499) | [36368664](https://github.com/leon-ai/leon/commit/36368664fce8bcf0c17c4c83818aeb418f1e2f23) | [a7a6d885](https://github.com/leon-ai/leon/commit/a7a6d885a83455163eeca74a355177d65db156b8) | [c73ba52b](https://github.com/leon-ai/leon/commit/c73ba52ba8575a64b3329e59a50050d15281d0ec) | [8374e548](https://github.com/leon-ai/leon/commit/8374e5481022de9b134f49180a8dfe28db136261) | [a476fd0f](https://github.com/leon-ai/leon/commit/a476fd0f38f18bf8035db213be2c55f83871038d))
- **package/network:** add speedtest module ([09ad4340](https://github.com/leon-ai/leon/commit/09ad43406d3df8ca65f385a91c159def51f91811))
- **server:**
  - add regex entity type [3fda3526](https://github.com/leon-ai/leon/commit/3fda3526c7425bdea4b669474fa77efd61c06a8e)
  - catch unsupported action entity type [5bc6c3f1](https://github.com/leon-ai/leon/commit/5bc6c3f116d6b9ece2cc3bebdbdb08f019ee90b9)
  - NER backbone [24cf3c9a](https://github.com/leon-ai/leon/commit/24cf3c9a4facd05a4c626ff9d2e7c83a5ae15298)
  - introduce actions module [b449376f](https://github.com/leon-ai/leon/commit/b449376f61dc995e2e264c6a14ba123926f5cc58)
### Bug Fixes
- set correct status code for GET /downloads [690f1841](https://github.com/leon-ai/leon/commit/690f1841d681a1e48e1837e3e166228d6c2ddaf6)
- take `.env` in consideration when using Docker [d38e6095](https://github.com/leon-ai/leon/commit/d38e6095f9b71467b8486430fba4bb7007ec4c5a)
- spinner test [9071c927](https://github.com/leon-ai/leon/commit/9071c92790be674687590e4a896bbf44bc26fb43)
- e2e tests by adding modules actions level [5cf77d90](https://github.com/leon-ai/leon/commit/5cf77d9011a80b326f229b2309a6910ac0f1cfa2)
- **package/leon:** fix english translations [90225707](https://github.com/leon-ai/leon/commit/90225707f94154021cadeb9c61bdc48c3de5aa29)
- **package/network:** make use of new compatible speedtest lib [0c925626](https://github.com/leon-ai/leon/commit/0c925626df65858fa039972b3f3d5f38fde93eb6)
- **package/trend:**
  - GitHub module new scraping [68414937](https://github.com/leon-ai/leon/commit/6841493740ca859000c1fd8d692b73fc79fcf500)
  - when there is no star provided on the GitHub module [563fb409](https://github.com/leon-ai/leon/commit/563fb40955e2deb5c6d0bd064fc9cc8766a6fcaf)
- **server:**
  - make use of Basic plugin from the main NLP container [e1d5bed3](https://github.com/leon-ai/leon/commit/e1d5bed3e688db566a0cb803dda5c2d57c599d8c)
  - NER trim entity on after conditions [fa6a5a43](https://github.com/leon-ai/leon/commit/fa6a5a43a60b493aa403a44957082382494c129b)
### Documentation Changes
- add minimum Pipenv version requirement to README [72e46bd6](https://github.com/leon-ai/leon/commit/72e46bd6c175a4a149fb6b14522823b224d7c152)
- hunt broken links [b2a22792](https://github.com/leon-ai/leon/commit/b2a2279243e7566b57fb7f696024bdf08294e853)
- add "ci" commit type in CONTRIBUTING.md [09e2672b](https://github.com/leon-ai/leon/commit/09e2672b0b399f5ce9dd7cd446d04f4d6fd7c13a)
- use emojies in README [0ea7a78b](https://github.com/leon-ai/leon/commit/0ea7a78b7c94dc44c992913ae1c90fb1cf8a7692)
- add social badges to README [c55c7532](https://github.com/leon-ai/leon/commit/c55c7532b25bf420c4819be71b0f9c21ccc58711)
- Node.js 14 requirement [d1034bd1](https://github.com/leon-ai/leon/commit/d1034bd135fd5a6314a1571d4088fd85a8e6a1da)
# [1.0.0-beta.2](https://github.com/leon-ai/leon/compare/1.0.0-beta.1...1.0.0-beta.2) (2019-04-24)
### Features
- can send custom HTTP headers
  ([2685cdab](https://github.com/leon-ai/leon/commit/2685cdab07cc1a9ea418eab812e5163d2dd0da90))
- allow HTML output
  ([ec3f02df](https://github.com/leon-ai/leon/commit/ec3f02dfaf2f4b7623ce350350ebee28cf18740e))
- NLU improvement with node-nlp
  ([6585db71](https://github.com/leon-ai/leon/commit/6585db718ccae1d750a35783075cf61cc8fe84f1))
- **package/trend:**
  - add answer when the Product Hunt developer token is not provided
    ([f40b479b](https://github.com/leon-ai/leon/commit/f40b479b295247c5a8a0e6ed81afe56fadfd2730))
  - Product Hunt module done
    ([37794306](https://github.com/leon-ai/leon/commit/3779430621bef970be0e8d048eb0b4bf160ae8a4))
  - basics done on the Product Hunt module
    ([32cc7dbe](https://github.com/leon-ai/leon/commit/32cc7dbe36592fb9618d9c10da5f05a4be7e41b6))
  - complete dedicated answers according to the technology and given time
    ([8997d691](https://github.com/leon-ai/leon/commit/8997d6917445f837c9647a5a9b4d6998d2df4952))
  - GitHub module done
    ([7c6f3922](https://github.com/leon-ai/leon/commit/7c6f3922f299193ee0fb54d0fc97f8b436fc706b))
  - be able to choose a limit and a date range for the GitHub module
    ([3c088371](https://github.com/leon-ai/leon/commit/3c0883716e1c10371c399843a578095a1e16781d))
  - format GitHub results in one message
    ([9d026b94](https://github.com/leon-ai/leon/commit/9d026b94efa8871d421ae2b593b96622a98537ac))
  - simple GitHub module results
    ([5baec074](https://github.com/leon-ai/leon/commit/5baec07455f453d4ad003f1da360b2663b7e15e0))
  - list GitHub trends in HTML raw
    ([3441629e](https://github.com/leon-ai/leon/commit/3441629e3cde933b322cb114d9f1bc3ef0eb3944) | [6b932e94](https://github.com/leon-ai/leon/commit/6b932e947fc365ea6435fda798b7cca32708b443))
- expressions dataset and structure
  ([f406a5a0](https://github.com/leon-ai/leon/commit/f406a5a09894e12c56a1e76dda609adada00b0d7) | [f54c2272](https://github.com/leon-ai/leon/commit/f54c2272b4b4dc5c56b512b0ccc1519d77ef15a3))
### Bug Fixes
- Leon was not fully installed with Docker if a `.env` file was existing
  ([c8a68ab0](https://github.com/leon-ai/leon/commit/c8a68ab02eec9ddaf803b6e36cd7e91a4989cdea))
- **package/trend:**
  when there is no contributor on GitHub module
  ([d845e49b](https://github.com/leon-ai/leon/commit/d845e49b0f18caeb306e2d399c50a03883b2f55d))
- **server:**
  skip Pipenv locking until they fix it
  ([029381e3](https://github.com/leon-ai/leon/commit/029381e3256933f37f5c2950c4eb1f0192f55ec6) | [ecfdc73f](https://github.com/leon-ai/leon/commit/ecfdc73f8290dd9e1910df9519095516a1227763))
### Documentation Changes
- add `What is Leon able to do?` section in the readme
  ([87f53c91](https://github.com/leon-ai/leon/commit/87f53c91368141966959f3ad7299bb7b643828a5) | [d558fc8b](https://github.com/leon-ai/leon/commit/d558fc8b7c6494babf5dec799802227f77c33d8a))
- open-source != open source
  ([16a9372e](https://github.com/leon-ai/leon/commit/16a9372e05d4d31a7a39a65a52d4708b72499d4c) | [2155cd88](https://github.com/leon-ai/leon/commit/2155cd88decbbd671bd58840291d9330ce06ebba))
# [1.0.0-beta.1](https://github.com/leon-ai/leon/compare/1.0.0-beta.0...1.0.0-beta.1) (2019-02-24)
### Features
- add Docker support
  ([209760db](https://github.com/leon-ai/leon/commit/209760dba747001300692fb6a6af97543de584d6))
### Bug Fixes
- **package/checker:**
  isitdown module fails with capital letters in URL
  ([ada6aaef](https://github.com/leon-ai/leon/commit/ada6aaef4bada47e87d28f9f6eaa05b9e23f58d2))
- **web app:**
  enable environment variables
  ([a438d6f9](https://github.com/leon-ai/leon/commit/a438d6f942812f74e3dda75a9875609f8bea21cd))
### Performance Improvements
- **web app:**
  favicon compression
  ([33dbcb42](https://github.com/leon-ai/leon/commit/33dbcb425eaafba90176ff64e5f689eb36bc6ce1))
### Documentation Changes
- update README to make the reader genderless
  ([58662658](https://github.com/leon-ai/leon/commit/586626586b7a2f84cb2cd84028111976bc5172f0))
- use "to rule them all" in README
  ([c74dda4c](https://github.com/leon-ai/leon/commit/c74dda4cb9acc78de143ae01fdc6b4ef0a5ec3ef))
- **readme:**
  add story write-up
  ([08a68e37](https://github.com/leon-ai/leon/commit/08a68e376b6a9367425947380564120943376500))
# [1.0.0-beta.0](https://github.com/leon-ai/leon/compare/https://github.com/leon-ai/leon.git...1.0.0-beta.0) (2019-02-10)
Initial release.

@ -7,7 +7,7 @@
Leon
</h1>
*<p align="center">Your open-source personal assistant.</p>*
_<p align="center">Your open-source personal assistant.</p>_
<p align="center">
<a href="https://github.com/leon-ai/leon/blob/develop/LICENSE.md"><img src="https://img.shields.io/badge/license-MIT-blue.svg?label=License&style=flat" /></a>
@ -52,10 +52,10 @@ If you want to, Leon can communicate with you by being **offline to protect your
### Why?
> 1. If you are a developer (or not), you may want to build many things that could help in your daily life.
> Instead of building a dedicated project for each of those ideas, Leon can help you with his
> Skills structure.
> Instead of building a dedicated project for each of those ideas, Leon can help you with his
> Skills structure.
> 2. With this generic structure, everyone can create their own skills and share them with others.
> Therefore there is only one core (to rule them all).
> Therefore there is only one core (to rule them all).
> 3. Leon uses AI concepts, which is cool.
> 4. Privacy matters, you can configure Leon to talk with him offline. You can already text with him without any third party services.
> 5. Open source is great.
@ -63,6 +63,7 @@ If you want to, Leon can communicate with you by being **offline to protect your
### What is this repository for?
> This repository contains the following nodes of Leon:
>
> - The server
> - Skills
> - The web app
@ -157,7 +158,7 @@ You'll find a write-up on this [blog post](https://blog.getleon.ai/the-story-beh
- [Blog](https://blog.getleon.ai)
- [GitHub issues](https://github.com/leon-ai/leon/issues)
- [YouTube](https://www.youtube.com/channel/UCW6mk6j6nQUzFYY97r47emQ)
- [#LeonAI](https://twitter.com/search?f=live&q=%23LeonAI%20(from%3Agrenlouis%20OR%20from%3Alouistiti_fr)&src=typed_query)
- [#LeonAI](<https://twitter.com/search?f=live&q=%23LeonAI%20(from%3Agrenlouis%20OR%20from%3Alouistiti_fr)&src=typed_query>)
## 👨 Author
@ -216,9 +217,11 @@ The focus is not only limited to the activity you see on GitHub but also a lot o
</a>
## 📝 License
[MIT License](https://github.com/leon-ai/leon/blob/develop/LICENSE.md)
Copyright (c) 2019-present, Louis Grenard <louis@getleon.ai>
## Cheers!
![Cheers!](https://github.githubassets.com/images/icons/emoji/unicode/1f379.png "Cheers!")
![Cheers!](https://github.githubassets.com/images/icons/emoji/unicode/1f379.png 'Cheers!')

@ -1,18 +1,84 @@
@import url(https://fonts.googleapis.com/css?family=Open+Sans:400,600,700,800);
html, body, div, span, applet, object, iframes,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, sub, sup, tt, var,
u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed,
figure, figcaption, footer, header, hgroup,
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
html,
body,
div,
span,
applet,
object,
iframes,
h1,
h2,
h3,
h4,
h5,
h6,
p,
blockquote,
pre,
a,
abbr,
acronym,
address,
big,
cite,
code,
del,
dfn,
em,
img,
ins,
kbd,
q,
s,
samp,
small,
strike,
sub,
sup,
tt,
var,
u,
i,
center,
dl,
dt,
dd,
ol,
ul,
li,
fieldset,
form,
label,
legend,
table,
caption,
tbody,
tfoot,
thead,
tr,
th,
td,
article,
aside,
canvas,
details,
embed,
figure,
figcaption,
footer,
header,
hgroup,
menu,
nav,
output,
ruby,
section,
summary,
time,
mark,
audio,
video {
margin: 0;
padding: 0;
border: 0;
@ -21,17 +87,29 @@ time, mark, audio, video {
vertical-align: baseline;
}
article, aside, details, figcaption, figure,
footer, header, hgroup, menu, nav, section {
article,
aside,
details,
figcaption,
figure,
footer,
header,
hgroup,
menu,
nav,
section {
display: block;
}
blockquote, q {
blockquote,
q {
quotes: none;
}
blockquote:before, blockquote:after,
q:before, q:after {
blockquote:before,
blockquote:after,
q:before,
q:after {
content: '';
content: none;
}
@ -48,7 +126,7 @@ table {
:root {
--black-color: #151718;
--white-color: #FFF;
--white-color: #fff;
}
a {
@ -66,7 +144,7 @@ body {
font-weight: 400;
}
body > * {
transition: opacity .5s;
transition: opacity 0.5s;
}
body.settingup > * {
opacity: 0;
@ -81,14 +159,14 @@ body.settingup::after {
left: 50%;
transform: translate(-50%, -50%);
border-radius: 50%;
animation: scaleout .6s infinite ease-in-out;
animation: scaleout 0.6s infinite ease-in-out;
}
@keyframes scaleout {
0% {
transform: scale(0);
}
100% {
transform: scale(1.0);
transform: scale(1);
opacity: 0;
}
}
@ -124,7 +202,7 @@ input {
small {
color: var(--white-color);
font-size: .7em;
font-size: 0.7em;
}
.hide {
@ -151,7 +229,7 @@ small {
width: 6px;
}
#feed::-webkit-scrollbar-thumb {
background-color: rgba(255, 255, 255, .2);
background-color: rgba(255, 255, 255, 0.2);
border-radius: 12px;
}
@ -166,7 +244,7 @@ small {
padding: 0 8px;
opacity: 0;
margin-top: 20px;
transition: opacity .3s;
transition: opacity 0.3s;
}
#is-typing.on {
opacity: 1;
@ -181,16 +259,16 @@ small {
transform: scale(1);
}
#is-typing .circle:nth-child(1) {
animation: typing .2s linear infinite alternate;
background-color: #0071F0;
animation: typing 0.2s linear infinite alternate;
background-color: #0071f0;
}
#is-typing .circle:nth-child(2) {
animation: typing .2s .2s linear infinite alternate;
animation: typing 0.2s 0.2s linear infinite alternate;
background-color: var(--white-color);
}
#is-typing .circle:nth-child(3) {
animation: typing .2s linear infinite alternate;
background-color: #EC297A;
animation: typing 0.2s linear infinite alternate;
background-color: #ec297a;
}
@keyframes typing {
100% {
@ -216,15 +294,15 @@ small {
word-break: break-word;
text-align: left;
opacity: 0;
animation: fadeIn .2s ease-in forwards;
animation: fadeIn 0.2s ease-in forwards;
}
#feed .me .bubble {
background-color: #1C75DB;
background-color: #1c75db;
color: var(--white-color);
right: 0;
}
#feed .leon .bubble {
background-color: #EEE;
background-color: #eee;
color: var(--black-color);
}
@keyframes fadeIn {
@ -251,7 +329,7 @@ small {
padding: 2px 8px;
font-size: inherit;
cursor: pointer;
transition: background-color .2s, color .2s;
transition: background-color 0.2s, color 0.2s;
}
.suggestion:hover {
color: var(--black-color);
@ -284,7 +362,7 @@ small {
background-color: #888;
-webkit-mask-image: url(../img/mic.svg);
mask-image: url(../img/mic.svg);
transition: background-color .2s;
transition: background-color 0.2s;
}
#mic-button:not(.enabled) {
margin-left: -26px;
@ -293,13 +371,13 @@ small {
background-color: var(--white-color);
}
#mic-button.enabled {
background-color: #00E676;
background-color: #00e676;
}
#mic-button.enabled + #sonar {
width: 26px;
height: 26px;
border-radius: 50%;
opacity: .3;
opacity: 0.3;
background-color: #575757;
pointer-events: none;
animation: sonar 1.3s linear infinite;

@ -1,50 +1,52 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="stylesheet" href="/css/style.css" />
<link rel="icon" type="image/png" href="/img/favicon.png" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Leon</title>
</head>
<body class="settingup">
<main>
<div id="feed">
<p id="no-bubble" class="hide">
You can start to interact with Leon, don't be shy.
</p>
</div>
<div id="suggestions-container"></div>
<div id="is-typing">
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
</div>
<div id="input-container">
<div id="mic-container">
<button id="mic-button"></button>
<div id="sonar"></div>
<head>
<meta charset="utf-8" />
<link rel="stylesheet" href="/css/style.css" />
<link rel="icon" type="image/png" href="/img/favicon.png" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Leon</title>
</head>
<body class="settingup">
<main>
<div id="feed">
<p id="no-bubble" class="hide">
You can start to interact with Leon, don't be shy.
</p>
</div>
<label for="utterance"></label>
<input type="text" id="utterance" autocomplete="off" autofocus>
<small>
Use <kbd>↑</kbd> <kbd>↓</kbd> to browse history;
<kbd>↵</kbd> to submit;
<kbd>alt + c to listen.</kbd>
</small>
</div>
</main>
<footer>
<div id="logo"></div>
<div id="version">
<small>v</small>
</div>
<div id="discord">
<small class="italic">
<a href="https://discord.gg/MNQqqKg" target="_blank">Join us on Discord</a>
</small>
</div>
</footer>
<script type="module" src="/js/main.js"></script>
</body>
<div id="suggestions-container"></div>
<div id="is-typing">
<div class="circle"></div>
<div class="circle"></div>
<div class="circle"></div>
</div>
<div id="input-container">
<div id="mic-container">
<button id="mic-button"></button>
<div id="sonar"></div>
</div>
<label for="utterance"></label>
<input type="text" id="utterance" autocomplete="off" autofocus />
<small>
Use <kbd>↑</kbd> <kbd>↓</kbd> to browse history; <kbd>↵</kbd> to
submit;
<kbd>alt + c to listen.</kbd>
</small>
</div>
</main>
<footer>
<div id="logo"></div>
<div id="version">
<small>v</small>
</div>
<div id="discord">
<small class="italic">
<a href="https://discord.gg/MNQqqKg" target="_blank"
>Join us on Discord</a
>
</small>
</div>
</footer>
<script type="module" src="/js/main.js"></script>
</body>
</html>

@ -1,5 +1,5 @@
export default class Chatbot {
constructor () {
constructor() {
this.et = new EventTarget()
this.feed = document.querySelector('#feed')
this.typing = document.querySelector('#is-typing')
@ -8,7 +8,7 @@ export default class Chatbot {
this.parsedBubbles = JSON.parse(this.bubbles)
}
async init () {
async init() {
await this.loadFeed()
this.scrollDown()
@ -21,19 +21,19 @@ export default class Chatbot {
})
}
sendTo (who, string) {
sendTo(who, string) {
if (who === 'leon') {
this.et.dispatchEvent(new CustomEvent('to-leon', { detail: string }))
}
}
receivedFrom (who, string) {
receivedFrom(who, string) {
if (who === 'leon') {
this.et.dispatchEvent(new CustomEvent('me-received', { detail: string }))
}
}
isTyping (who, value) {
isTyping(who, value) {
if (who === 'leon') {
if (value) {
this.enableTyping()
@ -43,23 +43,23 @@ export default class Chatbot {
}
}
enableTyping () {
enableTyping() {
if (!this.typing.classList.contains('on')) {
this.typing.classList.add('on')
}
}
disableTyping () {
disableTyping() {
if (this.typing.classList.contains('on')) {
this.typing.classList.remove('on')
}
}
scrollDown () {
scrollDown() {
this.feed.scrollTo(0, this.feed.scrollHeight)
}
loadFeed () {
loadFeed() {
return new Promise((resolve) => {
if (this.parsedBubbles === null || this.parsedBubbles.length === 0) {
this.noBubbleMessage.classList.remove('hide')
@ -72,7 +72,7 @@ export default class Chatbot {
this.createBubble(bubble.who, bubble.string, false)
if ((i + 1) === this.parsedBubbles.length) {
if (i + 1 === this.parsedBubbles.length) {
setTimeout(() => {
resolve()
}, 100)
@ -82,7 +82,7 @@ export default class Chatbot {
})
}
createBubble (who, string, save = true) {
createBubble(who, string, save = true) {
const container = document.createElement('div')
const bubble = document.createElement('p')
@ -97,7 +97,7 @@ export default class Chatbot {
}
}
saveBubble (who, string) {
saveBubble(who, string) {
if (!this.noBubbleMessage.classList.contains('hide')) {
this.noBubbleMessage.classList.add('hide')
}

@ -2,7 +2,7 @@ import { io } from 'socket.io-client'
import Chatbot from './chatbot'
export default class Client {
constructor (client, serverUrl, input, res) {
constructor(client, serverUrl, input, res) {
this.client = client
this._input = input
this._suggestionContainer = document.querySelector('#suggestions-container')
@ -12,25 +12,25 @@ export default class Client {
this.parsedHistory = []
this.info = res
this.chatbot = new Chatbot()
this._recorder = { }
this._recorder = {}
this._suggestions = []
}
set input (newInput) {
set input(newInput) {
if (typeof newInput !== 'undefined') {
this._input.value = newInput
}
}
set recorder (recorder) {
set recorder(recorder) {
this._recorder = recorder
}
get recorder () {
get recorder() {
return this._recorder
}
init (loader) {
init(loader) {
this.chatbot.init()
this.socket.on('connect', () => {
@ -116,9 +116,12 @@ export default class Client {
}
}
send (keyword) {
send(keyword) {
if (this._input.value !== '') {
this.socket.emit(keyword, { client: this.client, value: this._input.value.trim() })
this.socket.emit(keyword, {
client: this.client,
value: this._input.value.trim()
})
this.chatbot.sendTo('leon', this._input.value)
this._suggestions.forEach((suggestion) => {
@ -135,7 +138,7 @@ export default class Client {
return false
}
save () {
save() {
let val = this._input.value
if (localStorage.getItem('history') === null) {
@ -157,7 +160,7 @@ export default class Client {
this._input.value = ''
}
addSuggestion (text) {
addSuggestion(text) {
const newSuggestion = document.createElement('button')
newSuggestion.classList.add('suggestion')
newSuggestion.textContent = text

@ -1,7 +1,12 @@
const listener = { }
const listener = {}
listener.listening = (stream, minDecibels, maxBlankTime,
cbOnStart, cbOnEnd) => {
listener.listening = (
stream,
minDecibels,
maxBlankTime,
cbOnStart,
cbOnEnd
) => {
const ctx = new AudioContext()
const analyser = ctx.createAnalyser()
const streamNode = ctx.createMediaStreamSource(stream)
@ -26,7 +31,7 @@ listener.listening = (stream, minDecibels, maxBlankTime,
silenceStart = time
}
if (!triggered && (time - silenceStart) > maxBlankTime) {
if (!triggered && time - silenceStart > maxBlankTime) {
cbOnEnd()
triggered = true

@ -1,5 +1,5 @@
export default class Loader {
constructor () {
constructor() {
this.et = new EventTarget()
this.body = document.querySelector('body')
@ -12,11 +12,11 @@ export default class Loader {
})
}
start () {
start() {
this.et.dispatchEvent(new CustomEvent('settingup', { detail: true }))
}
stop () {
stop() {
this.et.dispatchEvent(new CustomEvent('settingup', { detail: false }))
}
}

@ -13,31 +13,35 @@ const config = {
min_decibels: -40, // Noise detection sensitivity
max_blank_time: 1000 // Maximum time to consider a blank (ms)
}
const serverUrl = import.meta.env.VITE_LEON_NODE_ENV === 'production' ? '' : `${config.server_host}:${config.server_port}`
const serverUrl =
import.meta.env.VITE_LEON_NODE_ENV === 'production'
? ''
: `${config.server_host}:${config.server_port}`
document.addEventListener('DOMContentLoaded', () => {
const loader = new Loader()
loader.start()
request.get(`${serverUrl}/api/v1/info`)
.end((err, res) => {
if (err || !res.ok) {
console.error(err.response.error.message)
} else {
const input = document.querySelector('#utterance')
const mic = document.querySelector('#mic-button')
const v = document.querySelector('#version small')
const client = new Client(config.app, serverUrl, input, res.body)
let rec = { }
let chunks = []
request.get(`${serverUrl}/api/v1/info`).end((err, res) => {
if (err || !res.ok) {
console.error(err.response.error.message)
} else {
const input = document.querySelector('#utterance')
const mic = document.querySelector('#mic-button')
const v = document.querySelector('#version small')
const client = new Client(config.app, serverUrl, input, res.body)
let rec = {}
let chunks = []
v.innerHTML += client.info.version
v.innerHTML += client.info.version
client.init(loader)
client.init(loader)
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices
.getUserMedia({ audio: true })
.then((stream) => {
if (MediaRecorder) {
rec = new Recorder(stream, mic, client.info)
client.recorder = rec
@ -46,7 +50,9 @@ document.addEventListener('DOMContentLoaded', () => {
chunks.push(e.data)
})
rec.onstart(() => { /* */ })
rec.onstart(() => {
/* */
})
rec.onstop(() => {
const blob = new Blob(chunks)
@ -59,58 +65,55 @@ document.addEventListener('DOMContentLoaded', () => {
}
})
listener.listening(stream, config.min_decibels, config.max_blank_time, () => {
// Noise detected
rec.noiseDetected = true
}, () => {
// Noise ended
listener.listening(
stream,
config.min_decibels,
config.max_blank_time,
() => {
// Noise detected
rec.noiseDetected = true
},
() => {
// Noise ended
rec.noiseDetected = false
if (rec.enabled && !rec.hotwordTriggered) {
rec.stop()
rec.enabled = false
rec.hotwordTriggered = false
rec.countSilenceAfterTalk = 0
rec.noiseDetected = false
if (rec.enabled && !rec.hotwordTriggered) {
rec.stop()
rec.enabled = false
rec.hotwordTriggered = false
rec.countSilenceAfterTalk = 0
}
}
})
)
client.socket.on('enable-record', () => {
rec.hotwordTriggered = true
rec.start()
setTimeout(() => { rec.hotwordTriggered = false }, config.max_blank_time)
setTimeout(() => {
rec.hotwordTriggered = false
}, config.max_blank_time)
rec.enabled = true
})
} else {
console.error('MediaRecorder is not supported on your browser.')
}
}).catch((err) => {
console.error('MediaDevices.getUserMedia() threw the following error:', err)
})
} else {
console.error('MediaDevices.getUserMedia() is not supported on your browser.')
}
document.addEventListener('keydown', (e) => {
onkeydowndocument(e, () => {
if (rec.enabled === false) {
input.value = ''
rec.start()
rec.enabled = true
} else {
rec.stop()
rec.enabled = false
}
.catch((err) => {
console.error(
'MediaDevices.getUserMedia() threw the following error:',
err
)
})
})
input.addEventListener('keydown', (e) => {
onkeydowninput(e, client)
})
mic.addEventListener('click', (e) => {
e.preventDefault()
} else {
console.error(
'MediaDevices.getUserMedia() is not supported on your browser.'
)
}
document.addEventListener('keydown', (e) => {
onkeydowndocument(e, () => {
if (rec.enabled === false) {
input.value = ''
rec.start()
rec.enabled = true
} else {
@ -118,6 +121,23 @@ document.addEventListener('DOMContentLoaded', () => {
rec.enabled = false
}
})
}
})
})
input.addEventListener('keydown', (e) => {
onkeydowninput(e, client)
})
mic.addEventListener('click', (e) => {
e.preventDefault()
if (rec.enabled === false) {
rec.start()
rec.enabled = true
} else {
rec.stop()
rec.enabled = false
}
})
}
})
})

@ -14,13 +14,13 @@ const onkeydowninput = (e, client) => {
index = -1
}
} else if (localStorage.getItem('history') !== null) {
if (key === 38 && index < (parsedHistory.length - 1)) {
if (key === 38 && index < parsedHistory.length - 1) {
index += 1
client.input = parsedHistory[index]
} else if (key === 40 && (index - 1) >= 0) {
} else if (key === 40 && index - 1 >= 0) {
index -= 1
client.input = parsedHistory[index]
} else if (key === 40 && (index - 1) < 0) {
} else if (key === 40 && index - 1 < 0) {
client.input = ''
index = -1
}
@ -33,7 +33,4 @@ const onkeydowndocument = (e, cb) => {
}
}
export {
onkeydowninput,
onkeydowndocument
}
export { onkeydowninput, onkeydowndocument }

@ -2,7 +2,7 @@ import on from '../sounds/on.mp3'
import off from '../sounds/off.mp3'
export default class Recorder {
constructor (stream, el, info) {
constructor(stream, el, info) {
this.recorder = new MediaRecorder(stream, { audioBitsPerSecond: 16000 })
this.el = el
this.audioOn = new Audio(on)
@ -15,7 +15,7 @@ export default class Recorder {
this.countSilenceAfterTalk = 0
}
start (playSound = true) {
start(playSound = true) {
if (this.info.stt.enabled === false) {
console.warn('Speech-to-text disabled')
} else {
@ -24,7 +24,7 @@ export default class Recorder {
}
}
stop (playSound = true) {
stop(playSound = true) {
if (this.info.stt.enabled === false) {
console.warn('Speech-to-text disabled')
} else {
@ -33,7 +33,7 @@ export default class Recorder {
}
}
onstart (cb) {
onstart(cb) {
this.recorder.onstart = (e) => {
if (this.playSound) {
this.audioOn.play()
@ -44,7 +44,7 @@ export default class Recorder {
}
}
onstop (cb) {
onstop(cb) {
this.recorder.onstop = (e) => {
if (this.playSound) {
this.audioOff.play()
@ -55,7 +55,7 @@ export default class Recorder {
}
}
ondataavailable (cb) {
ondataavailable(cb) {
this.recorder.ondataavailable = (e) => {
cb(e)
}

File diff suppressed because it is too large.
@ -1,45 +1,43 @@
{
"answers": {
"success": {
},
"errors": {
"not_found": "Sorry, it seems I cannot find that",
"nlu": "It might come from my natural language understanding, the error returned is: \"%error%\""
},
"answers": {
"success": {},
"errors": {
"not_found": "Sorry, it seems I cannot find that",
"nlu": "It might come from my natural language understanding, the error returned is: \"%error%\""
},
"synchronizer": {
"syncing_direct": "I will now synchronize the downloaded content on your current device. Don't worry, I will let you know once I'm done",
"synced_direct": "The new content has been synchronized on your device",
"syncing_google_drive": "I will now synchronize the downloaded content on Google Drive. Don't worry, I will let you know once I'm done",
"synced_google_drive": "The new content is now available on Google Drive"
},
"random_errors": [
"Sorry, there is a problem with my system",
"Sorry, I don't work correctly",
"Sorry, you need to fix me",
"Sorry, I cannot do that because I'm broken"
],
"random_skill_errors": [
"Sorry, it seems I have a problem with the \"%skill_name%\" skill from the \"%domain_name%\" domain",
"Sorry, I have an issue with the \"%skill_name%\" skill from the \"%domain_name%\" domain",
"Sorry, I've got an error with the \"%skill_name%\" skill from the \"%domain_name%\" domain",
"Sorry, the \"%skill_name%\" skill from the \"%domain_name%\" domain is broken"
],
"random_unknown_intents": [
"Sorry, I still don't know this, but you can help me to understand by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">creating a pull request</a>",
"Sorry, you should teach me this request. You can teach me by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">creating a pull request</a>",
"Sorry, I cannot answer that. Let me answer you in the future by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">creating a pull request</a>",
"Sorry, you have to educate me more. You can help me with that by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">contributing to my code</a>",
"Sorry, I don't understand your query",
"random_errors": [
"Sorry, there is a problem with my system",
"Sorry, I don't work correctly",
"Sorry, you need to fix me",
"Sorry, I cannot do that because I'm broken"
],
"random_skill_errors": [
"Sorry, it seems I have a problem with the \"%skill_name%\" skill from the \"%domain_name%\" domain",
"Sorry, I have an issue with the \"%skill_name%\" skill from the \"%domain_name%\" domain",
"Sorry, I've got an error with the \"%skill_name%\" skill from the \"%domain_name%\" domain",
"Sorry, the \"%skill_name%\" skill from the \"%domain_name%\" domain is broken"
],
"random_unknown_intents": [
"Sorry, I still don't know this, but you can help me to understand by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">creating a pull request</a>",
"Sorry, you should teach me this request. You can teach me by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">creating a pull request</a>",
"Sorry, I cannot answer that. Let me answer you in the future by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">creating a pull request</a>",
"Sorry, you have to educate me more. You can help me with that by <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">contributing to my code</a>",
"Sorry, I don't understand your query",
"Sorry, I'm still very young, I didn't get your point"
],
"random_not_sure": [
"Sorry, you may repeat in an another way",
"Sorry, I'm not sure I understood correctly",
"Sorry, I'm not sure for what you asked, please repeat with a different way",
"Sorry, please repeat again by formulating differently",
"Sorry, I didn't correctly clean my ears today! Oh wait, I'm your personal assistant then please try again with a new way"
],
],
"random_not_sure": [
"Sorry, you may repeat in an another way",
"Sorry, I'm not sure I understood correctly",
"Sorry, I'm not sure for what you asked, please repeat with a different way",
"Sorry, please repeat again by formulating differently",
"Sorry, I didn't correctly clean my ears today! Oh wait, I'm your personal assistant then please try again with a new way"
],
"random_not_able": [
"Sorry, I'm not able to answer. I understand what you said, but please repeat in another way",
"Sorry, I have a blackout, I cannot answer that. I understand what you said, but try to repeat in another way"
@ -63,5 +61,5 @@
"Aah, you want to change the subject, sure",
"Mmmh, as you wish, let's switch conversation"
]
}
}
}

@ -1,45 +1,43 @@
{
"answers": {
"success": {
},
"errors": {
"not_found": "Désolé, il semblerait que je n'arrive pas à trouver ça",
"nlu": "L'erreur semble provenir de ma compréhension de langage naturel. Voici plus de détails au sujet de cette dernière : \"%error%\""
},
"answers": {
"success": {},
"errors": {
"not_found": "Désolé, il semblerait que je n'arrive pas à trouver ça",
"nlu": "L'erreur semble provenir de ma compréhension de langage naturel. Voici plus de détails au sujet de cette dernière : \"%error%\""
},
"synchronizer": {
"syncing_direct": "Je vais maintenant synchroniser le contenu téléchargé sur votre appareil actuel. Ne vous inquiétez pas, je vous préviendrai lorsque j'aurai terminé",
"synced_direct": "Le nouveau contenu a été synchronisé sur votre appareil",
"syncing_google_drive": "Je vais maintenant synchroniser le contenu téléchargé sur Google Drive. Ne vous inquiétez pas, je vous préviendrai lorsque j'aurai terminé",
"synced_google_drive": "Le nouveau contenu est maintenant disponible sur Google Drive"
},
"random_errors": [
"Désolé, il y a un problème avec mon système",
"Désolé, je ne fonctionne pas correctement",
"Désolé, vous devez me réparer",
"Désolé, je ne peux aboutir à votre demande parce que je suis cassé"
],
"random_skill_errors": [
"Désolé, il semblerait y avoir un problème avec le skill \"%skill_name%\" du domaine \"%domain_name%\"",
"Désolé, j'ai un problème avec le skill \"%skill_name%\" du domaine \"%domain_name%\"",
"Désolé, j'ai une erreur avec le skill \"%skill_name%\" du domaine \"%domain_name%\"",
"Désolé, le skill \"%skill_name%\" du domaine \"%domain_name%\" est cassé"
],
"random_unknown_intents": [
"Désolé, je ne connais pas encore ça, mais vous pouvez m'aider à comprendre en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">créant une pull request</a>",
"Désolé, vous devriez m'apprendre cette requête. Vous pouvez m'apprendre en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">créant une pull request</a>",
"Désolé, je ne peux pas répondre à ça. Laissez moi vous répondre à l'avenir en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">créant une pull request</a>",
"Désolé, vous devez m'éduquer un peu plus. Vous pouvez m'aider avec ça en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">contribuant à mon code</a>",
"Désolé, je ne comprends pas votre requête",
"random_errors": [
"Désolé, il y a un problème avec mon système",
"Désolé, je ne fonctionne pas correctement",
"Désolé, vous devez me réparer",
"Désolé, je ne peux aboutir à votre demande parce que je suis cassé"
],
"random_skill_errors": [
"Désolé, il semblerait y avoir un problème avec le skill \"%skill_name%\" du domaine \"%domain_name%\"",
"Désolé, j'ai un problème avec le skill \"%skill_name%\" du domaine \"%domain_name%\"",
"Désolé, j'ai une erreur avec le skill \"%skill_name%\" du domaine \"%domain_name%\"",
"Désolé, le skill \"%skill_name%\" du domaine \"%domain_name%\" est cassé"
],
"random_unknown_intents": [
"Désolé, je ne connais pas encore ça, mais vous pouvez m'aider à comprendre en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">créant une pull request</a>",
"Désolé, vous devriez m'apprendre cette requête. Vous pouvez m'apprendre en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">créant une pull request</a>",
"Désolé, je ne peux pas répondre à ça. Laissez moi vous répondre à l'avenir en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">créant une pull request</a>",
"Désolé, vous devez m'éduquer un peu plus. Vous pouvez m'aider avec ça en <a href=\"https://github.com/leon-ai/leon/blob/develop/.github/CONTRIBUTING.md\" target=\"_blank\">contribuant à mon code</a>",
"Désolé, je ne comprends pas votre requête",
"Désolé, je suis encore très jeune, je n'ai pas compris votre demande"
],
"random_not_sure": [
"Désolé, vous pouvez répéter d'une autre façon",
"Désolé, je ne suis pas sûr de comprendre",
"Désolé, je ne suis pas certain de votre demande, merci de répéter d'une manière différente",
"Désolé, merci de répéter à nouveau en formulant différemment",
"Désolé, je n'ai pas nettoyé mes oreilles correctement ! Attendez-voir, je suis votre assistant personnel, je vous prie donc de répéter d'une nouvelle façon"
],
],
"random_not_sure": [
"Désolé, vous pouvez répéter d'une autre façon",
"Désolé, je ne suis pas sûr de comprendre",
"Désolé, je ne suis pas certain de votre demande, merci de répéter d'une manière différente",
"Désolé, merci de répéter à nouveau en formulant différemment",
"Désolé, je n'ai pas nettoyé mes oreilles correctement ! Attendez-voir, je suis votre assistant personnel, je vous prie donc de répéter d'une nouvelle façon"
],
"random_not_able": [
"Désolé, je ne suis pas capable de répondre. J'ai compris ce que vous avez dit, mais je vous prie de répéter d'une autre façon",
"Désolé, j'ai un trou de mémoire, je ne peux pas répondre à ça. J'ai compris ce que vous disiez, mais essayez voir d'une autre façon s'il vous plaît"
@ -58,5 +56,5 @@
"Vous êtes génial, mais je n'ai pas encore appris cette langue",
"Ça ressemble à une lautre langue que je ne peux pas comprendre pour le moment"
]
}
}
}

@ -3,17 +3,13 @@
"rouge": {
"synonyms": ["rouge"],
"data": {
"usage": [
"..."
]
"usage": ["..."]
}
},
"bleu": {
"synonyms": ["bleu"],
"data": {
"usage": [
"..."
]
"usage": ["..."]
}
}
}

@ -3,25 +3,19 @@
"bas": {
"synonyms": ["bas", "basse"],
"data": {
"value": [
"LOW"
]
"value": ["LOW"]
}
},
"moyen": {
"synonyms": ["moyen"],
"data": {
"value": [
"MEDIUM"
]
"value": ["MEDIUM"]
}
},
"haut": {
"synonyms": ["haut", "haute"],
"data": {
"value": [
"HIGH"
]
"value": ["HIGH"]
}
}
}

@ -1,10 +1,9 @@
{
"langs": {
"langs": {
"en-US": {
"short": "en",
"min_confidence": 0.5,
"fallbacks": [
]
"fallbacks": []
},
"fr-FR": {
"short": "fr",
@ -18,5 +17,5 @@
}
]
}
}
}
}

@ -3,9 +3,7 @@
{
"method": "POST",
"route": "/api/action/games/akinator/choose_thematic",
"params": [
"thematic"
],
"params": ["thematic"],
"entitiesType": "trim"
},
{
@ -46,9 +44,7 @@
{
"method": "POST",
"route": "/api/action/games/rochambeau/play",
"params": [
"handsign"
],
"params": ["handsign"],
"entitiesType": "trim"
},
{
@ -124,10 +120,7 @@
{
"method": "POST",
"route": "/api/action/news/github_trends/run",
"params": [
"number",
"daterange"
],
"params": ["number", "daterange"],
"entitiesType": "builtIn"
},
{
@ -138,9 +131,7 @@
{
"method": "POST",
"route": "/api/action/productivity/todo_list/create_list",
"params": [
"list"
],
"params": ["list"],
"entitiesType": "trim"
},
{
@ -151,53 +142,37 @@
{
"method": "POST",
"route": "/api/action/productivity/todo_list/view_list",
"params": [
"list"
],
"params": ["list"],
"entitiesType": "trim"
},
{
"method": "POST",
"route": "/api/action/productivity/todo_list/rename_list",
"params": [
"old_list",
"new_list"
],
"params": ["old_list", "new_list"],
"entitiesType": "trim"
},
{
"method": "POST",
"route": "/api/action/productivity/todo_list/delete_list",
"params": [
"list"
],
"params": ["list"],
"entitiesType": "trim"
},
{
"method": "POST",
"route": "/api/action/productivity/todo_list/add_todos",
"params": [
"todos",
"list"
],
"params": ["todos", "list"],
"entitiesType": "trim"
},
{
"method": "POST",
"route": "/api/action/productivity/todo_list/complete_todos",
"params": [
"todos",
"list"
],
"params": ["todos", "list"],
"entitiesType": "trim"
},
{
"method": "POST",
"route": "/api/action/productivity/todo_list/uncheck_todos",
"params": [
"todos",
"list"
],
"params": ["todos", "list"],
"entitiesType": "trim"
},
{
@ -218,9 +193,7 @@
{
"method": "POST",
"route": "/api/action/utilities/is_it_down/run",
"params": [
"url"
],
"params": ["url"],
"entitiesType": "builtIn"
},
{
@ -234,4 +207,4 @@
"params": []
}
]
}
}

@ -2,6 +2,7 @@
This node enables the wake word "Leon". Once this is running, you can
call Leon by saying his name according to the language you chose.
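
As a rough, hypothetical sketch of the flow this node plugs into: once the wake word is detected, the node emits a `hotword-detected` event over the socket, and the server then asks the web app to start recording the utterance through an `enable-record` event. Only those two event names (and the `{ hotword, buffer }` payload) come from the code in this repository; the server wiring below is illustrative, not Leon's actual server implementation, and the port is assumed.

```js
// Illustrative relay only: event names and payload shape come from this repo's
// hotword/web-app code, everything else (port, logging) is assumed for the example.
const { Server } = require('socket.io')

const io = new Server(1337) // assumed port

io.on('connection', (socket) => {
  // Emitted by the hotword node once the wake word is heard
  socket.on('hotword-detected', (data) => {
    console.log('Hotword detected', data.hotword)
    // Ask the web app client(s) to start recording the spoken utterance
    io.emit('enable-record')
  })
})
```

In the actual code, the web app side of this handshake is visible earlier in this diff, where the client listens for `enable-record` and calls `rec.start()` on its recorder.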
## Getting Started
### Installation

@ -22,62 +22,61 @@ socket.on('connect', () => {
console.log('Waiting for hotword...')
})
request.get(`${url}/api/v1/info`)
.end((err, res) => {
if (err || !res.ok) {
if (!err.response) {
console.error(`Failed to reach the server: ${err}`)
} else {
console.error(err.response.error.message)
}
request.get(`${url}/api/v1/info`).end((err, res) => {
if (err || !res.ok) {
if (!err.response) {
console.error(`Failed to reach the server: ${err}`)
} else {
const models = new Models()
console.error(err.response.error.message)
}
} else {
const models = new Models()
models.add({
file: `${__dirname}/models/leon-${lang}.pmdl`,
sensitivity: '0.5',
hotwords: `leon-${lang}`
})
models.add({
file: `${__dirname}/models/leon-${lang}.pmdl`,
sensitivity: '0.5',
hotwords: `leon-${lang}`
})
const detector = new Detector({
resource: `${__dirname}/node_modules/@bugsounet/snowboy/resources/common.res`,
models,
audioGain: 2.0,
applyFrontend: true
})
const detector = new Detector({
resource: `${__dirname}/node_modules/@bugsounet/snowboy/resources/common.res`,
models,
audioGain: 2.0,
applyFrontend: true
})
/*detector.on('silence', () => {
/*detector.on('silence', () => {
})*/
detector.on('sound', (/* buffer */) => {
/**
* <buffer> contains the last chunk of the audio that triggers the "sound" event.
* It could be written to a wav stream
*/
})
detector.on('sound', (/* buffer */) => {
/**
* <buffer> contains the last chunk of the audio that triggers the "sound" event.
* It could be written to a wav stream
*/
})
detector.on('error', () => {
console.error('error')
})
detector.on('error', () => {
console.error('error')
})
detector.on('hotword', (index, hotword, buffer) => {
/**
* <buffer> contains the last chunk of the audio that triggers the "hotword" event.
* It could be written to a wav stream. You will have to use it
* together with the <buffer> in the "sound" event if you want to get audio
* data after the hotword
*/
const obj = { hotword, buffer }
detector.on('hotword', (index, hotword, buffer) => {
/**
* <buffer> contains the last chunk of the audio that triggers the "hotword" event.
* It could be written to a wav stream. You will have to use it
* together with the <buffer> in the "sound" event if you want to get audio
* data after the hotword
*/
const obj = { hotword, buffer }
console.log('Hotword detected', obj)
socket.emit('hotword-detected', obj)
})
console.log('Hotword detected', obj)
socket.emit('hotword-detected', obj)
})
const mic = record.start({
threshold: 0,
verbose: false
})
const mic = record.start({
threshold: 0,
verbose: false
})
mic.pipe(detector)
}
})
mic.pipe(detector)
}
})

@ -1,14 +1,7 @@
{
"verbose": false,
"watch": [
"server/src"
],
"watch": ["server/src"],
"ext": "ts,js,json",
"ignore": [
".git",
"node_modules",
"server/src/tmp",
"server/dist"
],
"ignore": [".git", "node_modules", "server/src/tmp", "server/dist"],
"exec": "ts-node server/src/index.ts"
}

package-lock.json (generated)
@ -50,6 +50,7 @@
"babel-plugin-module-resolver": "^4.1.0",
"cli-spinner": "^0.2.10",
"eslint": "^8.22.0",
"eslint-config-prettier": "^8.5.0",
"git-changelog": "^2.0.0",
"husky": "^7.0.0",
"inquirer": "^8.1.0",
@ -9150,6 +9151,18 @@
"url": "https://opencollective.com/eslint"
}
},
"node_modules/eslint-config-prettier": {
"version": "8.5.0",
"resolved": "https://registry.npmjs.org/eslint-config-prettier/-/eslint-config-prettier-8.5.0.tgz",
"integrity": "sha512-obmWKLUNCnhtQRKc+tmnYuQl0pFU1ibYJQ5BGhTVB08bHe9wC8qUeG7c08dj9XX+AuPj1YSGSQIHl1pnDHZR0Q==",
"dev": true,
"bin": {
"eslint-config-prettier": "bin/cli.js"
},
"peerDependencies": {
"eslint": ">=7.0.0"
}
},
"node_modules/eslint-scope": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz",
@ -26295,6 +26308,13 @@
}
}
},
"eslint-config-prettier": {
"version": "8.5.0",
"resolved": "https://registry.npmjs.org/eslint-config-prettier/-/eslint-config-prettier-8.5.0.tgz",
"integrity": "sha512-obmWKLUNCnhtQRKc+tmnYuQl0pFU1ibYJQ5BGhTVB08bHe9wC8qUeG7c08dj9XX+AuPj1YSGSQIHl1pnDHZR0Q==",
"dev": true,
"requires": {}
},
"eslint-scope": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/eslint-scope/-/eslint-scope-5.1.1.tgz",

@ -98,6 +98,7 @@
"babel-plugin-module-resolver": "^4.1.0",
"cli-spinner": "^0.2.10",
"eslint": "^8.22.0",
"eslint-config-prettier": "^8.5.0",
"git-changelog": "^2.0.0",
"husky": "^7.0.0",
"inquirer": "^8.1.0",

@ -5,9 +5,13 @@ import log from '@/helpers/log'
/**
* Build web app
*/
export default () => new Promise(async (resolve) => {
await command('vite --config app/vite.config.js build', { shell: true, stdout: 'inherit' })
export default () =>
new Promise(async (resolve) => {
await command('vite --config app/vite.config.js build', {
shell: true,
stdout: 'inherit'
})
log.success('Web app built')
resolve()
})
log.success('Web app built')
resolve()
})

@ -5,7 +5,7 @@ import buildApp from './build-app'
/**
* Execute the building app script
*/
(async () => {
;(async () => {
try {
await buildApp()
} catch (e) {

@ -2,4 +2,4 @@
<% _.forEach(sections, (section) => { if(section.commitsCount > 0) { %>### <%= section.title %>
<% _.forEach(section.commits, (commit) => { %> - <%= printCommit(commit, true) %><% }) %>
<% _.forEach(section.components.sort((a, b) => a !== b ? a < b ? -1 : 0 : 1), (component) => { %> - **<%= component.name %>:**
<% _.forEach(component.commits, (commit) => { %> <%= (component.commits.length > 1) ? ' -' : '' %> <%= printCommit(commit, true) %><% }) %><% }) %><% } %><% }) %>
<% _.forEach(component.commits, (commit) => { %> <%= (component.commits.length > 1) ? ' -' : '' %> <%= printCommit(commit, true) %><% }) %><% }) %><% } %><% }) %>

@ -1 +1,12 @@
{"lang":"en","domain":"leon","skill":"random_number","action":"run","utterance":"Give me a random number","slots":{},"entities":[],"current_entities":[],"resolvers":[],"current_resolvers":[]}
{
"lang": "en",
"domain": "leon",
"skill": "random_number",
"action": "run",
"utterance": "Give me a random number",
"slots": {},
"entities": [],
"current_entities": [],
"resolvers": [],
"current_resolvers": []
}

@ -6,49 +6,54 @@ import os from '@/helpers/os'
/**
* Check OS environment
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Checking OS environment...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Checking OS environment...')
const info = os.get()
const info = os.get()
if (info.type === 'windows') {
log.error('Voice offline mode is not available on Windows')
reject()
} else if (info.type === 'unknown') {
log.error('This OS is unknown, please open an issue to let us know about it')
reject()
} else {
try {
log.success(`You are running ${info.name}`)
log.info('Checking tools...')
if (info.type === 'windows') {
log.error('Voice offline mode is not available on Windows')
reject()
} else if (info.type === 'unknown') {
log.error(
'This OS is unknown, please open an issue to let us know about it'
)
reject()
} else {
try {
log.success(`You are running ${info.name}`)
log.info('Checking tools...')
await execa('tar', ['--version'])
log.success('"tar" found')
await execa('make', ['--version'])
log.success('"make" found')
await execa('tar', ['--version'])
log.success('"tar" found')
await execa('make', ['--version'])
log.success('"make" found')
if (info.type === 'macos') {
await execa('brew', ['--version'])
log.success('"brew" found')
await execa('curl', ['--version'])
log.success('"curl" found')
} else if (info.type === 'linux') {
await execa('apt-get', ['--version'])
log.success('"apt-get" found')
await execa('wget', ['--version'])
log.success('"wget" found')
if (info.type === 'macos') {
await execa('brew', ['--version'])
log.success('"brew" found')
await execa('curl', ['--version'])
log.success('"curl" found')
} else if (info.type === 'linux') {
await execa('apt-get', ['--version'])
log.success('"apt-get" found')
await execa('wget', ['--version'])
log.success('"wget" found')
}
resolve()
} catch (e) {
if (e.cmd) {
const cmd = e.cmd.substr(0, e.cmd.indexOf(' '))
log.error(
`The following command has failed: "${e.cmd}". "${cmd}" is maybe missing. To continue this setup, please install the required tool. More details about the failure: ${e}`
)
} else {
log.error(`Failed to prepare the environment: ${e}`)
}
reject(e)
}
resolve()
} catch (e) {
if (e.cmd) {
const cmd = e.cmd.substr(0, e.cmd.indexOf(' '))
log.error(`The following command has failed: "${e.cmd}". "${cmd}" is maybe missing. To continue this setup, please install the required tool. More details about the failure: ${e}`)
} else {
log.error(`Failed to prepare the environment: ${e}`)
}
reject(e)
}
}
})
})

@ -12,230 +12,329 @@ dotenv.config()
* Checking script
* Help to figure out what is installed or not
*/
export default () => new Promise(async (resolve, reject) => {
try {
const nodeMinRequiredVersion = '10'
const npmMinRequiredVersion = '5'
const pythonMinRequiredVersion = '3'
const flitePath = 'bin/flite/flite'
const coquiLanguageModelPath = 'bin/coqui/huge-vocabulary.scorer'
const amazonPath = 'core/config/voice/amazon.json'
const googleCloudPath = 'core/config/voice/google-cloud.json'
const watsonSttPath = 'core/config/voice/watson-stt.json'
const watsonTtsPath = 'core/config/voice/watson-tts.json'
const globalResolversNlpModelPath = 'core/data/models/leon-global-resolvers-model.nlp'
const skillsResolversNlpModelPath = 'core/data/models/leon-skills-resolvers-model.nlp'
const mainNlpModelPath = 'core/data/models/leon-main-model.nlp'
const report = {
can_run: { title: 'Run', type: 'error', v: true },
can_run_skill: { title: 'Run skills', type: 'error', v: true },
can_text: { title: 'Reply you by texting', type: 'error', v: true },
can_amazon_polly_tts: { title: 'Amazon Polly text-to-speech', type: 'warning', v: true },
can_google_cloud_tts: { title: 'Google Cloud text-to-speech', type: 'warning', v: true },
can_watson_tts: { title: 'Watson text-to-speech', type: 'warning', v: true },
can_offline_tts: { title: 'Offline text-to-speech', type: 'warning', v: true },
can_google_cloud_stt: { title: 'Google Cloud speech-to-text', type: 'warning', v: true },
can_watson_stt: { title: 'Watson speech-to-text', type: 'warning', v: true },
can_offline_stt: { title: 'Offline speech-to-text', type: 'warning', v: true }
}
log.title('Checking')
// Leon version checking
log.info('Leon version')
log.success(`${version}\n`);
// Environment checking
(await Promise.all([
command('node --version', { shell: true }),
command('npm --version', { shell: true }),
command('pipenv --version', { shell: true })
])).forEach((p) => {
log.info(p.command)
if (p.command.indexOf('node --version') !== -1
&& !semver.satisfies(semver.clean(p.stdout), `>=${nodeMinRequiredVersion}`)) {
Object.keys(report).forEach((item) => { if (report[item].type === 'error') report[item].v = false })
log.error(`${p.stdout}\nThe Node.js version must be >=${nodeMinRequiredVersion}. Please install it: https://nodejs.org (or use nvm)\n`)
} else if (p.command.indexOf('npm --version') !== -1
&& !semver.satisfies(semver.clean(p.stdout), `>=${npmMinRequiredVersion}`)) {
Object.keys(report).forEach((item) => { if (report[item].type === 'error') report[item].v = false })
log.error(`${p.stdout}\nThe npm version must be >=${npmMinRequiredVersion}. Please install it: https://www.npmjs.com/get-npm (or use nvm)\n`)
} else {
log.success(`${p.stdout}\n`)
}
});
(await Promise.all([
command('pipenv --where', { shell: true }),
command('pipenv run python --version', { shell: true })
])).forEach((p) => {
log.info(p.command)
if (p.command.indexOf('pipenv run python --version') !== -1
&& !semver.satisfies(p.stdout.split(' ')[1], `>=${pythonMinRequiredVersion}`)) {
Object.keys(report).forEach((item) => { if (report[item].type === 'error') report[item].v = false })
log.error(`${p.stdout}\nThe Python version must be >=${pythonMinRequiredVersion}. Please install it: https://www.python.org/downloads\n`)
} else {
log.success(`${p.stdout}\n`)
}
})
// Skill execution checking
export default () =>
new Promise(async (resolve, reject) => {
try {
const p = await command('pipenv run python bridges/python/main.py scripts/assets/intent-object.json', { shell: true })
log.info(p.command)
log.success(`${p.stdout}\n`)
} catch (e) {
log.info(e.command)
report.can_run_skill.v = false
log.error(`${e}\n`)
}
const nodeMinRequiredVersion = '10'
const npmMinRequiredVersion = '5'
const pythonMinRequiredVersion = '3'
const flitePath = 'bin/flite/flite'
const coquiLanguageModelPath = 'bin/coqui/huge-vocabulary.scorer'
const amazonPath = 'core/config/voice/amazon.json'
const googleCloudPath = 'core/config/voice/google-cloud.json'
const watsonSttPath = 'core/config/voice/watson-stt.json'
const watsonTtsPath = 'core/config/voice/watson-tts.json'
const globalResolversNlpModelPath =
'core/data/models/leon-global-resolvers-model.nlp'
const skillsResolversNlpModelPath =
'core/data/models/leon-skills-resolvers-model.nlp'
const mainNlpModelPath = 'core/data/models/leon-main-model.nlp'
const report = {
can_run: { title: 'Run', type: 'error', v: true },
can_run_skill: { title: 'Run skills', type: 'error', v: true },
can_text: { title: 'Reply to you by texting', type: 'error', v: true },
can_amazon_polly_tts: {
title: 'Amazon Polly text-to-speech',
type: 'warning',
v: true
},
can_google_cloud_tts: {
title: 'Google Cloud text-to-speech',
type: 'warning',
v: true
},
can_watson_tts: {
title: 'Watson text-to-speech',
type: 'warning',
v: true
},
can_offline_tts: {
title: 'Offline text-to-speech',
type: 'warning',
v: true
},
can_google_cloud_stt: {
title: 'Google Cloud speech-to-text',
type: 'warning',
v: true
},
can_watson_stt: {
title: 'Watson speech-to-text',
type: 'warning',
v: true
},
can_offline_stt: {
title: 'Offline speech-to-text',
type: 'warning',
v: true
}
}
// Global resolvers NLP model checking
log.title('Checking')
log.info('Global resolvers NLP model state')
if (!fs.existsSync(globalResolversNlpModelPath)
|| !Object.keys(fs.readFileSync(globalResolversNlpModelPath)).length) {
report.can_text.v = false
Object.keys(report).forEach((item) => { if (item.indexOf('stt') !== -1 || item.indexOf('tts') !== -1) report[item].v = false })
log.error('Global resolvers NLP model not found or broken. Try to generate a new one: "npm run train"\n')
} else {
log.success('Found and valid\n')
}
// Leon version checking
// Skills resolvers NLP model checking
log.info('Leon version')
log.success(`${version}\n`)
log.info('Skills resolvers NLP model state')
if (!fs.existsSync(skillsResolversNlpModelPath)
|| !Object.keys(fs.readFileSync(skillsResolversNlpModelPath)).length) {
report.can_text.v = false
Object.keys(report).forEach((item) => { if (item.indexOf('stt') !== -1 || item.indexOf('tts') !== -1) report[item].v = false })
log.error('Skills resolvers NLP model not found or broken. Try to generate a new one: "npm run train"\n')
} else {
log.success('Found and valid\n')
}
// Environment checking
;(
await Promise.all([
command('node --version', { shell: true }),
command('npm --version', { shell: true }),
command('pipenv --version', { shell: true })
])
).forEach((p) => {
log.info(p.command)
// Main NLP model checking
if (
p.command.indexOf('node --version') !== -1 &&
!semver.satisfies(
semver.clean(p.stdout),
`>=${nodeMinRequiredVersion}`
)
) {
Object.keys(report).forEach((item) => {
if (report[item].type === 'error') report[item].v = false
})
log.error(
`${p.stdout}\nThe Node.js version must be >=${nodeMinRequiredVersion}. Please install it: https://nodejs.org (or use nvm)\n`
)
} else if (
p.command.indexOf('npm --version') !== -1 &&
!semver.satisfies(
semver.clean(p.stdout),
`>=${npmMinRequiredVersion}`
)
) {
Object.keys(report).forEach((item) => {
if (report[item].type === 'error') report[item].v = false
})
log.error(
`${p.stdout}\nThe npm version must be >=${npmMinRequiredVersion}. Please install it: https://www.npmjs.com/get-npm (or use nvm)\n`
)
} else {
log.success(`${p.stdout}\n`)
}
})
;(
await Promise.all([
command('pipenv --where', { shell: true }),
command('pipenv run python --version', { shell: true })
])
).forEach((p) => {
log.info(p.command)
log.info('Main NLP model state')
if (!fs.existsSync(mainNlpModelPath)
|| !Object.keys(fs.readFileSync(mainNlpModelPath)).length) {
report.can_text.v = false
Object.keys(report).forEach((item) => { if (item.indexOf('stt') !== -1 || item.indexOf('tts') !== -1) report[item].v = false })
log.error('Main NLP model not found or broken. Try to generate a new one: "npm run train"\n')
} else {
log.success('Found and valid\n')
}
if (
p.command.indexOf('pipenv run python --version') !== -1 &&
!semver.satisfies(
p.stdout.split(' ')[1],
`>=${pythonMinRequiredVersion}`
)
) {
Object.keys(report).forEach((item) => {
if (report[item].type === 'error') report[item].v = false
})
log.error(
`${p.stdout}\nThe Python version must be >=${pythonMinRequiredVersion}. Please install it: https://www.python.org/downloads\n`
)
} else {
log.success(`${p.stdout}\n`)
}
})
// TTS checking
// Skill execution checking
log.info('Amazon Polly TTS')
try {
const json = JSON.parse(fs.readFileSync(amazonPath))
if (json.credentials.accessKeyId === '' || json.credentials.secretAccessKey === '') {
try {
const p = await command(
'pipenv run python bridges/python/main.py scripts/assets/intent-object.json',
{ shell: true }
)
log.info(p.command)
log.success(`${p.stdout}\n`)
} catch (e) {
log.info(e.command)
report.can_run_skill.v = false
log.error(`${e}\n`)
}
// Global resolvers NLP model checking
log.info('Global resolvers NLP model state')
if (
!fs.existsSync(globalResolversNlpModelPath) ||
!Object.keys(fs.readFileSync(globalResolversNlpModelPath)).length
) {
report.can_text.v = false
Object.keys(report).forEach((item) => {
if (item.indexOf('stt') !== -1 || item.indexOf('tts') !== -1)
report[item].v = false
})
log.error(
'Global resolvers NLP model not found or broken. Try to generate a new one: "npm run train"\n'
)
} else {
log.success('Found and valid\n')
}
// Skills resolvers NLP model checking
log.info('Skills resolvers NLP model state')
if (
!fs.existsSync(skillsResolversNlpModelPath) ||
!Object.keys(fs.readFileSync(skillsResolversNlpModelPath)).length
) {
report.can_text.v = false
Object.keys(report).forEach((item) => {
if (item.indexOf('stt') !== -1 || item.indexOf('tts') !== -1)
report[item].v = false
})
log.error(
'Skills resolvers NLP model not found or broken. Try to generate a new one: "npm run train"\n'
)
} else {
log.success('Found and valid\n')
}
// Main NLP model checking
log.info('Main NLP model state')
if (
!fs.existsSync(mainNlpModelPath) ||
!Object.keys(fs.readFileSync(mainNlpModelPath)).length
) {
report.can_text.v = false
Object.keys(report).forEach((item) => {
if (item.indexOf('stt') !== -1 || item.indexOf('tts') !== -1)
report[item].v = false
})
log.error(
'Main NLP model not found or broken. Try to generate a new one: "npm run train"\n'
)
} else {
log.success('Found and valid\n')
}
// TTS checking
log.info('Amazon Polly TTS')
try {
const json = JSON.parse(fs.readFileSync(amazonPath))
if (
json.credentials.accessKeyId === '' ||
json.credentials.secretAccessKey === ''
) {
report.can_amazon_polly_tts.v = false
log.warning('Amazon Polly TTS is not yet configured\n')
} else {
log.success('Configured\n')
}
} catch (e) {
report.can_amazon_polly_tts.v = false
log.warning('Amazon Polly TTS is not yet configured\n')
} else {
log.success('Configured\n')
log.warning(`Amazon Polly TTS is not yet configured: ${e}\n`)
}
} catch (e) {
report.can_amazon_polly_tts.v = false
log.warning(`Amazon Polly TTS is not yet configured: ${e}\n`)
}
log.info('Google Cloud TTS/STT')
try {
const json = JSON.parse(fs.readFileSync(googleCloudPath))
const results = []
Object.keys(json).forEach((item) => { if (json[item] === '') results.push(false) })
if (results.includes(false)) {
log.info('Google Cloud TTS/STT')
try {
const json = JSON.parse(fs.readFileSync(googleCloudPath))
const results = []
Object.keys(json).forEach((item) => {
if (json[item] === '') results.push(false)
})
if (results.includes(false)) {
report.can_google_cloud_tts.v = false
report.can_google_cloud_stt.v = false
log.warning('Google Cloud TTS/STT is not yet configured\n')
} else {
log.success('Configured\n')
}
} catch (e) {
report.can_google_cloud_tts.v = false
report.can_google_cloud_stt.v = false
log.warning('Google Cloud TTS/STT is not yet configured\n')
} else {
log.success('Configured\n')
log.warning(`Google Cloud TTS/STT is not yet configured: ${e}\n`)
}
} catch (e) {
report.can_google_cloud_tts.v = false
report.can_google_cloud_stt.v = false
log.warning(`Google Cloud TTS/STT is not yet configured: ${e}\n`)
}
log.info('Watson TTS')
try {
const json = JSON.parse(fs.readFileSync(watsonTtsPath))
const results = []
Object.keys(json).forEach((item) => { if (json[item] === '') results.push(false) })
if (results.includes(false)) {
log.info('Watson TTS')
try {
const json = JSON.parse(fs.readFileSync(watsonTtsPath))
const results = []
Object.keys(json).forEach((item) => {
if (json[item] === '') results.push(false)
})
if (results.includes(false)) {
report.can_watson_tts.v = false
log.warning('Watson TTS is not yet configured\n')
} else {
log.success('Configured\n')
}
} catch (e) {
report.can_watson_tts.v = false
log.warning('Watson TTS is not yet configured\n')
} else {
log.success('Configured\n')
log.warning(`Watson TTS is not yet configured: ${e}\n`)
}
} catch (e) {
report.can_watson_tts.v = false
log.warning(`Watson TTS is not yet configured: ${e}\n`)
}
log.info('Offline TTS')
if (!fs.existsSync(flitePath)) {
report.can_offline_tts.v = false
log.warning(`Cannot find ${flitePath}. You can set up the offline TTS by running: "npm run setup:offline-tts"\n`)
} else {
log.success(`Found Flite at ${flitePath}\n`)
}
log.info('Offline TTS')
if (!fs.existsSync(flitePath)) {
report.can_offline_tts.v = false
log.warning(
`Cannot find ${flitePath}. You can set up the offline TTS by running: "npm run setup:offline-tts"\n`
)
} else {
log.success(`Found Flite at ${flitePath}\n`)
}
log.info('Watson STT')
try {
const json = JSON.parse(fs.readFileSync(watsonSttPath))
const results = []
Object.keys(json).forEach((item) => { if (json[item] === '') results.push(false) })
if (results.includes(false)) {
log.info('Watson STT')
try {
const json = JSON.parse(fs.readFileSync(watsonSttPath))
const results = []
Object.keys(json).forEach((item) => {
if (json[item] === '') results.push(false)
})
if (results.includes(false)) {
report.can_watson_stt.v = false
log.warning('Watson STT is not yet configured\n')
} else {
log.success('Configured\n')
}
} catch (e) {
report.can_watson_stt.v = false
log.warning('Watson STT is not yet configured\n')
} else {
log.success('Configured\n')
log.warning(`Watson STT is not yet configured: ${e}`)
}
log.info('Offline STT')
if (!fs.existsSync(coquiLanguageModelPath)) {
report.can_offline_stt.v = false
log.warning(
`Cannot find ${coquiLanguageModelPath}. You can set up the offline STT by running: "npm run setup:offline-stt"`
)
} else {
log.success(`Found Coqui language model at ${coquiLanguageModelPath}`)
}
// Report
log.title('Report')
log.info('Here is the diagnosis about your current setup')
Object.keys(report).forEach((item) => {
if (report[item].v === true) {
log.success(report[item].title)
} else {
log[report[item].type](report[item].title)
}
})
log.default('')
if (report.can_run.v && report.can_run_skill.v && report.can_text.v) {
log.success('Hooray! Leon can run correctly')
log.info(
'If you have some yellow warnings, it is all good. It means some entities are not yet configured'
)
} else {
log.error('Please fix the errors above')
}
resolve()
} catch (e) {
report.can_watson_stt.v = false
log.warning(`Watson STT is not yet configured: ${e}`)
log.error(e)
reject()
}
log.info('Offline STT')
if (!fs.existsSync(coquiLanguageModelPath)) {
report.can_offline_stt.v = false
log.warning(`Cannot find ${coquiLanguageModelPath}. You can set up the offline STT by running: "npm run setup:offline-stt"`)
} else {
log.success(`Found Coqui language model at ${coquiLanguageModelPath}`)
}
// Report
log.title('Report')
log.info('Here is the diagnosis about your current setup')
Object.keys(report).forEach((item) => {
if (report[item].v === true) {
log.success(report[item].title)
} else {
log[report[item].type](report[item].title)
}
})
log.default('')
if (report.can_run.v && report.can_run_skill.v && report.can_text.v) {
log.success('Hooray! Leon can run correctly')
log.info('If you have some yellow warnings, it is all good. It means some entities are not yet configured')
} else {
log.error('Please fix the errors above')
}
resolve()
} catch (e) {
log.error(e)
reject()
}
})
})

View File

@ -7,35 +7,41 @@ import domain from '@/helpers/domain'
/**
 * This script deletes test DB files if they exist
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Cleaning test DB files...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Cleaning test DB files...')
const [domainKeys, domains] = await Promise.all([domain.list(), domain.getDomainsObj()])
const [domainKeys, domains] = await Promise.all([
domain.list(),
domain.getDomainsObj()
])
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
for (let j = 0; j < skillKeys.length; j += 1) {
const currentSkill = currentDomain.skills[skillKeys[j]]
for (let j = 0; j < skillKeys.length; j += 1) {
const currentSkill = currentDomain.skills[skillKeys[j]]
try {
// TODO: handle case where the memory folder contains multiple DB nodes
const dbFolder = join(currentSkill.path, 'memory')
const dbTestFiles = fs.readdirSync(dbFolder).filter((entity) => entity.indexOf('.spec.json') !== -1)
try {
// TODO: handle case where the memory folder contains multiple DB nodes
const dbFolder = join(currentSkill.path, 'memory')
const dbTestFiles = fs
.readdirSync(dbFolder)
.filter((entity) => entity.indexOf('.spec.json') !== -1)
if (dbTestFiles.length > 0) {
log.info(`Deleting ${dbTestFiles[0]}...`)
fs.unlinkSync(join(dbFolder, dbTestFiles[0]))
log.success(`${dbTestFiles[0]} deleted`)
if (dbTestFiles.length > 0) {
log.info(`Deleting ${dbTestFiles[0]}...`)
fs.unlinkSync(join(dbFolder, dbTestFiles[0]))
log.success(`${dbTestFiles[0]} deleted`)
}
} catch (e) {
log.error(`Failed to clean: "${skillKeys[j]}" test DB file`)
reject(e)
}
} catch (e) {
log.error(`Failed to clean: "${skillKeys[j]}" test DB file`)
reject(e)
}
}
}
log.success('Cleaning done')
resolve()
})
log.success('Cleaning done')
resolve()
})

View File

@ -13,7 +13,8 @@ const commitEditMsgFile = '.git/COMMIT_EDITMSG'
if (fs.existsSync(commitEditMsgFile)) {
try {
const commitMessage = fs.readFileSync(commitEditMsgFile, 'utf8')
const regex = '(build|BREAKING|chore|ci|docs|feat|fix|perf|refactor|style|test)(\\((web app|docker|server|hotword|skill\\/([\\w-]+)))?\\)?: .{1,50}' // eslint-disable-line no-useless-escape
const regex =
'(build|BREAKING|chore|ci|docs|feat|fix|perf|refactor|style|test)(\\((web app|docker|server|hotword|skill\\/([\\w-]+)))?\\)?: .{1,50}' // eslint-disable-line no-useless-escape
if (commitMessage.match(regex) !== null) {
log.success('Commit message validated')

View File

@ -13,59 +13,64 @@ dotenv.config()
 * Generate the HTTP API key
 * and save it in the .env file
*/
const generateHttpApiKey = () => new Promise(async (resolve, reject) => {
log.info('Generating the HTTP API key...')
const generateHttpApiKey = () =>
new Promise(async (resolve, reject) => {
log.info('Generating the HTTP API key...')
try {
const shasum = crypto.createHash('sha1')
const str = string.random(11)
const dotEnvPath = path.join(process.cwd(), '.env')
const envVarKey = 'LEON_HTTP_API_KEY'
let content = fs.readFileSync(dotEnvPath, 'utf8')
try {
const shasum = crypto.createHash('sha1')
const str = string.random(11)
const dotEnvPath = path.join(process.cwd(), '.env')
const envVarKey = 'LEON_HTTP_API_KEY'
let content = fs.readFileSync(dotEnvPath, 'utf8')
shasum.update(str)
const sha1 = shasum.digest('hex')
shasum.update(str)
const sha1 = shasum.digest('hex')
let lines = content.split('\n')
lines = lines.map((line) => {
if (line.indexOf(`${envVarKey}=`) !== -1) {
line = `${envVarKey}=${sha1}`
}
let lines = content.split('\n')
lines = lines.map((line) => {
if (line.indexOf(`${envVarKey}=`) !== -1) {
line = `${envVarKey}=${sha1}`
}
return line
})
content = lines.join('\n')
fs.writeFileSync(dotEnvPath, content)
log.success('HTTP API key generated')
resolve()
} catch (e) {
log.error(e.message)
reject(e)
}
})
export default () => new Promise(async (resolve, reject) => {
try {
if (!process.env.LEON_HTTP_API_KEY || process.env.LEON_HTTP_API_KEY === '') {
await generateHttpApiKey()
} else if (!process.env.IS_DOCKER) {
const answer = await prompt({
type: 'confirm',
name: 'generate.httpApiKey',
message: 'Do you want to regenerate the HTTP API key?',
default: false
return line
})
if (answer.generate.httpApiKey === true) {
await generateHttpApiKey()
}
}
content = lines.join('\n')
resolve()
} catch (e) {
reject(e)
}
})
fs.writeFileSync(dotEnvPath, content)
log.success('HTTP API key generated')
resolve()
} catch (e) {
log.error(e.message)
reject(e)
}
})
export default () =>
new Promise(async (resolve, reject) => {
try {
if (
!process.env.LEON_HTTP_API_KEY ||
process.env.LEON_HTTP_API_KEY === ''
) {
await generateHttpApiKey()
} else if (!process.env.IS_DOCKER) {
const answer = await prompt({
type: 'confirm',
name: 'generate.httpApiKey',
message: 'Do you want to regenerate the HTTP API key?',
default: false
})
if (answer.generate.httpApiKey === true) {
await generateHttpApiKey()
}
}
resolve()
} catch (e) {
reject(e)
}
})

View File

@ -14,121 +14,148 @@ dotenv.config()
* Parse and convert skills config into a JSON file understandable by Fastify
* to dynamically generate endpoints so skills can be accessible over HTTP
*/
export default () => new Promise(async (resolve, reject) => {
const supportedMethods = ['DELETE', 'GET', 'HEAD', 'PATCH', 'POST', 'PUT', 'OPTIONS']
const outputFile = '/core/skills-endpoints.json'
const outputFilePath = path.join(__dirname, `../..${outputFile}`)
const lang = langs[process.env.LEON_HTTP_API_LANG].short
export default () =>
new Promise(async (resolve, reject) => {
const supportedMethods = [
'DELETE',
'GET',
'HEAD',
'PATCH',
'POST',
'PUT',
'OPTIONS'
]
const outputFile = '/core/skills-endpoints.json'
const outputFilePath = path.join(__dirname, `../..${outputFile}`)
const lang = langs[process.env.LEON_HTTP_API_LANG].short
try {
const [domainKeys, domains] = await Promise.all([domain.list(), domain.getDomainsObj()])
const finalObj = {
endpoints: []
}
let isFileNeedToBeGenerated = true
let loopIsBroken = false
try {
const [domainKeys, domains] = await Promise.all([
domain.list(),
domain.getDomainsObj()
])
const finalObj = {
endpoints: []
}
let isFileNeedToBeGenerated = true
let loopIsBroken = false
// Check if a new routing generation is necessary
if (fs.existsSync(outputFilePath)) {
const mtimeEndpoints = fs.statSync(outputFilePath).mtime.getTime()
// Check if a new routing generation is necessary
if (fs.existsSync(outputFilePath)) {
const mtimeEndpoints = fs.statSync(outputFilePath).mtime.getTime()
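// Compare each skill config file's mtime with the endpoints file; regenerate only if a config is newer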
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
// Browse skills
for (let j = 0; j < skillKeys.length; j += 1) {
const skillFriendlyName = skillKeys[j]
const currentSkill = currentDomain.skills[skillFriendlyName]
const fileInfo = fs.statSync(path.join(currentSkill.path, 'config', `${lang}.json`))
const mtime = fileInfo.mtime.getTime()
// Browse skills
for (let j = 0; j < skillKeys.length; j += 1) {
const skillFriendlyName = skillKeys[j]
const currentSkill = currentDomain.skills[skillFriendlyName]
const fileInfo = fs.statSync(
path.join(currentSkill.path, 'config', `${lang}.json`)
)
const mtime = fileInfo.mtime.getTime()
if (mtime > mtimeEndpoints) {
loopIsBroken = true
if (mtime > mtimeEndpoints) {
loopIsBroken = true
break
}
}
if (loopIsBroken) {
break
}
}
if (loopIsBroken) {
break
}
if ((i + 1) === domainKeys.length) {
log.success(`${outputFile} is already up-to-date`)
isFileNeedToBeGenerated = false
if (i + 1 === domainKeys.length) {
log.success(`${outputFile} is already up-to-date`)
isFileNeedToBeGenerated = false
}
}
}
}
// Force if a language is given
if (isFileNeedToBeGenerated) {
log.info('Parsing skills configuration...')
// Force if a language is given
if (isFileNeedToBeGenerated) {
log.info('Parsing skills configuration...')
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
// Browse skills
for (let j = 0; j < skillKeys.length; j += 1) {
const skillFriendlyName = skillKeys[j]
const currentSkill = currentDomain.skills[skillFriendlyName]
// Browse skills
for (let j = 0; j < skillKeys.length; j += 1) {
const skillFriendlyName = skillKeys[j]
const currentSkill = currentDomain.skills[skillFriendlyName]
const configFilePath = path.join(currentSkill.path, 'config', `${lang}.json`)
const { actions } = JSON.parse(fs.readFileSync(configFilePath, 'utf8'))
const actionsKeys = Object.keys(actions)
const configFilePath = path.join(
currentSkill.path,
'config',
`${lang}.json`
)
const { actions } = JSON.parse(
fs.readFileSync(configFilePath, 'utf8')
)
const actionsKeys = Object.keys(actions)
for (let k = 0; k < actionsKeys.length; k += 1) {
const action = actionsKeys[k]
const actionObj = actions[action]
const { entities, http_api } = actionObj // eslint-disable-line camelcase
let finalMethod = (entities || http_api?.entities) ? 'POST' : 'GET'
for (let k = 0; k < actionsKeys.length; k += 1) {
const action = actionsKeys[k]
const actionObj = actions[action]
const { entities, http_api } = actionObj // eslint-disable-line camelcase
let finalMethod = entities || http_api?.entities ? 'POST' : 'GET'
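// Default to POST when the action declares entities, otherwise GET; an explicit http_api.method overrides this below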
// Only generate this route if it is not disabled from the skill config
if (!http_api?.disabled || (http_api?.disabled && http_api?.disabled === false)) {
if (http_api?.method) {
finalMethod = http_api.method.toUpperCase()
// Only generate this route if it is not disabled from the skill config
if (
!http_api?.disabled ||
(http_api?.disabled && http_api?.disabled === false)
) {
if (http_api?.method) {
finalMethod = http_api.method.toUpperCase()
}
if (!supportedMethods.includes(finalMethod)) {
reject(
`The "${finalMethod}" HTTP method of the ${currentDomain.name}/${currentSkill.name}/${action} action is not supported`
)
}
const endpoint = {
method: finalMethod.toUpperCase(),
route: `/api/action/${currentDomain.name}/${currentSkill.name}/${action}`,
params: []
}
if (http_api?.timeout) {
endpoint.timeout = http_api.timeout
}
if (entities) {
// Handle explicit trim entities
endpoint.entitiesType = 'trim'
endpoint.params = entities.map((entity) => entity.name)
} else if (http_api?.entities) {
// Handle built-in entities
endpoint.entitiesType = 'builtIn'
endpoint.params = http_api.entities.map(
(entity) => entity.entity
)
}
finalObj.endpoints.push(endpoint)
}
if (!supportedMethods.includes(finalMethod)) {
reject(`The "${finalMethod}" HTTP method of the ${currentDomain.name}/${currentSkill.name}/${action} action is not supported`)
}
const endpoint = {
method: finalMethod.toUpperCase(),
route: `/api/action/${currentDomain.name}/${currentSkill.name}/${action}`,
params: []
}
if (http_api?.timeout) {
endpoint.timeout = http_api.timeout
}
if (entities) {
// Handle explicit trim entities
endpoint.entitiesType = 'trim'
endpoint.params = entities.map((entity) => entity.name)
} else if (http_api?.entities) {
// Handle built-in entities
endpoint.entitiesType = 'builtIn'
endpoint.params = http_api.entities.map((entity) => entity.entity)
}
finalObj.endpoints.push(endpoint)
}
}
}
}
log.info(`Writing ${outputFile} file...`)
try {
fs.writeFileSync(outputFilePath, JSON.stringify(finalObj, null, 2))
log.success(`${outputFile} file generated`)
resolve()
} catch (e) {
reject(`Failed to generate ${outputFile} file: ${e.message}`)
log.info(`Writing ${outputFile} file...`)
try {
fs.writeFileSync(outputFilePath, JSON.stringify(finalObj, null, 2))
log.success(`${outputFile} file generated`)
resolve()
} catch (e) {
reject(`Failed to generate ${outputFile} file: ${e.message}`)
}
}
} catch (e) {
log.error(e.message)
reject(e)
}
} catch (e) {
log.error(e.message)
reject(e)
}
})
})

View File

@ -5,7 +5,7 @@ import generateHttpApiKey from './generate-http-api-key'
/**
 * Execute the HTTP API key generation script
*/
(async () => {
;(async () => {
try {
await generateHttpApiKey()
} catch (e) {

View File

@ -5,7 +5,7 @@ import generateSkillsEndpoints from './generate-skills-endpoints'
/**
 * Execute the skills endpoints generation script
*/
(async () => {
;(async () => {
try {
await generateSkillsEndpoints()
} catch (e) {

View File

@ -6,7 +6,7 @@ import loader from '@/helpers/loader'
/**
* This script ensures the correct coding syntax of the whole project
*/
(async () => {
;(async () => {
loader.start()
log.info('Linting...')
@ -25,13 +25,17 @@ import loader from '@/helpers/loader'
'"test/json/!**!/!*.js"',
'"test/unit/!**!/!*.js"'*/
]
const src = globs.join(' ')
await command(`npx eslint ${globs.join(' ')}`, { shell: true })
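// Format with Prettier first, lint with ESLint, then run a final Prettier check on the source globs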
await command(
`prettier --write . --ignore-path .gitignore && eslint ${src} --ignore-path .gitignore && prettier --check ${src} --ignore-path .gitignore`,
{ shell: true }
)
log.success('Looks great')
loader.stop()
} catch (e) {
log.error(`Does not look great: ${e.stdout}`)
log.error(`Does not look great: ${e.message}`)
loader.stop()
process.exit(1)
}

View File

@ -6,48 +6,55 @@ import log from '@/helpers/log'
/**
 * Generate the changelog for the given version
*/
export default (version) => new Promise(async (resolve, reject) => {
const changelog = 'CHANGELOG.md'
const tmpChangelog = 'TMP-CHANGELOG.md'
export default (version) =>
new Promise(async (resolve, reject) => {
const changelog = 'CHANGELOG.md'
const tmpChangelog = 'TMP-CHANGELOG.md'
log.info(`Generating ${changelog}...`)
log.info(`Generating ${changelog}...`)
try {
await command(`git-changelog --changelogrc .changelogrc --template scripts/assets/CHANGELOG-TEMPLATE.md --file scripts/tmp/${tmpChangelog} --version_name ${version}`, { shell: true })
} catch (e) {
log.error(`Error during git-changelog: ${e}`)
reject(e)
}
try {
log.info('Getting remote origin URL...')
log.info('Getting previous tag...')
const sh = await command('git config --get remote.origin.url && git tag | tail -n1', { shell: true })
const repoUrl = sh.stdout.substr(0, sh.stdout.lastIndexOf('.git'))
const previousTag = sh.stdout.substr(sh.stdout.indexOf('\n') + 1).trim()
const changelogData = fs.readFileSync(changelog, 'utf8')
const compareUrl = `${repoUrl}/compare/${previousTag}...v${version}`
let tmpData = fs.readFileSync(`scripts/tmp/${tmpChangelog}`, 'utf8')
log.success(`Remote origin URL gotten: ${repoUrl}.git`)
log.success(`Previous tag gotten: ${previousTag}`)
if (previousTag !== '') {
tmpData = tmpData.replace(version, `[${version}](${compareUrl})`)
try {
await command(
`git-changelog --changelogrc .changelogrc --template scripts/assets/CHANGELOG-TEMPLATE.md --file scripts/tmp/${tmpChangelog} --version_name ${version}`,
{ shell: true }
)
} catch (e) {
log.error(`Error during git-changelog: ${e}`)
reject(e)
}
fs.writeFile(changelog, `${tmpData}${changelogData}`, (err) => {
if (err) log.error(`Failed to write into file: ${err}`)
else {
fs.unlinkSync(`scripts/tmp/${tmpChangelog}`)
log.success(`${changelog} generated`)
resolve()
try {
log.info('Getting remote origin URL...')
log.info('Getting previous tag...')
const sh = await command(
'git config --get remote.origin.url && git tag | tail -n1',
{ shell: true }
)
const repoUrl = sh.stdout.substr(0, sh.stdout.lastIndexOf('.git'))
const previousTag = sh.stdout.substr(sh.stdout.indexOf('\n') + 1).trim()
const changelogData = fs.readFileSync(changelog, 'utf8')
const compareUrl = `${repoUrl}/compare/${previousTag}...v${version}`
let tmpData = fs.readFileSync(`scripts/tmp/${tmpChangelog}`, 'utf8')
log.success(`Remote origin URL gotten: ${repoUrl}.git`)
log.success(`Previous tag gotten: ${previousTag}`)
if (previousTag !== '') {
tmpData = tmpData.replace(version, `[${version}](${compareUrl})`)
}
})
} catch (e) {
log.error(`Error during git commands: ${e}`)
reject(e)
}
})
fs.writeFile(changelog, `${tmpData}${changelogData}`, (err) => {
if (err) log.error(`Failed to write into file: ${err}`)
else {
fs.unlinkSync(`scripts/tmp/${tmpChangelog}`)
log.success(`${changelog} generated`)
resolve()
}
})
} catch (e) {
log.error(`Error during git commands: ${e}`)
reject(e)
}
})

View File

@ -7,13 +7,14 @@ import generateChangelog from './generate-changelog'
/**
* Main entry of the release preparation
*/
(async () => {
;(async () => {
loader.start()
log.info('Preparing for release...')
const { argv } = process
const version = argv[2].toLowerCase()
const semverRegex = /^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(-(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*)?(\+[0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*)?$/
const semverRegex =
/^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(-(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*)?(\+[0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*)?$/
if (version.match(semverRegex) !== null) {
try {
@ -27,7 +28,9 @@ import generateChangelog from './generate-changelog'
loader.stop()
}
} else {
log.error('The version number does not match the Semantic Versioning rules (https://semver.org)')
log.error(
'The version number does not match the Semantic Versioning rules (https://semver.org)'
)
loader.stop()
}
})()

View File

@ -5,26 +5,28 @@ import log from '@/helpers/log'
/**
 * Update the version number in the files that need it
*/
export default (version) => new Promise(async (resolve, reject) => {
log.info('Updating version...')
export default (version) =>
new Promise(async (resolve, reject) => {
log.info('Updating version...')
const promises = []
const files = [
'package.json',
'package-lock.json'
]
const promises = []
const files = ['package.json', 'package-lock.json']
for (let i = 0; i < files.length; i += 1) {
promises.push(command(`json -I -f ${files[i]} -e 'this.version="${version}"'`, { shell: true }))
}
for (let i = 0; i < files.length; i += 1) {
promises.push(
command(`json -I -f ${files[i]} -e 'this.version="${version}"'`, {
shell: true
})
)
}
try {
await Promise.all(promises)
try {
await Promise.all(promises)
log.success(`Version updated to ${version}`)
resolve()
} catch (e) {
log.error(`Error while updating version: ${e.stderr}`)
reject(e)
}
})
log.success(`Version updated to ${version}`)
resolve()
} catch (e) {
log.error(`Error while updating version: ${e.stderr}`)
reject(e)
}
})

View File

@ -5,7 +5,7 @@ import check from './check'
/**
* Execute the checking script
*/
(async () => {
;(async () => {
try {
loader.start()
await check()

View File

@ -5,7 +5,7 @@ import cleanTestDbs from './clean-test-dbs'
/**
* Execute the cleaning test DBs script
*/
(async () => {
;(async () => {
try {
await cleanTestDbs()
} catch (e) {

View File

@ -1,11 +1,11 @@
import log from '@/helpers/log'
import setupHotword from './setup-hotword';
import setupHotword from './setup-hotword'
/**
* Execute the setup offline hotword script
*/
(async () => {
;(async () => {
try {
await setupHotword()
} catch (e) {

View File

@ -1,11 +1,11 @@
import log from '@/helpers/log'
import setupStt from './setup-stt';
import setupStt from './setup-stt'
/**
* Execute the setup offline STT script
*/
(async () => {
;(async () => {
try {
await setupStt()
} catch (e) {

View File

@ -1,11 +1,11 @@
import log from '@/helpers/log'
import setupTts from './setup-tts';
import setupTts from './setup-tts'
/**
* Execute the setup offline TTS script
*/
(async () => {
;(async () => {
try {
await setupTts()
} catch (e) {

View File

@ -6,43 +6,47 @@ import os from '@/helpers/os'
/**
 * Set up offline hotword detection
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Setting up offline hotword detection...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Setting up offline hotword detection...')
const info = os.get()
let pkgm = 'apt-get install'
if (info.type === 'macos') {
pkgm = 'brew'
}
if (info.type === 'windows') {
log.error('Voice offline mode is not available on Windows')
reject()
} else {
try {
log.info('Installing dependencies...')
let cmd = `sudo ${pkgm} sox libsox-fmt-all -y`
if (info.type === 'linux') {
log.info(`Executing the following command: ${cmd}`)
await command(cmd, { shell: true })
} else if (info.type === 'macos') {
cmd = `${pkgm} install swig portaudio sox`
log.info(`Executing the following command: ${cmd}`)
await command(cmd, { shell: true })
}
log.success('System dependencies downloaded')
log.info('Installing hotword dependencies...')
await command('cd hotword && npm install', { shell: true })
log.success('Offline hotword detection installed')
await command('cd hotword/node_modules/@bugsounet/snowboy && CXXFLAGS="--std=c++17" ../../../node_modules/@mapbox/node-pre-gyp/bin/node-pre-gyp clean configure build', { shell: true })
log.success('Snowboy bindings compiled')
resolve()
} catch (e) {
log.error(`Failed to install offline hotword detection: ${e}`)
reject(e)
const info = os.get()
let pkgm = 'apt-get install'
if (info.type === 'macos') {
pkgm = 'brew'
}
}
})
if (info.type === 'windows') {
log.error('Voice offline mode is not available on Windows')
reject()
} else {
try {
log.info('Installing dependencies...')
let cmd = `sudo ${pkgm} sox libsox-fmt-all -y`
if (info.type === 'linux') {
log.info(`Executing the following command: ${cmd}`)
await command(cmd, { shell: true })
} else if (info.type === 'macos') {
cmd = `${pkgm} install swig portaudio sox`
log.info(`Executing the following command: ${cmd}`)
await command(cmd, { shell: true })
}
log.success('System dependencies downloaded')
log.info('Installing hotword dependencies...')
await command('cd hotword && npm install', { shell: true })
log.success('Offline hotword detection installed')
await command(
'cd hotword/node_modules/@bugsounet/snowboy && CXXFLAGS="--std=c++17" ../../../node_modules/@mapbox/node-pre-gyp/bin/node-pre-gyp clean configure build',
{ shell: true }
)
log.success('Snowboy bindings compiled')
resolve()
} catch (e) {
log.error(`Failed to install offline hotword detection: ${e}`)
reject(e)
}
}
})

View File

@ -8,12 +8,12 @@ import setupHotword from './setup-hotword'
import setupTts from './setup-tts'
import setupStt from './setup-stt'
dotenv.config();
dotenv.config()
/**
 * Main entry to set up offline components
*/
(async () => {
;(async () => {
try {
loader.start()
await checkOs()

View File

@ -7,37 +7,50 @@ import os from '@/helpers/os'
/**
 * Set up offline speech-to-text
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Setting up offline speech-to-text...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Setting up offline speech-to-text...')
const destCoquiFolder = 'bin/coqui'
const tmpDir = 'scripts/tmp'
// check this repo for updates: https://github.com/coqui-ai/STT-models/tree/main/english/coqui
const coquiModelVersion = '1.0.0'
let downloader = 'wget'
if (os.get().type === 'macos') {
downloader = 'curl -L -O'
}
if (!fs.existsSync(`${destCoquiFolder}/model.tflite`)) {
try {
log.info('Downloading pre-trained model...')
await command(`cd ${tmpDir} && ${downloader} https://github.com/coqui-ai/STT-models/releases/download/english/coqui/v${coquiModelVersion}-huge-vocab/model.tflite`, { shell: true })
await command(`cd ${tmpDir} && ${downloader} https://github.com/coqui-ai/STT-models/releases/download/english/coqui/v${coquiModelVersion}-huge-vocab/huge-vocabulary.scorer`, { shell: true })
log.success('Pre-trained model download done')
log.info('Moving...')
await command(`mv -f ${tmpDir}/model.tflite ${destCoquiFolder}/model.tflite`, { shell: true })
await command(`mv -f ${tmpDir}/huge-vocabulary.scorer ${destCoquiFolder}/huge-vocabulary.scorer`, { shell: true })
log.success('Move done')
log.success('Offline speech-to-text installed')
resolve()
} catch (e) {
log.error(`Failed to install offline speech-to-text: ${e}`)
reject(e)
const destCoquiFolder = 'bin/coqui'
const tmpDir = 'scripts/tmp'
// check this repo for updates: https://github.com/coqui-ai/STT-models/tree/main/english/coqui
const coquiModelVersion = '1.0.0'
let downloader = 'wget'
if (os.get().type === 'macos') {
downloader = 'curl -L -O'
}
} else {
log.success('Offline speech-to-text is already installed')
resolve()
}
})
if (!fs.existsSync(`${destCoquiFolder}/model.tflite`)) {
try {
log.info('Downloading pre-trained model...')
await command(
`cd ${tmpDir} && ${downloader} https://github.com/coqui-ai/STT-models/releases/download/english/coqui/v${coquiModelVersion}-huge-vocab/model.tflite`,
{ shell: true }
)
await command(
`cd ${tmpDir} && ${downloader} https://github.com/coqui-ai/STT-models/releases/download/english/coqui/v${coquiModelVersion}-huge-vocab/huge-vocabulary.scorer`,
{ shell: true }
)
log.success('Pre-trained model download done')
log.info('Moving...')
await command(
`mv -f ${tmpDir}/model.tflite ${destCoquiFolder}/model.tflite`,
{ shell: true }
)
await command(
`mv -f ${tmpDir}/huge-vocabulary.scorer ${destCoquiFolder}/huge-vocabulary.scorer`,
{ shell: true }
)
log.success('Move done')
log.success('Offline speech-to-text installed')
resolve()
} catch (e) {
log.error(`Failed to install offline speech-to-text: ${e}`)
reject(e)
}
} else {
log.success('Offline speech-to-text is already installed')
resolve()
}
})

View File

@ -7,46 +7,61 @@ import os from '@/helpers/os'
/**
 * Set up offline text-to-speech
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Setting up offline text-to-speech...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Setting up offline text-to-speech...')
const destFliteFolder = 'bin/flite'
const tmpDir = 'scripts/tmp'
let makeCores = ''
if (os.cpus().length > 2) {
makeCores = `-j ${os.cpus().length - 2}`
}
let downloader = 'wget'
if (os.get().type === 'macos') {
downloader = 'curl -L -O'
}
if (!fs.existsSync(`${destFliteFolder}/flite`)) {
try {
log.info('Downloading run-time synthesis engine...')
await command(`cd ${tmpDir} && ${downloader} http://ports.ubuntu.com/pool/universe/f/flite/flite_2.1-release.orig.tar.bz2`, { shell: true })
log.success('Run-time synthesis engine download done')
log.info('Unpacking...')
await command(`cd ${tmpDir} && tar xfvj flite_2.1-release.orig.tar.bz2 && cp ../assets/leon.lv flite-2.1-release/config`, { shell: true })
log.success('Unpack done')
log.info('Configuring...')
await command(`cd ${tmpDir}/flite-2.1-release && ./configure --with-langvox=leon`, { shell: true })
log.success('Configure done')
log.info('Building...')
await command(`cd ${tmpDir}/flite-2.1-release && make ${makeCores}`, { shell: true })
log.success('Build done')
log.info('Cleaning...')
await command(`cp -f ${tmpDir}/flite-2.1-release/bin/flite ${destFliteFolder} && rm -rf ${tmpDir}/flite-2.1-release*`, { shell: true })
log.success('Clean done')
log.success('Offline text-to-speech installed')
resolve()
} catch (e) {
log.error(`Failed to install offline text-to-speech: ${e}`)
reject(e)
const destFliteFolder = 'bin/flite'
const tmpDir = 'scripts/tmp'
let makeCores = ''
if (os.cpus().length > 2) {
makeCores = `-j ${os.cpus().length - 2}`
}
} else {
log.success('Offline text-to-speech is already installed')
resolve()
}
})
let downloader = 'wget'
if (os.get().type === 'macos') {
downloader = 'curl -L -O'
}
if (!fs.existsSync(`${destFliteFolder}/flite`)) {
try {
log.info('Downloading run-time synthesis engine...')
await command(
`cd ${tmpDir} && ${downloader} http://ports.ubuntu.com/pool/universe/f/flite/flite_2.1-release.orig.tar.bz2`,
{ shell: true }
)
log.success('Run-time synthesis engine download done')
log.info('Unpacking...')
await command(
`cd ${tmpDir} && tar xfvj flite_2.1-release.orig.tar.bz2 && cp ../assets/leon.lv flite-2.1-release/config`,
{ shell: true }
)
log.success('Unpack done')
log.info('Configuring...')
await command(
`cd ${tmpDir}/flite-2.1-release && ./configure --with-langvox=leon`,
{ shell: true }
)
log.success('Configure done')
log.info('Building...')
await command(`cd ${tmpDir}/flite-2.1-release && make ${makeCores}`, {
shell: true
})
log.success('Build done')
log.info('Cleaning...')
await command(
`cp -f ${tmpDir}/flite-2.1-release/bin/flite ${destFliteFolder} && rm -rf ${tmpDir}/flite-2.1-release*`,
{ shell: true }
)
log.success('Clean done')
log.success('Offline text-to-speech installed')
resolve()
} catch (e) {
log.error(`Failed to install offline text-to-speech: ${e}`)
reject(e)
}
} else {
log.success('Offline text-to-speech is already installed')
resolve()
}
})

View File

@ -1 +1 @@
console.info('\x1b[36m➡ %s\x1b[0m', 'Running Leon\'s installation...')
console.info('\x1b[36m➡ %s\x1b[0m', "Running Leon's installation...")

View File

@ -6,33 +6,40 @@ import log from '@/helpers/log'
/**
 * Set up Leon's core configuration
*/
export default () => new Promise((resolve) => {
log.info('Configuring core...')
export default () =>
new Promise((resolve) => {
log.info('Configuring core...')
const dir = 'core/config'
const list = (dir) => {
const entities = fs.readdirSync(dir)
const dir = 'core/config'
const list = (dir) => {
const entities = fs.readdirSync(dir)
// Browse core config entities
for (let i = 0; i < entities.length; i += 1) {
const file = `${entities[i].replace('.sample.json', '.json')}`
// Recursive if the entity is a directory
const way = path.join(dir, entities[i])
if (fs.statSync(way).isDirectory()) {
list(way)
} else if (entities[i].indexOf('.sample.json') !== -1
&& !fs.existsSync(`${dir}/${file}`)) { // Clone config from sample in case there is no existing config file
fs.createReadStream(`${dir}/${entities[i]}`)
.pipe(fs.createWriteStream(`${dir}/${file}`))
// Browse core config entities
for (let i = 0; i < entities.length; i += 1) {
const file = `${entities[i].replace('.sample.json', '.json')}`
// Recursive if the entity is a directory
const way = path.join(dir, entities[i])
if (fs.statSync(way).isDirectory()) {
list(way)
} else if (
entities[i].indexOf('.sample.json') !== -1 &&
!fs.existsSync(`${dir}/${file}`)
) {
// Clone config from sample in case there is no existing config file
fs.createReadStream(`${dir}/${entities[i]}`).pipe(
fs.createWriteStream(`${dir}/${file}`)
)
log.success(`${file} file created`)
} else if (entities[i].indexOf('.sample.json') !== -1
&& fs.existsSync(`${dir}/${file}`)) {
log.success(`${file} already exists`)
log.success(`${file} file created`)
} else if (
entities[i].indexOf('.sample.json') !== -1 &&
fs.existsSync(`${dir}/${file}`)
) {
log.success(`${file} already exists`)
}
}
}
}
list(dir)
resolve()
})
list(dir)
resolve()
})

View File

@ -6,34 +6,34 @@ import log from '@/helpers/log'
/**
* Duplicate the .env.sample to .env file
*/
export default () => new Promise(async (resolve) => {
log.info('.env file creation...')
export default () =>
new Promise(async (resolve) => {
log.info('.env file creation...')
const createDotenv = () => {
fs.createReadStream('.env.sample')
.pipe(fs.createWriteStream('.env'))
const createDotenv = () => {
fs.createReadStream('.env.sample').pipe(fs.createWriteStream('.env'))
log.success('.env file created')
}
if (!fs.existsSync('.env')) {
createDotenv()
resolve()
} else if (process.env.IS_DOCKER === 'true') {
resolve()
} else {
const answer = await prompt({
type: 'confirm',
name: 'dotenv.overwrite',
message: '.env file already exists, overwrite:',
default: false
})
if (answer.dotenv.overwrite === true) {
createDotenv()
log.success('.env file created')
}
resolve()
}
})
if (!fs.existsSync('.env')) {
createDotenv()
resolve()
} else if (process.env.IS_DOCKER === 'true') {
resolve()
} else {
const answer = await prompt({
type: 'confirm',
name: 'dotenv.overwrite',
message: '.env file already exists, overwrite:',
default: false
})
if (answer.dotenv.overwrite === true) {
createDotenv()
}
resolve()
}
})

View File

@ -7,76 +7,94 @@ import log from '@/helpers/log'
/**
 * Download and set up Leon's Python package dependencies
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Checking Python env...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Checking Python env...')
// Check if the Pipfile exists
if (fs.existsSync('bridges/python/Pipfile')) {
log.success('bridges/python/Pipfile found')
// Check if the Pipfile exists
if (fs.existsSync('bridges/python/Pipfile')) {
log.success('bridges/python/Pipfile found')
try {
// Check if Pipenv is installed
const pipenvVersionChild = await command('pipenv --version', { shell: true })
let pipenvVersion = pipenvVersionChild.stdout
try {
// Check if Pipenv is installed
const pipenvVersionChild = await command('pipenv --version', {
shell: true
})
let pipenvVersion = pipenvVersionChild.stdout
if (pipenvVersion.indexOf('version') !== -1) {
pipenvVersion = pipenvVersion.substr(pipenvVersion.indexOf('version') + 'version '.length)
pipenvVersion = `${pipenvVersion} version`
}
log.success(`Pipenv ${pipenvVersion} found`)
} catch (e) {
log.error(`${e}\nPlease install Pipenv: "pip install pipenv" or read the documentation https://docs.pipenv.org`)
reject(e)
}
try {
const dotVenvPath = path.join(process.cwd(), 'bridges/python/.venv')
const pipfilePath = path.join(process.cwd(), 'bridges/python/Pipfile')
const pipfileMtime = fs.statSync(pipfilePath).mtime
const isDotVenvExist = fs.existsSync(dotVenvPath)
const installPythonPackages = async () => {
// Installing Python packages
log.info('Installing Python packages from bridges/python/Pipfile...')
await command('pipenv install --site-packages', { shell: true })
log.success('Python packages installed')
log.info('Installing spaCy models...')
// Find new spaCy models: https://github.com/explosion/spacy-models/releases
await Promise.all([
command('pipenv run spacy download en_core_web_trf-3.4.0 --direct', { shell: true }),
command('pipenv run spacy download fr_core_news_md-3.4.0 --direct', { shell: true })
])
log.success('spaCy models installed')
}
if (!isDotVenvExist) {
await installPythonPackages()
} else {
const dotProjectPath = path.join(process.cwd(), 'bridges/python/.venv/.project')
if (fs.existsSync(dotProjectPath)) {
const dotProjectMtime = fs.statSync(dotProjectPath).mtime
// Check if Python deps tree has been modified since the initial setup
if (pipfileMtime > dotProjectMtime) {
await installPythonPackages()
} else {
log.success('Python packages are up-to-date')
}
} else {
await installPythonPackages()
if (pipenvVersion.indexOf('version') !== -1) {
pipenvVersion = pipenvVersion.substr(
pipenvVersion.indexOf('version') + 'version '.length
)
pipenvVersion = `${pipenvVersion} version`
}
log.success(`Pipenv ${pipenvVersion} found`)
} catch (e) {
log.error(
`${e}\nPlease install Pipenv: "pip install pipenv" or read the documentation https://docs.pipenv.org`
)
reject(e)
}
resolve()
} catch (e) {
log.error(`Failed to install the Python packages: ${e}`)
reject(e)
try {
const dotVenvPath = path.join(process.cwd(), 'bridges/python/.venv')
const pipfilePath = path.join(process.cwd(), 'bridges/python/Pipfile')
const pipfileMtime = fs.statSync(pipfilePath).mtime
const isDotVenvExist = fs.existsSync(dotVenvPath)
const installPythonPackages = async () => {
// Installing Python packages
log.info('Installing Python packages from bridges/python/Pipfile...')
await command('pipenv install --site-packages', { shell: true })
log.success('Python packages installed')
log.info('Installing spaCy models...')
// Find new spaCy models: https://github.com/explosion/spacy-models/releases
await Promise.all([
command(
'pipenv run spacy download en_core_web_trf-3.4.0 --direct',
{ shell: true }
),
command(
'pipenv run spacy download fr_core_news_md-3.4.0 --direct',
{ shell: true }
)
])
log.success('spaCy models installed')
}
if (!isDotVenvExist) {
await installPythonPackages()
} else {
const dotProjectPath = path.join(
process.cwd(),
'bridges/python/.venv/.project'
)
if (fs.existsSync(dotProjectPath)) {
const dotProjectMtime = fs.statSync(dotProjectPath).mtime
// Check if Python deps tree has been modified since the initial setup
if (pipfileMtime > dotProjectMtime) {
await installPythonPackages()
} else {
log.success('Python packages are up-to-date')
}
} else {
await installPythonPackages()
}
}
resolve()
} catch (e) {
log.error(`Failed to install the Python packages: ${e}`)
reject(e)
}
} else {
log.error(
'bridges/python/Pipfile does not exist. Try to pull the project (git pull)'
)
reject()
}
} else {
log.error('bridges/python/Pipfile does not exist. Try to pull the project (git pull)')
reject()
}
})
})

View File

@ -8,72 +8,98 @@ import domain from '@/helpers/domain'
/**
 * Set up skills configuration
*/
export default () => new Promise(async (resolve, reject) => {
log.info('Setting up skills configuration...')
export default () =>
new Promise(async (resolve, reject) => {
log.info('Setting up skills configuration...')
const [domainKeys, domains] = await Promise.all([domain.list(), domain.getDomainsObj()])
const [domainKeys, domains] = await Promise.all([
domain.list(),
domain.getDomainsObj()
])
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
// Browse skills
for (let j = 0; j < skillKeys.length; j += 1) {
const skillFriendlyName = skillKeys[j]
const currentSkill = currentDomain.skills[skillFriendlyName]
const configDir = path.join(currentSkill.path, 'src')
const configFile = path.join(configDir, 'config.json')
const configSampleFile = path.join(configDir, 'config.sample.json')
// Browse skills
for (let j = 0; j < skillKeys.length; j += 1) {
const skillFriendlyName = skillKeys[j]
const currentSkill = currentDomain.skills[skillFriendlyName]
const configDir = path.join(currentSkill.path, 'src')
const configFile = path.join(configDir, 'config.json')
const configSampleFile = path.join(configDir, 'config.sample.json')
// If there is a bridge set from the skill config
if (currentSkill.bridge) {
// Check if the config and config.sample file exist
if (fs.existsSync(configFile) && fs.existsSync(configSampleFile)) {
const config = JSON.parse(fs.readFileSync(configFile, 'utf8'))?.configurations
const configSample = JSON.parse(fs.readFileSync(configSampleFile, 'utf8'))?.configurations
const configKeys = Object.keys(config)
const configSampleKeys = Object.keys(configSample)
// If there is a bridge set from the skill config
if (currentSkill.bridge) {
// Check if the config and config.sample file exist
if (fs.existsSync(configFile) && fs.existsSync(configSampleFile)) {
const config = JSON.parse(
fs.readFileSync(configFile, 'utf8')
)?.configurations
const configSample = JSON.parse(
fs.readFileSync(configSampleFile, 'utf8')
)?.configurations
const configKeys = Object.keys(config)
const configSampleKeys = Object.keys(configSample)
// Check if there is a new config key in the config sample compared to the config.json
if (JSON.stringify(configKeys) !== JSON.stringify(configSampleKeys)) {
// Browse config keys of the new skill config
for (let j = 0; j < configSampleKeys.length; j += 1) {
// Check if the current config key does not exist
if (configKeys.includes(configSampleKeys[j]) === false) {
log.info(`Adding new configuration key "${configSampleKeys[j]}" for the ${skillFriendlyName} skill...`)
// Check if there is a new config key in the config sample compared to the config.json
if (
JSON.stringify(configKeys) !== JSON.stringify(configSampleKeys)
) {
// Browse config keys of the new skill config
for (let j = 0; j < configSampleKeys.length; j += 1) {
// Check if the current config key does not exist
if (configKeys.includes(configSampleKeys[j]) === false) {
log.info(
`Adding new configuration key "${configSampleKeys[j]}" for the ${skillFriendlyName} skill...`
)
// Prepare to inject the new config key object
const configKey = {
[configSampleKeys[j]]: configSample[configSampleKeys[j]]
}
// Prepare to inject the new config key object
const configKey = {
[configSampleKeys[j]]: configSample[configSampleKeys[j]]
}
try {
// Add new skill configuration in the config.json file
commandSync(`json -I -f ${configFile} -e 'this.configurations.${configSampleKeys[j]}=${JSON.stringify(configKey[configSampleKeys[j]])}'`, { shell: true })
log.success(`"${configSampleKeys[j]}" configuration key added to ${configFile}`)
} catch (e) {
log.error(`Error while adding "${configSampleKeys[j]}" configuration key to ${configFile}: ${e}`)
reject()
try {
// Add new skill configuration in the config.json file
commandSync(
`json -I -f ${configFile} -e 'this.configurations.${
configSampleKeys[j]
}=${JSON.stringify(configKey[configSampleKeys[j]])}'`,
{ shell: true }
)
log.success(
`"${configSampleKeys[j]}" configuration key added to ${configFile}`
)
} catch (e) {
log.error(
`Error while adding "${configSampleKeys[j]}" configuration key to ${configFile}: ${e}`
)
reject()
}
}
}
}
}
} else if (!fs.existsSync(configSampleFile)) {
// Stop the setup if the config.sample.json of the current skill does not exist
log.error(`The "${skillFriendlyName}" skill configuration file does not exist. Try to pull the project (git pull)`)
reject()
} else {
// Duplicate config.sample.json of the current skill to config.json
fs.createReadStream(configSampleFile)
.pipe(fs.createWriteStream(`${configDir}/config.json`))
} else if (!fs.existsSync(configSampleFile)) {
// Stop the setup if the config.sample.json of the current skill does not exist
log.error(
`The "${skillFriendlyName}" skill configuration file does not exist. Try to pull the project (git pull)`
)
reject()
} else {
// Duplicate config.sample.json of the current skill to config.json
fs.createReadStream(configSampleFile).pipe(
fs.createWriteStream(`${configDir}/config.json`)
)
log.success(`"${skillFriendlyName}" skill configuration file created`)
resolve()
log.success(
`"${skillFriendlyName}" skill configuration file created`
)
resolve()
}
}
}
}
}
log.success('Skills configured')
resolve()
})
log.success('Skills configured')
resolve()
})

View File

@ -13,7 +13,7 @@ import setupPythonPackages from './setup-python-packages'
/**
 * Main entry to set up Leon
*/
(async () => {
;(async () => {
try {
// Required env vars to setup
process.env.PIPENV_PIPFILE = 'bridges/python/Pipfile'
@ -21,10 +21,7 @@ import setupPythonPackages from './setup-python-packages'
await setupDotenv()
loader.start()
await Promise.all([
setupCore(),
setupSkillsConfig()
])
await Promise.all([setupCore(), setupSkillsConfig()])
await setupPythonPackages()
loader.stop()
await generateHttpApiKey()

View File

@ -8,7 +8,7 @@ import loader from '@/helpers/loader'
*
* npm run test:module videodownloader:youtube
*/
(async () => {
;(async () => {
const { argv } = process
const s = argv[2].toLowerCase()
const arr = s.split(':')
@ -17,7 +17,10 @@ import loader from '@/helpers/loader'
try {
loader.start()
await command('npm run train en', { shell: true })
const cmd = await command(`cross-env PIPENV_PIPFILE=bridges/python/Pipfile LEON_NODE_ENV=testing jest --silent --config=./test/e2e/modules/e2e.modules.jest.json packages/${pkg}/test/${module}.spec.js && npm run train`, { shell: true })
const cmd = await command(
`cross-env PIPENV_PIPFILE=bridges/python/Pipfile LEON_NODE_ENV=testing jest --silent --config=./test/e2e/modules/e2e.modules.jest.json packages/${pkg}/test/${module}.spec.js && npm run train`,
{ shell: true }
)
log.default(cmd.stdout)
log.default(cmd.stderr)

View File

@ -5,7 +5,7 @@ import train from './train'
/**
* Execute the training scripts
*/
(async () => {
;(async () => {
try {
await train()
} catch (e) {

View File

@ -7,34 +7,43 @@ import log from '@/helpers/log'
* Train global entities
* Add global entities annotations (@...)
*/
export default (lang, nlp) => new Promise((resolve) => {
log.title('Global entities training')
export default (lang, nlp) =>
new Promise((resolve) => {
log.title('Global entities training')
const globalEntitiesPath = path.join(process.cwd(), 'core/data', lang, 'global-entities')
const globalEntityFiles = fs.readdirSync(globalEntitiesPath)
const newEntitiesObj = { }
const globalEntitiesPath = path.join(
process.cwd(),
'core/data',
lang,
'global-entities'
)
const globalEntityFiles = fs.readdirSync(globalEntitiesPath)
const newEntitiesObj = {}
for (let i = 0; i < globalEntityFiles.length; i += 1) {
const globalEntityFileName = globalEntityFiles[i]
const [entityName] = globalEntityFileName.split('.')
const globalEntityPath = path.join(globalEntitiesPath, globalEntityFileName)
const { options } = JSON.parse(fs.readFileSync(globalEntityPath, 'utf8'))
const optionKeys = Object.keys(options)
const optionsObj = { }
for (let i = 0; i < globalEntityFiles.length; i += 1) {
const globalEntityFileName = globalEntityFiles[i]
const [entityName] = globalEntityFileName.split('.')
const globalEntityPath = path.join(
globalEntitiesPath,
globalEntityFileName
)
const { options } = JSON.parse(fs.readFileSync(globalEntityPath, 'utf8'))
const optionKeys = Object.keys(options)
const optionsObj = {}
log.info(`[${lang}] Adding "${entityName}" global entity...`)
log.info(`[${lang}] Adding "${entityName}" global entity...`)
optionKeys.forEach((optionKey) => {
const { synonyms } = options[optionKey]
optionKeys.forEach((optionKey) => {
const { synonyms } = options[optionKey]
optionsObj[optionKey] = synonyms
})
optionsObj[optionKey] = synonyms
})
newEntitiesObj[entityName] = { options: optionsObj }
log.success(`[${lang}] "${entityName}" global entity added`)
}
newEntitiesObj[entityName] = { options: optionsObj }
log.success(`[${lang}] "${entityName}" global entity added`)
}
nlp.addEntities(newEntitiesObj, lang)
nlp.addEntities(newEntitiesObj, lang)
resolve()
})
resolve()
})
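Illustration of the reshaping done by the loop above: a global entity file declares options with synonyms, and the trainer converts it into the object shape passed to nlp.addEntities(). The sample entity content below is made up.

// Hypothetical content of a global entity file such as core/data/en/global-entities/color.json
const colorEntityFile = {
  options: {
    red: { synonyms: ['red', 'crimson'] },
    blue: { synonyms: ['blue', 'navy'] }
  }
}

// Reshape it the same way the training loop does
const toNlpEntity = (entityName, { options }) => {
  const optionsObj = {}
  Object.keys(options).forEach((optionKey) => {
    optionsObj[optionKey] = options[optionKey].synonyms
  })
  return { [entityName]: { options: optionsObj } }
}

// { color: { options: { red: ['red', 'crimson'], blue: ['blue', 'navy'] } } }
console.log(toNlpEntity('color', colorEntityFile))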

View File

@ -10,105 +10,124 @@ import domain from '@/helpers/domain'
/**
* Train skills actions
*/
export default (lang, nlp) => new Promise(async (resolve) => {
log.title('Skills actions training')
export default (lang, nlp) =>
new Promise(async (resolve) => {
log.title('Skills actions training')
const supportedActionTypes = ['dialog', 'logic']
const [domainKeys, domains] = await Promise.all([domain.list(), domain.getDomainsObj()])
const supportedActionTypes = ['dialog', 'logic']
const [domainKeys, domains] = await Promise.all([
domain.list(),
domain.getDomainsObj()
])
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
for (let i = 0; i < domainKeys.length; i += 1) {
const currentDomain = domains[domainKeys[i]]
const skillKeys = Object.keys(currentDomain.skills)
log.info(`[${lang}] Training "${domainKeys[i]}" domain model...`)
log.info(`[${lang}] Training "${domainKeys[i]}" domain model...`)
for (let j = 0; j < skillKeys.length; j += 1) {
const { name: skillName } = currentDomain.skills[skillKeys[j]]
const currentSkill = currentDomain.skills[skillKeys[j]]
for (let j = 0; j < skillKeys.length; j += 1) {
const { name: skillName } = currentDomain.skills[skillKeys[j]]
const currentSkill = currentDomain.skills[skillKeys[j]]
log.info(`[${lang}] Using "${skillKeys[j]}" skill config data`)
log.info(`[${lang}] Using "${skillKeys[j]}" skill config data`)
const configFilePath = path.join(currentSkill.path, 'config', `${lang}.json`)
const configFilePath = path.join(
currentSkill.path,
'config',
`${lang}.json`
)
if (fs.existsSync(configFilePath)) {
const {
actions,
variables
} = await json.loadConfigData(configFilePath, lang) // eslint-disable-line no-await-in-loop
const actionsKeys = Object.keys(actions)
if (fs.existsSync(configFilePath)) {
const { actions, variables } = await json.loadConfigData(
configFilePath,
lang
) // eslint-disable-line no-await-in-loop
const actionsKeys = Object.keys(actions)
for (let k = 0; k < actionsKeys.length; k += 1) {
const actionName = actionsKeys[k]
const actionObj = actions[actionName]
const intent = `${skillName}.${actionName}`
const { utterance_samples: utteranceSamples, answers, slots } = actionObj
for (let k = 0; k < actionsKeys.length; k += 1) {
const actionName = actionsKeys[k]
const actionObj = actions[actionName]
const intent = `${skillName}.${actionName}`
const {
utterance_samples: utteranceSamples,
answers,
slots
} = actionObj
if (!actionObj.type || !supportedActionTypes.includes(actionObj.type)) {
log.error(`This action type isn't supported: ${actionObj.type}`)
process.exit(1)
}
if (
!actionObj.type ||
!supportedActionTypes.includes(actionObj.type)
) {
log.error(`This action type isn't supported: ${actionObj.type}`)
process.exit(1)
}
nlp.assignDomain(lang, intent, currentDomain.name)
nlp.assignDomain(lang, intent, currentDomain.name)
if (slots) {
for (let l = 0; l < slots.length; l += 1) {
const slotObj = slots[l]
if (slots) {
for (let l = 0; l < slots.length; l += 1) {
const slotObj = slots[l]
/**
* TODO: handle entity within questions such as "Where does {{ hero }} live?"
* https://github.com/axa-group/nlp.js/issues/328
* https://github.com/axa-group/nlp.js/issues/291
* https://github.com/axa-group/nlp.js/issues/307
*/
if (slotObj.item.type === 'entity') {
nlp.slotManager
.addSlot(intent, `${slotObj.name}#${slotObj.item.name}`, true, { [lang]: slotObj.questions })
}
/* nlp.slotManager
/**
* TODO: handle entity within questions such as "Where does {{ hero }} live?"
* https://github.com/axa-group/nlp.js/issues/328
* https://github.com/axa-group/nlp.js/issues/291
* https://github.com/axa-group/nlp.js/issues/307
*/
if (slotObj.item.type === 'entity') {
nlp.slotManager.addSlot(
intent,
`${slotObj.name}#${slotObj.item.name}`,
true,
{ [lang]: slotObj.questions }
)
}
/* nlp.slotManager
.addSlot(intent, 'boolean', true, { [lang]: 'How many players?' }) */
}
}
for (let l = 0; l < utteranceSamples?.length; l += 1) {
const utterance = utteranceSamples[l]
// Achieve Cartesian training
const utteranceAlternatives = composeFromPattern(utterance)
utteranceAlternatives.forEach((utteranceAlternative) => {
nlp.addDocument(lang, utteranceAlternative, intent)
})
}
// Train NLG if the action has a dialog type
if (actionObj.type === 'dialog') {
const variablesObj = { }
// Dynamic variables binding if any variable is declared
if (variables) {
const variableKeys = Object.keys(variables)
for (let l = 0; l < variableKeys.length; l += 1) {
const key = variableKeys[l]
variablesObj[`%${key}%`] = variables[variableKeys[l]]
}
}
for (let l = 0; l < answers?.length; l += 1) {
const variableKeys = Object.keys(variablesObj)
if (variableKeys.length > 0) {
answers[l] = string.pnr(answers[l], variablesObj)
for (let l = 0; l < utteranceSamples?.length; l += 1) {
const utterance = utteranceSamples[l]
// Achieve Cartesian training
const utteranceAlternatives = composeFromPattern(utterance)
utteranceAlternatives.forEach((utteranceAlternative) => {
nlp.addDocument(lang, utteranceAlternative, intent)
})
}
// Train NLG if the action has a dialog type
if (actionObj.type === 'dialog') {
const variablesObj = {}
// Dynamic variables binding if any variable is declared
if (variables) {
const variableKeys = Object.keys(variables)
for (let l = 0; l < variableKeys.length; l += 1) {
const key = variableKeys[l]
variablesObj[`%${key}%`] = variables[variableKeys[l]]
}
}
nlp.addAnswer(lang, `${skillName}.${actionName}`, answers[l])
for (let l = 0; l < answers?.length; l += 1) {
const variableKeys = Object.keys(variablesObj)
if (variableKeys.length > 0) {
answers[l] = string.pnr(answers[l], variablesObj)
}
nlp.addAnswer(lang, `${skillName}.${actionName}`, answers[l])
}
}
}
}
}
log.success(`[${lang}] "${domainKeys[i]}" domain trained`)
}
log.success(`[${lang}] "${domainKeys[i]}" domain trained`)
}
resolve()
})
resolve()
})
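The "Achieve Cartesian training" steps above use composeFromPattern to expand one utterance sample into every concrete combination before each variant is fed to nlp.addDocument(). A toy expander conveying the idea; the bracket syntax and recursion here are only an illustration, not the real helper.

// Expand "[a|b] ... [c|d]" into every combination (toy version only)
const expand = (pattern) => {
  const match = pattern.match(/\[([^\]]+)\]/)
  if (!match) {
    return [pattern]
  }
  return match[1]
    .split('|')
    .flatMap((alternative) => expand(pattern.replace(match[0], alternative)))
}

// [ 'Play some music', 'Play some tunes', 'Start some music', 'Start some tunes' ]
console.log(expand('[Play|Start] some [music|tunes]'))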

View File

@ -7,40 +7,48 @@ import log from '@/helpers/log'
/**
* Train global resolvers
*/
export default (lang, nlp) => new Promise((resolve) => {
log.title('Global resolvers training')
export default (lang, nlp) =>
new Promise((resolve) => {
log.title('Global resolvers training')
const resolversPath = path.join(process.cwd(), 'core/data', lang, 'global-resolvers')
const resolverFiles = fs.readdirSync(resolversPath)
const resolversPath = path.join(
process.cwd(),
'core/data',
lang,
'global-resolvers'
)
const resolverFiles = fs.readdirSync(resolversPath)
for (let i = 0; i < resolverFiles.length; i += 1) {
const resolverFileName = resolverFiles[i]
const resolverPath = path.join(resolversPath, resolverFileName)
const { name: resolverName, intents: resolverIntents } = JSON.parse(fs.readFileSync(resolverPath, 'utf8'))
const intentKeys = Object.keys(resolverIntents)
for (let i = 0; i < resolverFiles.length; i += 1) {
const resolverFileName = resolverFiles[i]
const resolverPath = path.join(resolversPath, resolverFileName)
const { name: resolverName, intents: resolverIntents } = JSON.parse(
fs.readFileSync(resolverPath, 'utf8')
)
const intentKeys = Object.keys(resolverIntents)
log.info(`[${lang}] Training "${resolverName}" resolver...`)
log.info(`[${lang}] Training "${resolverName}" resolver...`)
for (let j = 0; j < intentKeys.length; j += 1) {
const intentName = intentKeys[j]
const intent = `resolver.global.${resolverName}.${intentName}`
const intentObj = resolverIntents[intentName]
for (let j = 0; j < intentKeys.length; j += 1) {
const intentName = intentKeys[j]
const intent = `resolver.global.${resolverName}.${intentName}`
const intentObj = resolverIntents[intentName]
nlp.assignDomain(lang, intent, 'system')
nlp.assignDomain(lang, intent, 'system')
for (let k = 0; k < intentObj.utterance_samples.length; k += 1) {
const utteranceSample = intentObj.utterance_samples[k]
// Achieve Cartesian training
const utteranceAlternatives = composeFromPattern(utteranceSample)
for (let k = 0; k < intentObj.utterance_samples.length; k += 1) {
const utteranceSample = intentObj.utterance_samples[k]
// Achieve Cartesian training
const utteranceAlternatives = composeFromPattern(utteranceSample)
utteranceAlternatives.forEach((utteranceAlternative) => {
nlp.addDocument(lang, utteranceAlternative, intent)
})
utteranceAlternatives.forEach((utteranceAlternative) => {
nlp.addDocument(lang, utteranceAlternative, intent)
})
}
}
log.success(`[${lang}] "${resolverName}" resolver trained`)
}
log.success(`[${lang}] "${resolverName}" resolver trained`)
}
resolve()
})
resolve()
})
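For context, an illustrative shape of a global resolver file as consumed by the loop above: a resolver groups intents, each with utterance samples and a value that can be read back later. Intent names are registered as resolver.global.<resolver>.<intent>. The sample content below is invented for illustration.

// Hypothetical content of a file under core/data/en/global-resolvers
const denialResolver = {
  name: 'denial',
  intents: {
    denial: {
      utterance_samples: ['No', 'Nope', '[No|Nope] thanks'],
      value: false
    }
  }
}

// Every intent becomes "resolver.global.<resolver>.<intent>"
Object.keys(denialResolver.intents).forEach((intentName) => {
  console.log(`resolver.global.${denialResolver.name}.${intentName}`)
})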

View File

@ -9,53 +9,66 @@ import json from '@/helpers/json'
/**
* Train skills resolvers
*/
export default (lang, nlp) => new Promise(async (resolve) => {
log.title('Skills resolvers training')
export default (lang, nlp) =>
new Promise(async (resolve) => {
log.title('Skills resolvers training')
const [domainKeys, domains] = await Promise.all([domain.list(), domain.getDomainsObj()])
const [domainKeys, domains] = await Promise.all([
domain.list(),
domain.getDomainsObj()
])
domainKeys.forEach((domainName) => {
const currentDomain = domains[domainName]
const skillKeys = Object.keys(currentDomain.skills)
domainKeys.forEach((domainName) => {
const currentDomain = domains[domainName]
const skillKeys = Object.keys(currentDomain.skills)
skillKeys.forEach(async (skillName) => {
const currentSkill = currentDomain.skills[skillName]
const configFilePath = path.join(currentSkill.path, 'config', `${lang}.json`)
skillKeys.forEach(async (skillName) => {
const currentSkill = currentDomain.skills[skillName]
const configFilePath = path.join(
currentSkill.path,
'config',
`${lang}.json`
)
if (fs.existsSync(configFilePath)) {
const { resolvers } = await json.loadConfigData(configFilePath, lang)
if (fs.existsSync(configFilePath)) {
const { resolvers } = await json.loadConfigData(configFilePath, lang)
if (resolvers) {
const resolversKeys = Object.keys(resolvers)
if (resolvers) {
const resolversKeys = Object.keys(resolvers)
resolversKeys.forEach((resolverName) => {
const resolver = resolvers[resolverName]
const intentKeys = Object.keys(resolver.intents)
resolversKeys.forEach((resolverName) => {
const resolver = resolvers[resolverName]
const intentKeys = Object.keys(resolver.intents)
log.info(`[${lang}] Training ${skillName} "${resolverName}" resolver...`)
log.info(
`[${lang}] Training ${skillName} "${resolverName}" resolver...`
)
intentKeys.forEach((intentName) => {
const intent = `resolver.${currentSkill.name}.${resolverName}.${intentName}`
const intentObj = resolver.intents[intentName]
intentKeys.forEach((intentName) => {
const intent = `resolver.${currentSkill.name}.${resolverName}.${intentName}`
const intentObj = resolver.intents[intentName]
nlp.assignDomain(lang, intent, currentDomain.name)
nlp.assignDomain(lang, intent, currentDomain.name)
intentObj.utterance_samples.forEach((utteranceSample) => {
// Achieve Cartesian training
const utteranceAlternatives = composeFromPattern(utteranceSample)
intentObj.utterance_samples.forEach((utteranceSample) => {
// Achieve Cartesian training
const utteranceAlternatives =
composeFromPattern(utteranceSample)
utteranceAlternatives.forEach((utteranceAlternative) => {
nlp.addDocument(lang, utteranceAlternative, intent)
utteranceAlternatives.forEach((utteranceAlternative) => {
nlp.addDocument(lang, utteranceAlternative, intent)
})
})
})
log.success(
`[${lang}] ${skillName} "${resolverName}" resolver trained`
)
})
log.success(`[${lang}] ${skillName} "${resolverName}" resolver trained`)
})
}
}
}
})
})
})
resolve()
})
resolve()
})

View File

@ -17,118 +17,127 @@ dotenv.config()
*
* npm run train [en or fr]
*/
export default () => new Promise(async (resolve, reject) => {
const globalResolversModelFileName = 'core/data/models/leon-global-resolvers-model.nlp'
const skillsResolversModelFileName = 'core/data/models/leon-skills-resolvers-model.nlp'
const mainModelFileName = 'core/data/models/leon-main-model.nlp'
try {
/**
* Global resolvers NLP model configuration
*/
const globalResolversContainer = await containerBootstrap()
globalResolversContainer.use(Nlp)
globalResolversContainer.use(LangAll)
const globalResolversNlp = globalResolversContainer.get('nlp')
const globalResolversNluManager = globalResolversContainer.get('nlu-manager')
globalResolversNluManager.settings.log = false
globalResolversNluManager.settings.trainByDomain = false
globalResolversNlp.settings.modelFileName = globalResolversModelFileName
globalResolversNlp.settings.threshold = 0.8
/**
* Skills resolvers NLP model configuration
*/
const skillsResolversContainer = await containerBootstrap()
skillsResolversContainer.use(Nlp)
skillsResolversContainer.use(LangAll)
const skillsResolversNlp = skillsResolversContainer.get('nlp')
const skillsResolversNluManager = skillsResolversContainer.get('nlu-manager')
skillsResolversNluManager.settings.log = false
skillsResolversNluManager.settings.trainByDomain = true
skillsResolversNlp.settings.modelFileName = skillsResolversModelFileName
skillsResolversNlp.settings.threshold = 0.8
/**
* Main NLP model configuration
*/
const mainContainer = await containerBootstrap()
mainContainer.use(Nlp)
mainContainer.use(LangAll)
const mainNlp = mainContainer.get('nlp')
const mainNluManager = mainContainer.get('nlu-manager')
// const mainSlotManager = container.get('SlotManager')
mainNluManager.settings.log = false
mainNluManager.settings.trainByDomain = true
// mainSlotManager.settings.
mainNlp.settings.forceNER = true // https://github.com/axa-group/nlp.js/blob/master/examples/17-ner-nlg/index.js
// mainNlp.settings.nlu = { useNoneFeature: true }
mainNlp.settings.calculateSentiment = true
mainNlp.settings.modelFileName = mainModelFileName
mainNlp.settings.threshold = 0.8
/**
* Training phases
*/
const shortLangs = lang.getShortLangs()
for (let h = 0; h < shortLangs.length; h += 1) {
const lang = shortLangs[h]
globalResolversNlp.addLanguage(lang)
// eslint-disable-next-line no-await-in-loop
await trainGlobalResolvers(lang, globalResolversNlp)
skillsResolversNlp.addLanguage(lang)
// eslint-disable-next-line no-await-in-loop
await trainSkillsResolvers(lang, skillsResolversNlp)
mainNlp.addLanguage(lang)
// eslint-disable-next-line no-await-in-loop
await trainGlobalEntities(lang, mainNlp)
// eslint-disable-next-line no-await-in-loop
await trainSkillsActions(lang, mainNlp)
}
export default () =>
new Promise(async (resolve, reject) => {
const globalResolversModelFileName =
'core/data/models/leon-global-resolvers-model.nlp'
const skillsResolversModelFileName =
'core/data/models/leon-skills-resolvers-model.nlp'
const mainModelFileName = 'core/data/models/leon-main-model.nlp'
try {
await globalResolversNlp.train()
/**
* Global resolvers NLP model configuration
*/
const globalResolversContainer = await containerBootstrap()
log.success(`Global resolvers NLP model saved in ${globalResolversModelFileName}`)
resolve()
globalResolversContainer.use(Nlp)
globalResolversContainer.use(LangAll)
const globalResolversNlp = globalResolversContainer.get('nlp')
const globalResolversNluManager =
globalResolversContainer.get('nlu-manager')
globalResolversNluManager.settings.log = false
globalResolversNluManager.settings.trainByDomain = false
globalResolversNlp.settings.modelFileName = globalResolversModelFileName
globalResolversNlp.settings.threshold = 0.8
/**
* Skills resolvers NLP model configuration
*/
const skillsResolversContainer = await containerBootstrap()
skillsResolversContainer.use(Nlp)
skillsResolversContainer.use(LangAll)
const skillsResolversNlp = skillsResolversContainer.get('nlp')
const skillsResolversNluManager =
skillsResolversContainer.get('nlu-manager')
skillsResolversNluManager.settings.log = false
skillsResolversNluManager.settings.trainByDomain = true
skillsResolversNlp.settings.modelFileName = skillsResolversModelFileName
skillsResolversNlp.settings.threshold = 0.8
/**
* Main NLP model configuration
*/
const mainContainer = await containerBootstrap()
mainContainer.use(Nlp)
mainContainer.use(LangAll)
const mainNlp = mainContainer.get('nlp')
const mainNluManager = mainContainer.get('nlu-manager')
// const mainSlotManager = container.get('SlotManager')
mainNluManager.settings.log = false
mainNluManager.settings.trainByDomain = true
// mainSlotManager.settings.
mainNlp.settings.forceNER = true // https://github.com/axa-group/nlp.js/blob/master/examples/17-ner-nlg/index.js
// mainNlp.settings.nlu = { useNoneFeature: true }
mainNlp.settings.calculateSentiment = true
mainNlp.settings.modelFileName = mainModelFileName
mainNlp.settings.threshold = 0.8
/**
* Training phases
*/
const shortLangs = lang.getShortLangs()
for (let h = 0; h < shortLangs.length; h += 1) {
const lang = shortLangs[h]
globalResolversNlp.addLanguage(lang)
// eslint-disable-next-line no-await-in-loop
await trainGlobalResolvers(lang, globalResolversNlp)
skillsResolversNlp.addLanguage(lang)
// eslint-disable-next-line no-await-in-loop
await trainSkillsResolvers(lang, skillsResolversNlp)
mainNlp.addLanguage(lang)
// eslint-disable-next-line no-await-in-loop
await trainGlobalEntities(lang, mainNlp)
// eslint-disable-next-line no-await-in-loop
await trainSkillsActions(lang, mainNlp)
}
try {
await globalResolversNlp.train()
log.success(
`Global resolvers NLP model saved in ${globalResolversModelFileName}`
)
resolve()
} catch (e) {
log.error(`Failed to save global resolvers NLP model: ${e}`)
reject()
}
try {
await skillsResolversNlp.train()
log.success(
`Skills resolvers NLP model saved in ${skillsResolversModelFileName}`
)
resolve()
} catch (e) {
log.error(`Failed to save skills resolvers NLP model: ${e}`)
reject()
}
try {
await mainNlp.train()
log.success(`Main NLP model saved in ${mainModelFileName}`)
resolve()
} catch (e) {
log.error(`Failed to save main NLP model: ${e}`)
reject()
}
} catch (e) {
log.error(`Failed to save global resolvers NLP model: ${e}`)
reject()
log.error(e.message)
reject(e)
}
try {
await skillsResolversNlp.train()
log.success(`Skills resolvers NLP model saved in ${skillsResolversModelFileName}`)
resolve()
} catch (e) {
log.error(`Failed to save skills resolvers NLP model: ${e}`)
reject()
}
try {
await mainNlp.train()
log.success(`Main NLP model saved in ${mainModelFileName}`)
resolve()
} catch (e) {
log.error(`Failed to save main NLP model: ${e}`)
reject()
}
} catch (e) {
log.error(e.message)
reject(e)
}
})
})
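All three models above are built with the same nlp.js pattern: bootstrap a container, register the Nlp and LangAll plugins, tweak settings, feed documents and answers, then train. A minimal self-contained sketch of that pattern; the intent, answer and file name are placeholders.

const { containerBootstrap } = require('@nlpjs/core')
const { Nlp } = require('@nlpjs/nlp')
const { LangAll } = require('@nlpjs/lang-all')

const trainTinyModel = async () => {
  const container = await containerBootstrap()
  container.use(Nlp)
  container.use(LangAll)

  const nlp = container.get('nlp')
  nlp.settings.threshold = 0.8
  nlp.settings.modelFileName = 'tiny-model.nlp'

  nlp.addLanguage('en')
  nlp.addDocument('en', 'hello there', 'greetings.hello')
  nlp.addAnswer('en', 'greetings.hello', 'Hey!')

  // train() also persists the model to modelFileName when auto-save is on, as in the script above
  await nlp.train()
  return nlp
}

trainTinyModel().then((nlp) => nlp.process('en', 'hello').then(console.log))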

View File

@ -10,14 +10,14 @@ const audios = {
}
class Asr {
constructor () {
this.blob = { }
constructor() {
this.blob = {}
log.title('ASR')
log.success('New instance')
}
static get audios () {
static get audios() {
return audios
}
@ -25,10 +25,10 @@ class Asr {
* Encode audio blob to WAVE file
* and forward the WAVE file to the STT parser
*/
run (blob, stt) {
run(blob, stt) {
return new Promise((resolve, reject) => {
log.title('ASR')
this.blob = blob
fs.writeFile(audios.webm, Buffer.from(this.blob), 'binary', (err) => {
@ -44,7 +44,8 @@ class Asr {
* Encode WebM file to WAVE file
* ffmpeg -i speech.webm -acodec pcm_s16le -ar 16000 -ac 1 speech.wav
*/
ffmpeg.addInput(audios.webm)
ffmpeg
.addInput(audios.webm)
.on('start', () => {
log.info('Encoding WebM file to WAVE file...')
})
@ -52,7 +53,10 @@ class Asr {
log.success('Encoding done')
if (Object.keys(stt).length === 0) {
reject({ type: 'warning', obj: new Error('The speech recognition is not ready yet') })
reject({
type: 'warning',
obj: new Error('The speech recognition is not ready yet')
})
} else {
stt.parse(audios.wav)
resolve()
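The ffmpeg chain above implements the command given in the comment (ffmpeg -i speech.webm -acodec pcm_s16le -ar 16000 -ac 1 speech.wav). The same conversion as a standalone sketch, assuming the fluent-ffmpeg package; file names are placeholders.

const ffmpeg = require('fluent-ffmpeg')

ffmpeg('speech.webm')
  .audioCodec('pcm_s16le') // -acodec pcm_s16le
  .audioFrequency(16000) // -ar 16000
  .audioChannels(1) // -ac 1
  .on('end', () => console.log('Encoding done'))
  .on('error', (err) => console.error(`Encoding failed: ${err.message}`))
  .save('speech.wav')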

View File

@ -11,52 +11,62 @@ import domain from '@/helpers/domain'
import json from '@/helpers/json'
class Brain {
constructor () {
constructor() {
this._lang = 'en'
this.broca = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'core/data', this._lang, 'answers.json'), 'utf8'))
this.process = { }
this.interOutput = { }
this.finalOutput = { }
this._socket = { }
this._stt = { }
this._tts = { }
this.broca = JSON.parse(
fs.readFileSync(
path.join(process.cwd(), 'core/data', this._lang, 'answers.json'),
'utf8'
)
)
this.process = {}
this.interOutput = {}
this.finalOutput = {}
this._socket = {}
this._stt = {}
this._tts = {}
log.title('Brain')
log.success('New instance')
}
get socket () {
get socket() {
return this._socket
}
set socket (newSocket) {
set socket(newSocket) {
this._socket = newSocket
}
get stt () {
get stt() {
return this._stt
}
set stt (newStt) {
set stt(newStt) {
this._stt = newStt
}
get tts () {
get tts() {
return this._tts
}
set tts (newTts) {
set tts(newTts) {
this._tts = newTts
}
get lang () {
get lang() {
return this._lang
}
set lang (newLang) {
set lang(newLang) {
this._lang = newLang
// Update broca
this.broca = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'core/data', this._lang, 'answers.json'), 'utf8'))
this.broca = JSON.parse(
fs.readFileSync(
path.join(process.cwd(), 'core/data', this._lang, 'answers.json'),
'utf8'
)
)
if (process.env.LEON_TTS === 'true') {
this._tts.init(this._lang, () => {
@ -69,7 +79,7 @@ class Brain {
/**
* Delete intent object file
*/
static deleteIntentObjFile (intentObjectPath) {
static deleteIntentObjFile(intentObjectPath) {
try {
if (fs.existsSync(intentObjectPath)) {
fs.unlinkSync(intentObjectPath)
@ -82,7 +92,7 @@ class Brain {
/**
* Make Leon talk
*/
talk (rawSpeech, end = false) {
talk(rawSpeech, end = false) {
log.title('Leon')
log.info('Talking...')
@ -101,7 +111,7 @@ class Brain {
/**
* Pickup speech info we need to return
*/
wernicke (type, key, obj) {
wernicke(type, key, obj) {
let answer = ''
// Choose a random answer or a specific one
@ -129,7 +139,7 @@ class Brain {
* Execute Python skills
* TODO: split into several methods
*/
execute (obj, opts) {
execute(obj, opts) {
const executionTimeStart = Date.now()
opts = opts || {
mute: false // Close Leon mouth e.g. over HTTP
@ -137,11 +147,17 @@ class Brain {
return new Promise(async (resolve, reject) => {
const utteranceId = `${Date.now()}-${string.random(4)}`
const intentObjectPath = path.join(__dirname, `../tmp/${utteranceId}.json`)
const intentObjectPath = path.join(
__dirname,
`../tmp/${utteranceId}.json`
)
const speeches = []
// Ask to repeat if Leon is not sure about the request
if (obj.classification.confidence < langs[lang.getLongCode(this._lang)].min_confidence) {
if (
obj.classification.confidence <
langs[lang.getLongCode(this._lang)].min_confidence
) {
if (!opts.mute) {
const speech = `${this.wernicke('random_not_sure')}.`
@ -158,11 +174,18 @@ class Brain {
executionTime
})
} else {
const { configDataFilePath, classification: { action: actionName } } = obj
const { actions } = JSON.parse(fs.readFileSync(configDataFilePath, 'utf8'))
const {
configDataFilePath,
classification: { action: actionName }
} = obj
const { actions } = JSON.parse(
fs.readFileSync(configDataFilePath, 'utf8')
)
const action = actions[actionName]
const { type: actionType } = action
const nextAction = action.next_action ? actions[action.next_action] : null
const nextAction = action.next_action
? actions[action.next_action]
: null
if (actionType === 'logic') {
/**
@ -179,7 +202,7 @@ class Brain {
* 3. Run: PIPENV_PIPFILE=bridges/python/Pipfile pipenv run
* python bridges/python/main.py server/src/intent-object.sample.json
*/
const slots = { }
const slots = {}
if (obj.slots) {
Object.keys(obj.slots)?.forEach((slotName) => {
slots[slotName] = obj.slots[slotName].value
@ -201,7 +224,10 @@ class Brain {
try {
fs.writeFileSync(intentObjectPath, JSON.stringify(intentObj))
this.process = spawn(`pipenv run python bridges/python/main.py ${intentObjectPath}`, { shell: true })
this.process = spawn(
`pipenv run python bridges/python/main.py ${intentObjectPath}`,
{ shell: true }
)
} catch (e) {
log.error(`Failed to save intent object: ${e}`)
}
@ -210,7 +236,10 @@ class Brain {
const domainName = obj.classification.domain
const skillName = obj.classification.skill
const { name: domainFriendlyName } = domain.getDomainInfo(domainName)
const { name: skillFriendlyName } = domain.getSkillInfo(domainName, skillName)
const { name: skillFriendlyName } = domain.getSkillInfo(
domainName,
skillName
)
let output = ''
// Read output
@ -240,7 +269,9 @@ class Brain {
/* istanbul ignore next */
reject({
type: 'warning',
obj: new Error(`The "${skillFriendlyName}" skill from the "${domainFriendlyName}" domain is not well configured. Check the configuration file.`),
obj: new Error(
`The "${skillFriendlyName}" skill from the "${domainFriendlyName}" domain is not well configured. Check the configuration file.`
),
speeches,
executionTime
})
@ -252,7 +283,9 @@ class Brain {
/* istanbul ignore next */
reject({
type: 'error',
obj: new Error(`The "${skillFriendlyName}" skill from the "${domainFriendlyName}" domain isn't returning JSON format.`),
obj: new Error(
`The "${skillFriendlyName}" skill from the "${domainFriendlyName}" domain isn't returning JSON format.`
),
speeches,
executionTime
})
@ -261,8 +294,10 @@ class Brain {
// Handle error
this.process.stderr.on('data', (data) => {
const speech = `${this.wernicke('random_skill_errors', '',
{ '%skill_name%': skillFriendlyName, '%domain_name%': domainFriendlyName })}!`
const speech = `${this.wernicke('random_skill_errors', '', {
'%skill_name%': skillFriendlyName,
'%domain_name%': domainFriendlyName
})}!`
if (!opts.mute) {
this.talk(speech)
this._socket.emit('is-typing', false)
@ -305,8 +340,12 @@ class Brain {
/* istanbul ignore next */
// Synchronize the downloaded content if enabled
if (this.finalOutput.type === 'end' && this.finalOutput.options.synchronization && this.finalOutput.options.synchronization.enabled
&& this.finalOutput.options.synchronization.enabled === true) {
if (
this.finalOutput.type === 'end' &&
this.finalOutput.options.synchronization &&
this.finalOutput.options.synchronization.enabled &&
this.finalOutput.options.synchronization.enabled === true
) {
const sync = new Synchronizer(
this,
obj.classification,
@ -334,7 +373,10 @@ class Brain {
const executionTime = executionTimeEnd - executionTimeStart
// Send suggestions to the client
if (nextAction?.suggestions && this.finalOutput.core?.showNextActionSuggestions) {
if (
nextAction?.suggestions &&
this.finalOutput.core?.showNextActionSuggestions
) {
this._socket.emit('suggest', nextAction.suggestions)
}
if (action?.suggestions && this.finalOutput.core?.showSuggestions) {
@ -354,30 +396,43 @@ class Brain {
})
// Reset the child process
this.process = { }
this.process = {}
} else {
/**
* "Dialog" action skill execution
*/
const configFilePath = path.join(
process.cwd(), 'skills', obj.classification.domain, obj.classification.skill, 'config', `${this._lang}.json`
process.cwd(),
'skills',
obj.classification.domain,
obj.classification.skill,
'config',
`${this._lang}.json`
)
const { actions, entities } = await json.loadConfigData(
configFilePath,
this._lang
)
const { actions, entities } = await json.loadConfigData(configFilePath, this._lang)
const utteranceHasEntities = obj.entities.length > 0
const { answers: rawAnswers } = obj
let answers = rawAnswers
let answer = ''
if (!utteranceHasEntities) {
answers = answers.filter(({ answer }) => answer.indexOf('{{') === -1)
answers = answers.filter(
({ answer }) => answer.indexOf('{{') === -1
)
} else {
answers = answers.filter(({ answer }) => answer.indexOf('{{') !== -1)
answers = answers.filter(
({ answer }) => answer.indexOf('{{') !== -1
)
}
// When answers are simple without required entity
if (answers.length === 0) {
answer = rawAnswers[Math.floor(Math.random() * rawAnswers.length)]?.answer
answer =
rawAnswers[Math.floor(Math.random() * rawAnswers.length)]?.answer
// In case the expected answer requires a known entity
if (answer.indexOf('{{') !== -1) {
@ -394,7 +449,9 @@ class Brain {
*/
if (utteranceHasEntities && answer.indexOf('{{') !== -1) {
obj.currentEntities.forEach((entityObj) => {
answer = string.pnr(answer, { [`{{ ${entityObj.entity} }}`]: entityObj.resolution.value })
answer = string.pnr(answer, {
[`{{ ${entityObj.entity} }}`]: entityObj.resolution.value
})
// Find matches and map deeper data from the NLU file (global entities)
const matches = answer.match(/{{.+?}}/g)
@ -408,10 +465,13 @@ class Brain {
if (entity === entityObj.entity) {
// e.g. entities.color.options.red.data.usage
const valuesArr = entities[entity].options[entityObj.option].data[dataKey]
const valuesArr =
entities[entity].options[entityObj.option].data[dataKey]
answer = string.pnr(answer,
{ [match]: valuesArr[Math.floor(Math.random() * valuesArr.length)] })
answer = string.pnr(answer, {
[match]:
valuesArr[Math.floor(Math.random() * valuesArr.length)]
})
}
})
})
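The dialog branch above fills {{ entity }} placeholders in the picked answer through string.pnr. A toy substitute showing the substitution idea; this is not the actual helper, only a sketch.

// Replace every placeholder key found in the map (toy version of the substitution)
const fillPlaceholders = (answer, replacements) =>
  Object.keys(replacements).reduce(
    (acc, key) => acc.split(key).join(replacements[key]),
    answer
  )

// "My favorite color is red"
console.log(
  fillPlaceholders('My favorite color is {{ color }}', {
    '{{ color }}': 'red'
  })
)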

View File

@ -9,7 +9,7 @@ const defaultActiveContext = {
intent: null,
currentEntities: [],
entities: [],
slots: { },
slots: {},
isInActionLoop: false,
nextAction: null,
originalUtterance: null,
@ -17,28 +17,28 @@ const defaultActiveContext = {
}
class Conversation {
constructor (id = 'conv0') {
constructor(id = 'conv0') {
// Identify conversations to allow more features in the future (multiple speakers, etc.)
this._id = id
this._activeContext = defaultActiveContext
this._previousContexts = { }
this._previousContexts = {}
log.title('Conversation')
log.success('New instance')
}
get id () {
get id() {
return this._id
}
get activeContext () {
get activeContext() {
return this._activeContext
}
/**
* Activate context according to the triggered action
*/
set activeContext (contextObj) {
set activeContext(contextObj) {
const {
slots,
isInActionLoop,
@ -71,7 +71,7 @@ class Conversation {
intent,
currentEntities: [],
entities: [],
slots: { },
slots: {},
isInActionLoop,
nextAction,
originalUtterance: contextObj.originalUtterance,
@ -87,7 +87,10 @@ class Conversation {
const [skillName] = intent.split('.')
const newContextName = `${domain}.${skillName}`
if (this._activeContext.name && this._activeContext.name !== newContextName) {
if (
this._activeContext.name &&
this._activeContext.name !== newContextName
) {
this.cleanActiveContext()
}
@ -103,7 +106,7 @@ class Conversation {
intent,
currentEntities: entities,
entities,
slots: { },
slots: {},
isInActionLoop,
nextAction,
originalUtterance: contextObj.originalUtterance,
@ -120,21 +123,21 @@ class Conversation {
}
}
get previousContexts () {
get previousContexts() {
return this._previousContexts
}
/**
* Check whether there is an active context
*/
hasActiveContext () {
hasActiveContext() {
return !!this._activeContext.name
}
/**
* Set slots in active context
*/
setSlots (lang, entities, slots = this._activeContext.slots) {
setSlots(lang, entities, slots = this._activeContext.slots) {
const slotKeys = Object.keys(slots)
for (let i = 0; i < slotKeys.length; i += 1) {
@ -147,14 +150,16 @@ class Conversation {
// If it's the first slot setting grabbed from the model or not
if (isFirstSet) {
[slotName, slotEntity] = key.split('#')
;[slotName, slotEntity] = key.split('#')
questions = slotObj.locales[lang]
}
// Match the slot with the submitted entity and ensure the slot hasn't been filled yet
const [foundEntity] = entities
.filter(({ entity }) => entity === slotEntity && !slotObj.isFilled)
const pickedQuestion = questions[Math.floor(Math.random() * questions.length)]
const [foundEntity] = entities.filter(
({ entity }) => entity === slotEntity && !slotObj.isFilled
)
const pickedQuestion =
questions[Math.floor(Math.random() * questions.length)]
const slot = this._activeContext.slots[slotName]
const newSlot = {
name: slotName,
@ -171,13 +176,20 @@ class Conversation {
* or if it already set but the value has changed
* then set the slot
*/
if (!slot || !slot.isFilled
|| (slot.isFilled && newSlot.isFilled
&& slot.value.resolution.value !== newSlot.value.resolution.value)
if (
!slot ||
!slot.isFilled ||
(slot.isFilled &&
newSlot.isFilled &&
slot.value.resolution.value !== newSlot.value.resolution.value)
) {
if (newSlot?.isFilled) {
log.title('Conversation')
log.success(`Slot filled: { name: ${newSlot.name}, value: ${JSON.stringify(newSlot.value)} }`)
log.success(
`Slot filled: { name: ${newSlot.name}, value: ${JSON.stringify(
newSlot.value
)} }`
)
}
this._activeContext.slots[slotName] = newSlot
entities.shift()
@ -188,10 +200,11 @@ class Conversation {
/**
* Get the not yet filled slot if there is any
*/
getNotFilledSlot () {
getNotFilledSlot() {
const slotsKeys = Object.keys(this._activeContext.slots)
const [notFilledSlotKey] = slotsKeys
.filter((slotKey) => !this._activeContext.slots[slotKey].isFilled)
const [notFilledSlotKey] = slotsKeys.filter(
(slotKey) => !this._activeContext.slots[slotKey].isFilled
)
return this._activeContext.slots[notFilledSlotKey]
}
@ -199,14 +212,14 @@ class Conversation {
/**
* Check whether slots are all filled
*/
areSlotsAllFilled () {
areSlotsAllFilled() {
return !this.getNotFilledSlot()
}
/**
* Clean up active context
*/
cleanActiveContext () {
cleanActiveContext() {
log.title('Conversation')
log.info('Clean active context')
@ -217,7 +230,7 @@ class Conversation {
/**
* Push active context to the previous contexts stack
*/
pushToPreviousContextsStack () {
pushToPreviousContextsStack() {
const previousContextsKeys = Object.keys(this._previousContexts)
// Remove the oldest context from the history stack if it reaches the maximum limit
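For context, an illustrative slot entry as built by setSlots() above; the field names are inferred from the surrounding code and the sample values are made up.

const exampleSlot = {
  name: 'color',
  expectedEntity: 'color',
  value: { entity: 'color', resolution: { value: 'red' } },
  isFilled: true,
  questions: ['Which color would you like?'],
  pickedQuestion: 'Which color would you like?'
}

// A context is complete once no slot remains unfilled (cf. areSlotsAllFilled)
const allSlotsFilled = (slots) =>
  Object.keys(slots).every((slotKey) => slots[slotKey].isFilled)

console.log(allSlotsFilled({ color: exampleSlot })) // true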

View File

@ -20,7 +20,11 @@ const getDownloads = async (fastify, options) => {
let message = ''
if (request.query.domain && request.query.skill) {
const dlDomainDir = path.join(process.cwd(), 'downloads', request.query.domain)
const dlDomainDir = path.join(
process.cwd(),
'downloads',
request.query.domain
)
const skill = path.join(dlDomainDir, `${request.query.skill}.py`)
log.info(
@ -59,8 +63,8 @@ const getDownloads = async (fastify, options) => {
for (let i = 0; i < domainsFiles.length; i += 1) {
if (
domainsFiles[i].indexOf('.zip') !== -1
&& domainsFiles[i].indexOf(zipSlug) !== -1
domainsFiles[i].indexOf('.zip') !== -1 &&
domainsFiles[i].indexOf(zipSlug) !== -1
) {
fs.unlinkSync(`${dlDomainDir}/${domainsFiles[i]}`)
log.success(`${domainsFiles[i]} archive deleted`)

View File

@ -1,10 +1,7 @@
const corsMidd = async (request, reply) => {
// Allow only a specific client to request to the API (depending of the env)
if (process.env.LEON_NODE_ENV !== 'production') {
reply.header(
'Access-Control-Allow-Origin',
`${process.env.LEON_HOST}:3000`
)
reply.header('Access-Control-Allow-Origin', `${process.env.LEON_HOST}:3000`)
}
// Allow several headers for our requests

View File

@ -18,12 +18,12 @@ import downloadsPlugin from '@/core/http-server/api/downloads'
import log from '@/helpers/log'
import date from '@/helpers/date'
const server = { }
const server = {}
let mainProvider = {
id: 1,
brain: { },
nlu: { }
brain: {},
nlu: {}
}
let providers = []
const createProvider = async (id) => {
@ -33,9 +33,15 @@ const createProvider = async (id) => {
// Load NLP models
try {
await Promise.all([
nlu.loadGlobalResolversModel(join(process.cwd(), 'core/data/models/leon-global-resolvers-model.nlp')),
nlu.loadSkillsResolversModel(join(process.cwd(), 'core/data/models/leon-skills-resolvers-model.nlp')),
nlu.loadMainModel(join(process.cwd(), 'core/data/models/leon-main-model.nlp'))
nlu.loadGlobalResolversModel(
join(process.cwd(), 'core/data/models/leon-global-resolvers-model.nlp')
),
nlu.loadSkillsResolversModel(
join(process.cwd(), 'core/data/models/leon-skills-resolvers-model.nlp')
),
nlu.loadMainModel(
join(process.cwd(), 'core/data/models/leon-main-model.nlp')
)
])
return {
@ -74,14 +80,14 @@ const deleteProvider = (id) => {
if (id === '1') {
mainProvider = {
id: 1,
brain: { },
nlu: { }
brain: {},
nlu: {}
}
}
}
server.fastify = Fastify()
server.httpServer = { }
server.httpServer = {}
/**
* Generate skills routes
@ -93,7 +99,7 @@ server.generateSkillsRoutes = (instance) => {
instance.route({
method: endpoint.method,
url: endpoint.route,
async handler (request, reply) {
async handler(request, reply) {
const timeout = endpoint.timeout || 60000
const [, , , domain, skill, action] = endpoint.route.split('/')
const handleRoute = async () => {
@ -112,7 +118,8 @@ server.generateSkillsRoutes = (instance) => {
entity: param,
resolution: { ...value }
}
let entity = endpoint?.entitiesType === 'trim' ? trimEntity : builtInEntity
let entity =
endpoint?.entitiesType === 'trim' ? trimEntity : builtInEntity
if (Array.isArray(value)) {
value.forEach((v) => {
@ -248,7 +255,9 @@ server.handleOnConnection = (socket) => {
const utterance = data.value
try {
await provider.nlu.process(utterance)
} catch (e) { /* */ }
} catch (e) {
/* */
}
})
// Handle automatic speech recognition
@ -271,9 +280,12 @@ server.handleOnConnection = (socket) => {
* Launch server
*/
server.listen = async (port) => {
const io = process.env.LEON_NODE_ENV === 'development'
? socketio(server.httpServer, { cors: { origin: `${process.env.LEON_HOST}:3000` } })
: socketio(server.httpServer)
const io =
process.env.LEON_NODE_ENV === 'development'
? socketio(server.httpServer, {
cors: { origin: `${process.env.LEON_HOST}:3000` }
})
: socketio(server.httpServer)
io.on('connection', server.handleOnConnection)
@ -351,7 +363,7 @@ server.init = async () => {
log.success(`The current time zone is ${date.timeZone()}`)
const sLogger = (process.env.LEON_LOGGER !== 'true') ? 'disabled' : 'enabled'
const sLogger = process.env.LEON_LOGGER !== 'true' ? 'disabled' : 'enabled'
log.success(`Collaborative logger ${sLogger}`)
await addProvider('1')
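The development-only CORS setup above reduces to the following sketch, mirroring the socketio(httpServer, { cors }) call pattern used in this file; the port, origin and socket event name are placeholders.

const http = require('http')
const socketio = require('socket.io')

const httpServer = http.createServer()

const io =
  process.env.LEON_NODE_ENV === 'development'
    ? socketio(httpServer, { cors: { origin: 'http://localhost:3000' } })
    : socketio(httpServer)

io.on('connection', (socket) => {
  socket.on('utterance', (data) => {
    console.log('Utterance received:', data)
  })
})

httpServer.listen(1337)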

View File

@ -9,23 +9,25 @@ import log from '@/helpers/log'
import string from '@/helpers/string'
class Ner {
constructor (ner) {
constructor(ner) {
this.ner = ner
log.title('NER')
log.success('New instance')
}
static logExtraction (entities) {
static logExtraction(entities) {
log.title('NER')
log.success('Entities found:')
entities.forEach((ent) => log.success(`{ value: ${ent.sourceText}, entity: ${ent.entity} }`))
entities.forEach((ent) =>
log.success(`{ value: ${ent.sourceText}, entity: ${ent.entity} }`)
)
}
/**
* Grab entities and match them with the utterance
*/
extractEntities (lang, utteranceSamplesFilePath, obj) {
extractEntities(lang, utteranceSamplesFilePath, obj) {
return new Promise(async (resolve) => {
log.title('NER')
log.info('Searching for entities...')
@ -33,7 +35,9 @@ class Ner {
const { classification } = obj
// Remove end-punctuation and add an end-whitespace
const utterance = `${string.removeEndPunctuation(obj.utterance)} `
const { actions } = JSON.parse(fs.readFileSync(utteranceSamplesFilePath, 'utf8'))
const { actions } = JSON.parse(
fs.readFileSync(utteranceSamplesFilePath, 'utf8')
)
const { action } = classification
const promises = []
const actionEntities = actions[action].entities || []
@ -56,7 +60,10 @@ class Ner {
await Promise.all(promises)
const { entities } = await this.ner.process({ locale: lang, text: utterance })
const { entities } = await this.ner.process({
locale: lang,
text: utterance
})
// Normalize entities
entities.map((entity) => {
@ -86,14 +93,17 @@ class Ner {
/**
* Get spaCy entities from the TCP server
*/
static getSpacyEntities (utterance) {
static getSpacyEntities(utterance) {
return new Promise((resolve) => {
const spacyEntitiesReceivedHandler = async ({ spacyEntities }) => {
resolve(spacyEntities)
}
global.tcpClient.ee.removeAllListeners()
global.tcpClient.ee.on('spacy-entities-received', spacyEntitiesReceivedHandler)
global.tcpClient.ee.on(
'spacy-entities-received',
spacyEntitiesReceivedHandler
)
global.tcpClient.emit('get-spacy-entities', utterance)
})
@ -102,23 +112,30 @@ class Ner {
/**
* Inject trim type entities
*/
injectTrimEntity (lang, entity) {
injectTrimEntity(lang, entity) {
return new Promise((resolve) => {
for (let j = 0; j < entity.conditions.length; j += 1) {
const condition = entity.conditions[j]
const conditionMethod = `add${string.snakeToPascalCase(condition.type)}Condition`
const conditionMethod = `add${string.snakeToPascalCase(
condition.type
)}Condition`
if (condition.type === 'between') {
/**
* Conditions: https://github.com/axa-group/nlp.js/blob/master/docs/v3/ner-manager.md#trim-named-entities
* e.g. list.addBetweenCondition('en', 'list', 'create a', 'list')
*/
this.ner[conditionMethod](lang, entity.name, condition.from, condition.to)
this.ner[conditionMethod](
lang,
entity.name,
condition.from,
condition.to
)
} else if (condition.type.indexOf('after') !== -1) {
const rule = {
type: 'afterLast',
words: condition.from,
options: { }
options: {}
}
this.ner.addRule(lang, entity.name, 'trim', rule)
this.ner[conditionMethod](lang, entity.name, condition.from)
@ -134,7 +151,7 @@ class Ner {
/**
* Inject regex type entities
*/
injectRegexEntity (lang, entity) {
injectRegexEntity(lang, entity) {
return new Promise((resolve) => {
this.ner.addRegexRule(lang, entity.name, new RegExp(entity.regex, 'g'))
@ -145,7 +162,7 @@ class Ner {
/**
* Inject enum type entities
*/
injectEnumEntity (lang, entity) {
injectEnumEntity(lang, entity) {
return new Promise((resolve) => {
const { name: entityName, options } = entity
const optionKeys = Object.keys(options)
@ -164,7 +181,7 @@ class Ner {
* Get Microsoft builtin entities
* https://github.com/axa-group/nlp.js/blob/master/packages/builtin-microsoft/src/builtin-microsoft.js
*/
static getMicrosoftBuiltinEntities () {
static getMicrosoftBuiltinEntities() {
return [
'Number',
'Ordinal',

View File

@ -35,13 +35,13 @@ const defaultNluResultObj = {
}
class Nlu {
constructor (brain) {
constructor(brain) {
this.brain = brain
this.request = request
this.globalResolversNlp = { }
this.skillsResolversNlp = { }
this.mainNlp = { }
this.ner = { }
this.globalResolversNlp = {}
this.skillsResolversNlp = {}
this.mainNlp = {}
this.ner = {}
this.conv = new Conversation('conv0')
this.nluResultObj = defaultNluResultObj // TODO
@ -52,11 +52,16 @@ class Nlu {
/**
* Load the global resolvers NLP model from the latest training
*/
loadGlobalResolversModel (nlpModel) {
loadGlobalResolversModel(nlpModel) {
return new Promise(async (resolve, reject) => {
if (!fs.existsSync(nlpModel)) {
log.title('NLU')
reject({ type: 'warning', obj: new Error('The global resolvers NLP model does not exist, please run: npm run train') })
reject({
type: 'warning',
obj: new Error(
'The global resolvers NLP model does not exist, please run: npm run train'
)
})
} else {
log.title('NLU')
@ -75,7 +80,13 @@ class Nlu {
resolve()
} catch (err) {
this.brain.talk(`${this.brain.wernicke('random_errors')}! ${this.brain.wernicke('errors', 'nlu', { '%error%': err.message })}.`)
this.brain.talk(
`${this.brain.wernicke('random_errors')}! ${this.brain.wernicke(
'errors',
'nlu',
{ '%error%': err.message }
)}.`
)
this.brain.socket.emit('is-typing', false)
reject({ type: 'error', obj: err })
@ -87,11 +98,16 @@ class Nlu {
/**
* Load the skills resolvers NLP model from the latest training
*/
loadSkillsResolversModel (nlpModel) {
loadSkillsResolversModel(nlpModel) {
return new Promise(async (resolve, reject) => {
if (!fs.existsSync(nlpModel)) {
log.title('NLU')
reject({ type: 'warning', obj: new Error('The skills resolvers NLP model does not exist, please run: npm run train') })
reject({
type: 'warning',
obj: new Error(
'The skills resolvers NLP model does not exist, please run: npm run train'
)
})
} else {
log.title('NLU')
@ -110,7 +126,13 @@ class Nlu {
resolve()
} catch (err) {
this.brain.talk(`${this.brain.wernicke('random_errors')}! ${this.brain.wernicke('errors', 'nlu', { '%error%': err.message })}.`)
this.brain.talk(
`${this.brain.wernicke('random_errors')}! ${this.brain.wernicke(
'errors',
'nlu',
{ '%error%': err.message }
)}.`
)
this.brain.socket.emit('is-typing', false)
reject({ type: 'error', obj: err })
@ -122,20 +144,29 @@ class Nlu {
/**
* Load the main NLP model from the latest training
*/
loadMainModel (nlpModel) {
loadMainModel(nlpModel) {
return new Promise(async (resolve, reject) => {
if (!fs.existsSync(nlpModel)) {
log.title('NLU')
reject({ type: 'warning', obj: new Error('The main NLP model does not exist, please run: npm run train') })
reject({
type: 'warning',
obj: new Error(
'The main NLP model does not exist, please run: npm run train'
)
})
} else {
log.title('NLU')
try {
const container = await containerBootstrap()
container.register('extract-builtin-??', new BuiltinMicrosoft({
builtins: Ner.getMicrosoftBuiltinEntities()
}), true)
container.register(
'extract-builtin-??',
new BuiltinMicrosoft({
builtins: Ner.getMicrosoftBuiltinEntities()
}),
true
)
container.use(Nlp)
container.use(LangAll)
@ -150,7 +181,13 @@ class Nlu {
resolve()
} catch (err) {
this.brain.talk(`${this.brain.wernicke('random_errors')}! ${this.brain.wernicke('errors', 'nlu', { '%error%': err.message })}.`)
this.brain.talk(
`${this.brain.wernicke('random_errors')}! ${this.brain.wernicke(
'errors',
'nlu',
{ '%error%': err.message }
)}.`
)
this.brain.socket.emit('is-typing', false)
reject({ type: 'error', obj: err })
@ -162,16 +199,18 @@ class Nlu {
/**
* Check if NLP models exists
*/
hasNlpModels () {
return Object.keys(this.globalResolversNlp).length > 0
&& Object.keys(this.skillsResolversNlp).length > 0
&& Object.keys(this.mainNlp).length > 0
hasNlpModels() {
return (
Object.keys(this.globalResolversNlp).length > 0 &&
Object.keys(this.skillsResolversNlp).length > 0 &&
Object.keys(this.mainNlp).length > 0
)
}
/**
* Set new language; recreate a new TCP server with new language; and reprocess understanding
*/
switchLanguage (utterance, locale, opts) {
switchLanguage(utterance, locale, opts) {
const connectedHandler = async () => {
await this.process(utterance, opts)
}
@ -181,7 +220,10 @@ class Nlu {
// Recreate a new TCP server process and reconnect the TCP client
kill(global.tcpServerProcess.pid, () => {
global.tcpServerProcess = spawn(`pipenv run python bridges/python/tcp_server/main.py ${locale}`, { shell: true })
global.tcpServerProcess = spawn(
`pipenv run python bridges/python/tcp_server/main.py ${locale}`,
{ shell: true }
)
global.tcpClient = new TcpClient(
process.env.LEON_PY_TCP_SERVER_HOST,
@ -192,15 +234,18 @@ class Nlu {
global.tcpClient.ee.on('connected', connectedHandler)
})
return { }
return {}
}
/**
* Collaborative logger request
*/
sendLog (utterance) {
sendLog(utterance) {
/* istanbul ignore next */
if (process.env.LEON_LOGGER === 'true' && process.env.LEON_NODE_ENV !== 'testing') {
if (
process.env.LEON_LOGGER === 'true' &&
process.env.LEON_NODE_ENV !== 'testing'
) {
this.request
.post('https://logger.getleon.ai/v1/expressions')
.set('X-Origin', 'leon-core')
@ -210,15 +255,19 @@ class Nlu {
lang: this.brain.lang,
classification: this.nluResultObj.classification
})
.then(() => { /* */ })
.catch(() => { /* */ })
.then(() => {
/* */
})
.catch(() => {
/* */
})
}
}
/**
* Merge spaCy entities with the current NER instance
*/
async mergeSpacyEntities (utterance) {
async mergeSpacyEntities(utterance) {
const spacyEntities = await Ner.getSpacyEntities(utterance)
if (spacyEntities.length > 0) {
spacyEntities.forEach(({ entity, resolution }) => {
@ -238,10 +287,16 @@ class Nlu {
/**
* Handle in action loop logic before NLU processing
*/
async handleActionLoop (utterance, opts) {
async handleActionLoop(utterance, opts) {
const { domain, intent } = this.conv.activeContext
const [skillName, actionName] = intent.split('.')
const configDataFilePath = join(process.cwd(), 'skills', domain, skillName, `config/${this.brain.lang}.json`)
const configDataFilePath = join(
process.cwd(),
'skills',
domain,
skillName,
`config/${this.brain.lang}.json`
)
this.nluResultObj = {
...defaultNluResultObj, // Reset entities, slots, etc.
slots: this.conv.activeContext.slots,
@ -260,17 +315,20 @@ class Nlu {
this.nluResultObj
)
const { actions, resolvers } = JSON.parse(fs.readFileSync(configDataFilePath, 'utf8'))
const { actions, resolvers } = JSON.parse(
fs.readFileSync(configDataFilePath, 'utf8')
)
const action = actions[this.nluResultObj.classification.action]
const {
name: expectedItemName, type: expectedItemType
} = action.loop.expected_item
const { name: expectedItemName, type: expectedItemType } =
action.loop.expected_item
let hasMatchingEntity = false
let hasMatchingResolver = false
if (expectedItemType === 'entity') {
hasMatchingEntity = this.nluResultObj
.entities.filter(({ entity }) => expectedItemName === entity).length > 0
hasMatchingEntity =
this.nluResultObj.entities.filter(
({ entity }) => expectedItemName === entity
).length > 0
} else if (expectedItemType.indexOf('resolver') !== -1) {
const nlpObjs = {
global_resolver: this.globalResolversNlp,
@ -280,7 +338,12 @@ class Nlu {
const { intent } = result
const resolveResolvers = (resolver, intent) => {
const resolversPath = join(process.cwd(), 'core/data', this.brain.lang, 'global-resolvers')
const resolversPath = join(
process.cwd(),
'core/data',
this.brain.lang,
'global-resolvers'
)
// Load the skill resolver or the global resolver
const resolvedIntents = !intent.includes('resolver.global')
? resolvers[resolver]
@ -289,18 +352,26 @@ class Nlu {
// E.g. resolver.global.denial -> denial
intent = intent.substring(intent.lastIndexOf('.') + 1)
return [{
name: expectedItemName,
value: resolvedIntents.intents[intent].value
}]
return [
{
name: expectedItemName,
value: resolvedIntents.intents[intent].value
}
]
}
// Resolve resolver if global resolver or skill resolver has been found
if (intent && (intent.includes('resolver.global') || intent.includes(`resolver.${skillName}`))) {
if (
intent &&
(intent.includes('resolver.global') ||
intent.includes(`resolver.${skillName}`))
) {
log.title('NLU')
log.success('Resolvers resolved:')
this.nluResultObj.resolvers = resolveResolvers(expectedItemName, intent)
this.nluResultObj.resolvers.forEach((resolver) => log.success(`${intent}: ${JSON.stringify(resolver)}`))
this.nluResultObj.resolvers.forEach((resolver) =>
log.success(`${intent}: ${JSON.stringify(resolver)}`)
)
hasMatchingResolver = this.nluResultObj.resolvers.length > 0
}
}
@ -314,7 +385,9 @@ class Nlu {
}
try {
const processedData = await this.brain.execute(this.nluResultObj, { mute: opts.mute })
const processedData = await this.brain.execute(this.nluResultObj, {
mute: opts.mute
})
// Reprocess with the original utterance that triggered the context at first
if (processedData.core?.restart === true) {
const { originalUtterance } = this.conv.activeContext
@ -328,7 +401,10 @@ class Nlu {
* In case there is no next action to prepare anymore
* and there is an explicit stop of the loop from the skill
*/
if (!processedData.action.next_action && processedData.core?.isInActionLoop === false) {
if (
!processedData.action.next_action &&
processedData.core?.isInActionLoop === false
) {
this.conv.cleanActiveContext()
return null
}
@ -349,7 +425,7 @@ class Nlu {
/**
* Handle slot filling
*/
async handleSlotFilling (utterance, opts) {
async handleSlotFilling(utterance, opts) {
const processedData = await this.slotFill(utterance, opts)
/**
@ -386,7 +462,7 @@ class Nlu {
* pick-up the right classification
* and extract entities
*/
process (utterance, opts) {
process(utterance, opts) {
const processingTimeStart = Date.now()
return new Promise(async (resolve, reject) => {
@ -403,7 +479,8 @@ class Nlu {
this.brain.socket.emit('is-typing', false)
}
const msg = 'The NLP model is missing, please rebuild the project or if you are in dev run: npm run train'
const msg =
'The NLP model is missing, please rebuild the project or if you are in dev run: npm run train'
log.error(msg)
return reject(msg)
}
@ -423,15 +500,13 @@ class Nlu {
try {
return resolve(await this.handleSlotFilling(utterance, opts))
} catch (e) {
return reject({ })
return reject({})
}
}
}
const result = await this.mainNlp.process(utterance)
const {
locale, answers, classifications
} = result
const { locale, answers, classifications } = result
let { score, intent, domain } = result
/**
@ -470,9 +545,12 @@ class Nlu {
// Language isn't supported
if (!lang.getShortLangs().includes(locale)) {
this.brain.talk(`${this.brain.wernicke('random_language_not_supported')}.`, true)
this.brain.talk(
`${this.brain.wernicke('random_language_not_supported')}.`,
true
)
this.brain.socket.emit('is-typing', false)
return resolve({ })
return resolve({})
}
// Trigger language switching
@ -483,11 +561,16 @@ class Nlu {
this.sendLog()
if (intent === 'None') {
const fallback = this.fallback(langs[lang.getLongCode(locale)].fallbacks)
const fallback = this.fallback(
langs[lang.getLongCode(locale)].fallbacks
)
if (fallback === false) {
if (!opts.mute) {
this.brain.talk(`${this.brain.wernicke('random_unknown_intents')}.`, true)
this.brain.talk(
`${this.brain.wernicke('random_unknown_intents')}.`,
true
)
this.brain.socket.emit('is-typing', false)
}
@ -508,9 +591,17 @@ class Nlu {
}
log.title('NLU')
log.success(`Intent found: ${this.nluResultObj.classification.skill}.${this.nluResultObj.classification.action} (domain: ${this.nluResultObj.classification.domain})`)
log.success(
`Intent found: ${this.nluResultObj.classification.skill}.${this.nluResultObj.classification.action} (domain: ${this.nluResultObj.classification.domain})`
)
const configDataFilePath = join(process.cwd(), 'skills', this.nluResultObj.classification.domain, this.nluResultObj.classification.skill, `config/${this.brain.lang}.json`)
const configDataFilePath = join(
process.cwd(),
'skills',
this.nluResultObj.classification.domain,
this.nluResultObj.classification.skill,
`config/${this.brain.lang}.json`
)
this.nluResultObj.configDataFilePath = configDataFilePath
try {
@ -531,15 +622,18 @@ class Nlu {
const shouldSlotLoop = await this.routeSlotFilling(intent)
if (shouldSlotLoop) {
return resolve({ })
return resolve({})
}
// In case all slots have been filled in the first utterance
if (this.conv.hasActiveContext() && Object.keys(this.conv.activeContext.slots).length > 0) {
if (
this.conv.hasActiveContext() &&
Object.keys(this.conv.activeContext.slots).length > 0
) {
try {
return resolve(await this.handleSlotFilling(utterance, opts))
} catch (e) {
return reject({ })
return reject({})
}
}
@ -549,7 +643,7 @@ class Nlu {
}
this.conv.activeContext = {
lang: this.brain.lang,
slots: { },
slots: {},
isInActionLoop: false,
originalUtterance: this.nluResultObj.utterance,
configDataFilePath: this.nluResultObj.configDataFilePath,
@ -559,19 +653,22 @@ class Nlu {
entities: this.nluResultObj.entities
}
// Pass current utterance entities to the NLU result object
this.nluResultObj.currentEntities = this.conv.activeContext.currentEntities
this.nluResultObj.currentEntities =
this.conv.activeContext.currentEntities
// Pass context entities to the NLU result object
this.nluResultObj.entities = this.conv.activeContext.entities
try {
const processedData = await this.brain.execute(this.nluResultObj, { mute: opts.mute })
const processedData = await this.brain.execute(this.nluResultObj, {
mute: opts.mute
})
// Prepare next action if there is one queuing
if (processedData.nextAction) {
this.conv.cleanActiveContext()
this.conv.activeContext = {
lang: this.brain.lang,
slots: { },
slots: {},
isInActionLoop: !!processedData.nextAction.loop,
originalUtterance: processedData.utterance,
configDataFilePath: processedData.configDataFilePath,
@ -588,8 +685,7 @@ class Nlu {
return resolve({
processingTime, // In ms, total time
...processedData,
nluProcessingTime:
processingTime - processedData?.executionTime // In ms, NLU processing time only
nluProcessingTime: processingTime - processedData?.executionTime // In ms, NLU processing time only
})
} catch (e) /* istanbul ignore next */ {
log[e.type](e.obj.message)
@ -607,14 +703,20 @@ class Nlu {
* Build NLU data result object based on slots
* and ask for more entities if necessary
*/
async slotFill (utterance, opts) {
async slotFill(utterance, opts) {
if (!this.conv.activeContext.nextAction) {
return null
}
const { domain, intent } = this.conv.activeContext
const [skillName, actionName] = intent.split('.')
const configDataFilePath = join(process.cwd(), 'skills', domain, skillName, `config/${this.brain.lang}.json`)
const configDataFilePath = join(
process.cwd(),
'skills',
domain,
skillName,
`config/${this.brain.lang}.json`
)
this.nluResultObj = {
...defaultNluResultObj, // Reset entities, slots, etc.
@ -634,7 +736,9 @@ class Nlu {
// Continue to loop for questions if a slot has been filled correctly
let notFilledSlot = this.conv.getNotFilledSlot()
if (notFilledSlot && entities.length > 0) {
const hasMatch = entities.some(({ entity }) => entity === notFilledSlot.expectedEntity)
const hasMatch = entities.some(
({ entity }) => entity === notFilledSlot.expectedEntity
)
if (hasMatch) {
this.conv.setSlots(this.brain.lang, entities)
@ -644,7 +748,7 @@ class Nlu {
this.brain.talk(notFilledSlot.pickedQuestion)
this.brain.socket.emit('is-typing', false)
return { }
return {}
}
}
}
@ -655,7 +759,9 @@ class Nlu {
this.nluResultObj = {
...defaultNluResultObj, // Reset entities, slots, etc.
// Assign slots only if there is a next action
slots: this.conv.activeContext.nextAction ? this.conv.activeContext.slots : { },
slots: this.conv.activeContext.nextAction
? this.conv.activeContext.slots
: {},
utterance: this.conv.activeContext.originalUtterance,
configDataFilePath,
classification: {
@ -681,7 +787,7 @@ class Nlu {
* 2. If the context is expecting slots, then loop over questions to slot fill
* 3. Or go to the brain executor if all slots have been filled in one shot
*/
async routeSlotFilling (intent) {
async routeSlotFilling(intent) {
const slots = await this.mainNlp.slotManager.getMandatorySlots(intent)
const hasMandatorySlots = Object.keys(slots)?.length > 0
@ -701,9 +807,12 @@ class Nlu {
const notFilledSlot = this.conv.getNotFilledSlot()
// Loop for questions if a slot hasn't been filled
if (notFilledSlot) {
const { actions } = JSON.parse(fs.readFileSync(this.nluResultObj.configDataFilePath, 'utf8'))
const [currentSlot] = actions[this.nluResultObj.classification.action].slots
.filter(({ name }) => name === notFilledSlot.name)
const { actions } = JSON.parse(
fs.readFileSync(this.nluResultObj.configDataFilePath, 'utf8')
)
const [currentSlot] = actions[
this.nluResultObj.classification.action
].slots.filter(({ name }) => name === notFilledSlot.name)
this.brain.socket.emit('suggest', currentSlot.suggestions)
this.brain.talk(notFilledSlot.pickedQuestion)
@ -720,7 +829,7 @@ class Nlu {
* Pickup and compare the right fallback
* according to the wished skill action
*/
fallback (fallbacks) {
fallback(fallbacks) {
const words = this.nluResultObj.utterance.toLowerCase().split(' ')
if (fallbacks.length > 0) {

View File

@ -6,7 +6,7 @@ import { waterfall } from 'async'
import log from '@/helpers/log'
class Synchronizer {
constructor (brain, classification, sync) {
constructor(brain, classification, sync) {
this.brain = brain
this.classification = classification
this.sync = sync
@ -19,10 +19,15 @@ class Synchronizer {
/**
* Choose the right method to synchronize
*/
async synchronize (cb) {
async synchronize(cb) {
let code = 'synced_direct'
this.brain.talk(`${this.brain.wernicke('synchronizer', `syncing_${this.sync.method.toLowerCase().replace('-', '_')}`)}.`)
this.brain.talk(
`${this.brain.wernicke(
'synchronizer',
`syncing_${this.sync.method.toLowerCase().replace('-', '_')}`
)}.`
)
this.brain.socket.emit('is-typing', false)
if (this.sync.method === 'google-drive') {
@ -38,7 +43,7 @@ class Synchronizer {
/**
* Direct synchronization method
*/
direct () {
direct() {
return new Promise((resolve) => {
this.brain.socket.emit('download', {
domain: this.classification.domain,
@ -53,13 +58,21 @@ class Synchronizer {
/**
* Google Drive synchronization method
*/
googleDrive () {
googleDrive() {
/* istanbul ignore next */
return new Promise((resolve, reject) => {
const driveFolderName = `leon-${this.classification.domain}-${this.classification.skill}`
const folderMimeType = 'application/vnd.google-apps.folder'
const entities = fs.readdirSync(this.downloadDir)
const key = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'core/config/synchronizer/google-drive.json'), 'utf8'))
const key = JSON.parse(
fs.readFileSync(
path.join(
process.cwd(),
'core/config/synchronizer/google-drive.json'
),
'utf8'
)
)
const authClient = new google.auth.JWT(
key.client_email,
key,
@ -74,125 +87,151 @@ class Synchronizer {
})
let folderId = ''
waterfall([
(cb) => {
drive.files.list({ }, (err, list) => {
if (err) {
log.error(`Error during listing: ${err}`)
return reject(err)
}
cb(null, list)
return true
})
},
(list, cb) => {
if (list.data.files.length === 0) {
return cb(null, false, folderId)
}
// Browse entities
for (let i = 0; i < list.data.files.length; i += 1) {
// In case the skill folder exists
if (list.data.files[i].mimeType === folderMimeType
&& list.data.files[i].name === driveFolderName) {
folderId = list.data.files[i].id
return cb(null, true, folderId)
} else if ((i + 1) === list.data.files.length) { // eslint-disable-line no-else-return
waterfall(
[
(cb) => {
drive.files.list({}, (err, list) => {
if (err) {
log.error(`Error during listing: ${err}`)
return reject(err)
}
cb(null, list)
return true
})
},
(list, cb) => {
if (list.data.files.length === 0) {
return cb(null, false, folderId)
}
// TODO: UI toolbox to reach this scope
// Delete Drive files
/* setTimeout(() => {
// Browse entities
for (let i = 0; i < list.data.files.length; i += 1) {
// In case the skill folder exists
if (
list.data.files[i].mimeType === folderMimeType &&
list.data.files[i].name === driveFolderName
) {
folderId = list.data.files[i].id
return cb(null, true, folderId)
} else if (i + 1 === list.data.files.length) {
// eslint-disable-line no-else-return
return cb(null, false, folderId)
}
// TODO: UI toolbox to reach this scope
// Delete Drive files
/* setTimeout(() => {
drive.files.delete({ fileId: list.data.files[i].id })
log.title('Synchronizer'); log.success(`"${list.data.files[i].id}" deleted`)
}, 200 * i) */
}
}
return false
},
(folderExists, folderId, cb) => {
if (folderExists === false) {
// Create the skill folder if it does not exist
drive.files.create({
resource: {
name: driveFolderName,
mimeType: folderMimeType
},
fields: 'id'
}, (err, folder) => {
if (err) {
log.error(`Error during the folder creation: ${err}`)
return reject(err)
}
folderId = folder.data.id
log.title('Synchronizer'); log.success(`"${driveFolderName}" folder created on Google Drive`)
// Give ownership
return drive.permissions.create({
resource: {
type: 'user',
role: 'owner',
emailAddress: this.sync.email
return false
},
(folderExists, folderId, cb) => {
if (folderExists === false) {
// Create the skill folder if it does not exist
drive.files.create(
{
resource: {
name: driveFolderName,
mimeType: folderMimeType
},
fields: 'id'
},
emailMessage: 'Hey, I created a new folder to wrap your new content, cheers. Leon.',
transferOwnership: true,
fileId: folderId
}, (err) => {
if (err) {
log.error(`Error during the folder permission creation: ${err}`)
return reject(err)
(err, folder) => {
if (err) {
log.error(`Error during the folder creation: ${err}`)
return reject(err)
}
folderId = folder.data.id
log.title('Synchronizer')
log.success(
`"${driveFolderName}" folder created on Google Drive`
)
// Give ownership
return drive.permissions.create(
{
resource: {
type: 'user',
role: 'owner',
emailAddress: this.sync.email
},
emailMessage:
'Hey, I created a new folder to wrap your new content, cheers. Leon.',
transferOwnership: true,
fileId: folderId
},
(err) => {
if (err) {
log.error(
`Error during the folder permission creation: ${err}`
)
return reject(err)
}
log.success(`"${driveFolderName}" ownership transferred`)
cb(null, folderId)
return true
}
)
}
log.success(`"${driveFolderName}" ownership transferred`)
cb(null, folderId)
return true
})
})
} else {
return cb(null, folderId)
)
} else {
return cb(null, folderId)
}
return false
},
(folderId, cb) => {
let iEntities = 0
const upload = (i) => {
drive.files.create(
{
resource: {
name: entities[i],
parents: [folderId]
},
media: {
body: fs.createReadStream(
`${this.downloadDir}/${entities[i]}`
)
},
fields: 'id'
},
(err) => {
if (err) {
log.error(
`Error during the "${entities[i]}" file creation: ${err}`
)
return reject(err)
}
iEntities += 1
log.title('Synchronizer')
log.success(`"${entities[i]}" file added to Google Drive`)
if (iEntities === entities.length) {
cb(null)
}
return true
}
)
}
// Browse entities in Leon's memory
for (let i = 0; i < entities.length; i += 1) {
// Upload file to Drive
upload(i)
}
}
return false
},
(folderId, cb) => {
let iEntities = 0
const upload = (i) => {
drive.files.create({
resource: {
name: entities[i],
parents: [folderId]
},
media: {
body: fs.createReadStream(`${this.downloadDir}/${entities[i]}`)
},
fields: 'id'
}, (err) => {
if (err) {
log.error(`Error during the "${entities[i]}" file creation: ${err}`)
return reject(err)
}
iEntities += 1
log.title('Synchronizer'); log.success(`"${entities[i]}" file added to Google Drive`)
if (iEntities === entities.length) {
cb(null)
}
return true
})
}
// Browse entities in Leon's memory
for (let i = 0; i < entities.length; i += 1) {
// Upload file to Drive
upload(i)
],
(err) => {
if (err) {
log.error(err)
return reject(err)
}
// Content available on Google Drive
resolve()
return true
}
], (err) => {
if (err) {
log.error(err)
return reject(err)
}
// Content available on Google Drive
resolve()
return true
})
)
})
}
}

View File

@ -6,7 +6,7 @@ import log from '@/helpers/log'
const interval = 3000 // ms
export default class TcpClient {
constructor (host, port) {
constructor(host, port) {
this.host = host
this.port = port
this.reconnectCounter = 0
@ -68,19 +68,19 @@ export default class TcpClient {
}, interval)
}
get status () {
get status() {
return this._status
}
get ee () {
get ee() {
return this._ee
}
get isConnected () {
get isConnected() {
return this._isConnected
}
emit (topic, data) {
emit(topic, data) {
const obj = {
topic,
data
@ -89,7 +89,7 @@ export default class TcpClient {
this.tcpSocket.write(JSON.stringify(obj))
}
connect () {
connect() {
this.tcpSocket.connect({
host: this.host,
port: this.port

View File

@ -9,4 +9,4 @@ declare global {
var tcpClient: TcpClient
}
export { }
export {}

View File

@ -1,13 +1,16 @@
import moment from 'moment-timezone'
const date = { }
const date = {}
date.dateTime = () => moment().tz(date.timeZone()).format()
date.timeZone = () => {
let timeZone = moment.tz.guess()
if (process.env.LEON_TIME_ZONE && !!moment.tz.zone(process.env.LEON_TIME_ZONE)) {
if (
process.env.LEON_TIME_ZONE &&
!!moment.tz.zone(process.env.LEON_TIME_ZONE)
) {
timeZone = process.env.LEON_TIME_ZONE
}
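A quick sketch of the override above, assuming LEON_TIME_ZONE holds a valid IANA zone name:

process.env.LEON_TIME_ZONE = 'Europe/Paris' // hypothetical value; must be recognized by moment.tz.zone()
date.timeZone() // 'Europe/Paris' instead of the guessed zone
date.dateTime() // ISO 8601 string in that zone (the actual timestamp depends on when it runs)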

View File

@ -1,57 +1,63 @@
import fs from 'fs'
import path from 'path'
const domain = { }
const domain = {}
const domainsDir = path.join(process.cwd(), 'skills')
domain.getDomainsObj = async () => {
const domainsObj = { }
const domainsObj = {}
await Promise.all(fs.readdirSync(domainsDir).map(async (entity) => {
const domainPath = path.join(domainsDir, entity)
await Promise.all(
fs.readdirSync(domainsDir).map(async (entity) => {
const domainPath = path.join(domainsDir, entity)
if (fs.statSync(domainPath).isDirectory()) {
const skillObj = { }
const { name: domainName } = await import(path.join(domainPath, 'domain.json'))
const skillFolders = fs.readdirSync(domainPath)
if (fs.statSync(domainPath).isDirectory()) {
const skillObj = {}
const { name: domainName } = await import(
path.join(domainPath, 'domain.json')
)
const skillFolders = fs.readdirSync(domainPath)
for (let i = 0; i < skillFolders.length; i += 1) {
const skillPath = path.join(domainPath, skillFolders[i])
for (let i = 0; i < skillFolders.length; i += 1) {
const skillPath = path.join(domainPath, skillFolders[i])
if (fs.statSync(skillPath).isDirectory()) {
const { name: skillName, bridge: skillBridge } = JSON.parse(fs.readFileSync(path.join(skillPath, 'skill.json'), 'utf8'))
if (fs.statSync(skillPath).isDirectory()) {
const { name: skillName, bridge: skillBridge } = JSON.parse(
fs.readFileSync(path.join(skillPath, 'skill.json'), 'utf8')
)
skillObj[skillName] = {
name: skillFolders[i],
path: skillPath,
bridge: skillBridge
skillObj[skillName] = {
name: skillFolders[i],
path: skillPath,
bridge: skillBridge
}
}
domainsObj[domainName] = {
name: entity,
path: domainPath,
skills: skillObj
}
}
domainsObj[domainName] = {
name: entity,
path: domainPath,
skills: skillObj
}
}
}
return null
}))
return null
})
)
return domainsObj
}
domain.list = async () => Object.keys(await domain.getDomainsObj())
domain.getDomainInfo = (domain) => JSON.parse(fs.readFileSync(
path.join(domainsDir, domain, 'domain.json'),
'utf8'
))
domain.getDomainInfo = (domain) =>
JSON.parse(
fs.readFileSync(path.join(domainsDir, domain, 'domain.json'), 'utf8')
)
domain.getSkillInfo = (domain, skill) => JSON.parse(fs.readFileSync(
path.join(domainsDir, domain, skill, 'skill.json'),
'utf8'
))
domain.getSkillInfo = (domain, skill) =>
JSON.parse(
fs.readFileSync(path.join(domainsDir, domain, skill, 'skill.json'), 'utf8')
)
export default domain
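A rough sketch of the object getDomainsObj resolves to, assuming a hypothetical "games" domain folder containing a single "akinator" skill (the display names and the bridge value come from the corresponding domain.json/skill.json files and are assumptions here):

// Inside an async context:
const domains = await domain.getDomainsObj()
// {
//   'Games': {
//     name: 'games', // domain folder name
//     path: '<project root>/skills/games',
//     skills: {
//       'Akinator': {
//         name: 'akinator', // skill folder name
//         path: '<project root>/skills/games/akinator',
//         bridge: 'python' // hypothetical "bridge" value from skill.json
//       }
//     }
//   }
// }
const names = await domain.list() // ['Games'] in this sketch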

View File

@ -1,7 +1,7 @@
import fs from 'fs'
import path from 'path'
const json = { }
const json = {}
json.loadConfigData = async (configFilePath, lang) => {
const sharedDataPath = path.join(process.cwd(), 'core/data', lang)
@ -14,7 +14,9 @@ json.loadConfigData = async (configFilePath, lang) => {
entitiesKeys.forEach((entity) => {
if (typeof entities[entity] === 'string') {
entities[entity] = JSON.parse(fs.readFileSync(path.join(sharedDataPath, entities[entity]), 'utf8'))
entities[entity] = JSON.parse(
fs.readFileSync(path.join(sharedDataPath, entities[entity]), 'utf8')
)
}
})
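A small illustration of the branch above, assuming a hypothetical skill config whose entities map references a shared data file:

// skills/<domain>/<skill>/config/en.json (hypothetical excerpt):
//   { "entities": { "color": "color.json" } }
// After json.loadConfigData(configFilePath, 'en'), entities.color no longer holds the
// string 'color.json' but the parsed content of core/data/en/color.json.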

View File

@ -1,6 +1,6 @@
import { langs } from '@@/core/langs.json'
const lang = { }
const lang = {}
lang.getShortLangs = () => Object.keys(langs).map((lang) => langs[lang].short)

View File

@ -6,11 +6,11 @@ const sentences = [
'This process takes time, please go for a coffee (or a fruit juice)',
'This may take a while, grab a drink and come back later',
'Go for a walk, this action takes time',
'That may take some time, let\'s chill and relax',
"That may take some time, let's chill and relax",
'Leon will be ready for you in a moment'
]
const spinner = new Spinner('\x1b[95m%s\x1b[0m\r').setSpinnerString(18)
const loader = { }
const loader = {}
let intervalId = 0
/**

View File

@ -2,7 +2,7 @@ import fs from 'fs'
import date from '@/helpers/date'
const log = { }
const log = {}
log.success = (value) => console.log('\x1b[32m✅ %s\x1b[0m', value)
@ -33,7 +33,8 @@ log.warning = (value) => console.warn('\x1b[33m⚠ %s\x1b[0m', value)
log.debug = (value) => console.info('\u001b[35m🐞 [DEBUG] %s\x1b[0m', value)
log.title = (value) => console.log('\n\n\x1b[7m.: %s :.\x1b[0m', value.toUpperCase())
log.title = (value) =>
console.log('\n\n\x1b[7m.: %s :.\x1b[0m', value.toUpperCase())
log.default = (value) => console.log('%s', value)

View File

@ -1,6 +1,6 @@
import o from 'os'
const os = { }
const os = {}
/**
* Returns information about your OS

View File

@ -1,9 +1,13 @@
const string = { }
const string = {}
/**
* Parse, map (with object) and replace value(s) in a string
*/
string.pnr = (s, obj) => s.replace(new RegExp(Object.keys(obj).join('|'), 'gi'), (matched) => obj[matched])
string.pnr = (s, obj) =>
s.replace(
new RegExp(Object.keys(obj).join('|'), 'gi'),
(matched) => obj[matched]
)
/**
* Uppercase for the first letter
@ -13,7 +17,11 @@ string.ucfirst = (s) => s.charAt(0).toUpperCase() + s.substr(1)
/**
* Transform snake_case string to PascalCase
*/
string.snakeToPascalCase = (s) => s.split('_').map((chunk) => string.ucfirst(chunk)).join('')
string.snakeToPascalCase = (s) =>
s
.split('_')
.map((chunk) => string.ucfirst(chunk))
.join('')
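Two short usage sketches of the helpers above (the input strings are made up for illustration):

string.pnr('Hello %name%', { '%name%': 'Leon' }) // 'Hello Leon'
string.snakeToPascalCase('guess_the_number') // 'GuessTheNumber'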
/**
* Random string

View File

@ -4,13 +4,17 @@ import { spawn } from 'child_process'
import lang from '@/helpers/lang'
import TcpClient from '@/core/tcp-client'
import server from '@/core/http-server/server'
(async () => {
;(async () => {
dotenv.config()
process.title = 'leon'
global.tcpServerProcess = spawn(`pipenv run python bridges/python/tcp_server/main.py ${lang.getShortCode(process.env['LEON_LANG'])}`, { shell: true })
global.tcpServerProcess = spawn(
`pipenv run python bridges/python/tcp_server/main.py ${lang.getShortCode(
process.env['LEON_LANG']
)}`,
{ shell: true }
)
global.tcpClient = new TcpClient(
process.env['LEON_PY_TCP_SERVER_HOST'],

View File

@ -6,8 +6,8 @@ import log from '@/helpers/log'
log.title('Coqui STT Parser')
const parser = { }
let model = { }
const parser = {}
let model = {}
let desiredSampleRate = 16000
/**
@ -25,13 +25,17 @@ parser.init = (args) => {
log.info(`Loading model from file ${args.model}...`)
if (!fs.existsSync(args.model)) {
log.error(`Cannot find ${args.model}. You can setup the offline STT by running: "npm run setup:offline-stt"`)
log.error(
`Cannot find ${args.model}. You can setup the offline STT by running: "npm run setup:offline-stt"`
)
return false
}
if (!fs.existsSync(args.scorer)) {
log.error(`Cannot find ${args.scorer}. You can setup the offline STT by running: "npm run setup:offline-stt"`)
log.error(
`Cannot find ${args.scorer}. You can setup the offline STT by running: "npm run setup:offline-stt"`
)
return false
}
@ -64,7 +68,9 @@ parser.parse = (buffer, cb) => {
const wavDecode = wav.decode(buffer)
if (wavDecode.sampleRate < desiredSampleRate) {
log.warning(`Original sample rate (${wavDecode.sampleRate}) is lower than ${desiredSampleRate}Hz. Up-sampling might produce erratic speech recognition`)
log.warning(
`Original sample rate (${wavDecode.sampleRate}) is lower than ${desiredSampleRate}Hz. Up-sampling might produce erratic speech recognition`
)
}
/* istanbul ignore if */
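A minimal initialization sketch for this parser; the model and scorer paths below are assumptions, not values taken from this diff:

parser.init({
  model: 'bin/coqui/model.tflite', // hypothetical model path
  scorer: 'bin/coqui/huge-vocabulary.scorer' // hypothetical scorer path
})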

View File

@ -5,8 +5,8 @@ import log from '@/helpers/log'
log.title('Google Cloud STT Parser')
const parser = { }
let client = { }
const parser = {}
let client = {}
parser.conf = {
languageCode: process.env.LEON_LANG,
@ -19,7 +19,10 @@ parser.conf = {
* the env variable "GOOGLE_APPLICATION_CREDENTIALS" provides the JSON file path
*/
parser.init = () => {
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(process.cwd(), 'core/config/voice/google-cloud.json')
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(
process.cwd(),
'core/config/voice/google-cloud.json'
)
try {
client = new stt.SpeechClient()
@ -42,7 +45,9 @@ parser.parse = async (buffer, cb) => {
audio,
config: parser.conf
})
const string = res[0].results.map((data) => data.alternatives[0].transcript).join('\n')
const string = res[0].results
.map((data) => data.alternatives[0].transcript)
.join('\n')
cb({ string })
} catch (e) {

View File

@ -5,15 +5,11 @@ import Asr from '@/core/asr'
import log from '@/helpers/log'
class Stt {
constructor (socket, provider) {
constructor(socket, provider) {
this.socket = socket
this.provider = provider
this.providers = [
'google-cloud-stt',
'watson-stt',
'coqui-stt'
]
this.parser = { }
this.providers = ['google-cloud-stt', 'watson-stt', 'coqui-stt']
this.parser = {}
log.title('STT')
log.success('New instance')
@ -22,21 +18,35 @@ class Stt {
/**
* Initialize the STT provider
*/
init (cb) {
init(cb) {
log.info('Initializing STT...')
if (!this.providers.includes(this.provider)) {
log.error(`The STT provider "${this.provider}" does not exist or is not yet supported`)
log.error(
`The STT provider "${this.provider}" does not exist or is not yet supported`
)
return false
}
/* istanbul ignore next */
if (this.provider === 'google-cloud-stt' && typeof process.env.GOOGLE_APPLICATION_CREDENTIALS === 'undefined') {
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(process.cwd(), 'core/config/voice/google-cloud.json')
} else if (typeof process.env.GOOGLE_APPLICATION_CREDENTIALS !== 'undefined'
&& process.env.GOOGLE_APPLICATION_CREDENTIALS.indexOf('google-cloud.json') === -1) {
log.warning(`The "GOOGLE_APPLICATION_CREDENTIALS" env variable is already settled with the following value: "${process.env.GOOGLE_APPLICATION_CREDENTIALS}"`)
if (
this.provider === 'google-cloud-stt' &&
typeof process.env.GOOGLE_APPLICATION_CREDENTIALS === 'undefined'
) {
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(
process.cwd(),
'core/config/voice/google-cloud.json'
)
} else if (
typeof process.env.GOOGLE_APPLICATION_CREDENTIALS !== 'undefined' &&
process.env.GOOGLE_APPLICATION_CREDENTIALS.indexOf(
'google-cloud.json'
) === -1
) {
log.warning(
`The "GOOGLE_APPLICATION_CREDENTIALS" env variable is already settled with the following value: "${process.env.GOOGLE_APPLICATION_CREDENTIALS}"`
)
}
/* istanbul ignore if */
@ -58,7 +68,7 @@ class Stt {
* Forward string output to the client
* and delete audio files once it has been forwarded
*/
forward (string) {
forward(string) {
this.socket.emit('recognized', string, (confirmation) => {
/* istanbul ignore next */
if (confirmation === 'string-received') {
@ -72,7 +82,7 @@ class Stt {
/**
* Read the speech file and parse
*/
parse (file) {
parse(file) {
log.info('Parsing WAVE file...')
if (!fs.existsSync(file)) {
@ -100,7 +110,7 @@ class Stt {
/**
* Delete audio files
*/
static deleteAudios () {
static deleteAudios() {
return new Promise((resolve) => {
const audios = Object.keys(Asr.audios)
@ -111,7 +121,7 @@ class Stt {
fs.unlinkSync(Asr.audios[audios[i]])
}
if ((i + 1) === audios.length) {
if (i + 1 === audios.length) {
resolve()
}
}

View File

@ -8,8 +8,8 @@ import log from '@/helpers/log'
log.title('Watson STT Parser')
const parser = { }
let client = { }
const parser = {}
let client = {}
parser.conf = {
contentType: 'audio/wav',
@ -20,7 +20,12 @@ parser.conf = {
* Initialize Watson Speech-to-Text based on credentials in the JSON file
*/
parser.init = () => {
const config = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'core/config/voice/watson-stt.json'), 'utf8'))
const config = JSON.parse(
fs.readFileSync(
path.join(process.cwd(), 'core/config/voice/watson-stt.json'),
'utf8'
)
)
try {
client = new Stt({
@ -43,9 +48,12 @@ parser.parse = async (buffer, cb) => {
stream.push(null)
parser.conf.audio = stream
client.recognize(parser.conf)
client
.recognize(parser.conf)
.then(({ result }) => {
const string = result.results.map((data) => data.alternatives[0].transcript).join('\n')
const string = result.results
.map((data) => data.alternatives[0].transcript)
.join('\n')
cb({ string })
})
@ -57,7 +65,9 @@ parser.parse = async (buffer, cb) => {
if (err) {
log.error(`Watson STT: ${err}`)
} else {
const string = res.results.map((data) => data.alternatives[0].transcript).join('\n')
const string = res.results
.map((data) => data.alternatives[0].transcript)
.join('\n')
cb({ string })
}

View File

@ -10,7 +10,7 @@ import string from '@/helpers/string'
log.title('Amazon Polly Synthesizer')
const synthesizer = { }
const synthesizer = {}
const voices = {
'en-US': {
VoiceId: 'Matthew'
@ -19,7 +19,7 @@ const voices = {
VoiceId: 'Mathieu'
}
}
let client = { }
let client = {}
synthesizer.conf = {
OutputFormat: 'mp3',
@ -30,7 +30,12 @@ synthesizer.conf = {
* Initialize Amazon Polly based on credentials in the JSON file
*/
synthesizer.init = (lang) => {
const config = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'core/config/voice/amazon.json'), 'utf8'))
const config = JSON.parse(
fs.readFileSync(
path.join(process.cwd(), 'core/config/voice/amazon.json'),
'utf8'
)
)
synthesizer.conf.VoiceId = voices[lang].VoiceId
try {
@ -50,7 +55,8 @@ synthesizer.save = (speech, em, cb) => {
synthesizer.conf.Text = speech
client.send(new SynthesizeSpeechCommand(synthesizer.conf))
client
.send(new SynthesizeSpeechCommand(synthesizer.conf))
.then(({ AudioStream }) => {
const wStream = fs.createWriteStream(file)
@ -78,7 +84,9 @@ synthesizer.save = (speech, em, cb) => {
})
.catch((err) => {
if (err.code === 'UnknownEndpoint') {
log.error(`Amazon Polly: the region "${err.region}" does not exist or does not support the Polly service`)
log.error(
`Amazon Polly: the region "${err.region}" does not exist or does not support the Polly service`
)
} else {
log.error(`Amazon Polly: ${err.message}`)
}

View File

@ -9,7 +9,7 @@ import string from '@/helpers/string'
log.title('Flite Synthesizer')
const synthesizer = { }
const synthesizer = {}
synthesizer.conf = {
int_f0_target_mean: 115.0, // Intonation (85-180 Hz men; 165-255 Hz women)
@ -26,12 +26,16 @@ synthesizer.init = (lang) => {
/* istanbul ignore if */
if (lang !== 'en-US') {
log.warning('The Flite synthesizer only accepts the "en-US" language for the moment')
log.warning(
'The Flite synthesizer only accepts the "en-US" language for the moment'
)
}
/* istanbul ignore if */
if (!fs.existsSync(flitePath)) {
log.error(`Cannot find ${flitePath} You can setup the offline TTS by running: "npm run setup:offline-tts"`)
log.error(
`Cannot find ${flitePath} You can setup the offline TTS by running: "npm run setup:offline-tts"`
)
return false
}
@ -47,11 +51,16 @@ synthesizer.save = (speech, em, cb) => {
const file = `${__dirname}/../../tmp/${Date.now()}-${string.random(4)}.wav`
const process = spawn('bin/flite/flite', [
speech,
'--setf', `int_f0_target_mean=${synthesizer.conf.int_f0_target_mean}`,
'--setf', `f0_shift=${synthesizer.conf.f0_shift}`,
'--setf', `duration_stretch=${synthesizer.conf.duration_stretch}`,
'--setf', `int_f0_target_stddev=${synthesizer.conf.int_f0_target_stddev}`,
'-o', file
'--setf',
`int_f0_target_mean=${synthesizer.conf.int_f0_target_mean}`,
'--setf',
`f0_shift=${synthesizer.conf.f0_shift}`,
'--setf',
`duration_stretch=${synthesizer.conf.duration_stretch}`,
'--setf',
`int_f0_target_stddev=${synthesizer.conf.int_f0_target_stddev}`,
'-o',
file
])
/* istanbul ignore next */
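A usage sketch for this synthesizer, mirroring the way the Tts class further down in this diff calls save with an event emitter and a (file, duration) callback (the speech text below is a placeholder):

// Assuming: import events from 'events'
const em = new events.EventEmitter()
synthesizer.init('en-US') // Flite only handles en-US, per the warning above
synthesizer.save('Hello from Leon', em, (file, duration) => {
  // file: path of the generated WAV; duration: used by the Tts queue to time the next speech
})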

View File

@ -10,7 +10,7 @@ import string from '@/helpers/string'
log.title('Google Cloud TTS Synthesizer')
const synthesizer = { }
const synthesizer = {}
const voices = {
'en-US': {
languageCode: 'en-US',
@ -24,7 +24,7 @@ const voices = {
ssmlGender: 'MALE'
}
}
let client = { }
let client = {}
synthesizer.conf = {
voice: '',
@ -38,7 +38,10 @@ synthesizer.conf = {
* The env variable "GOOGLE_APPLICATION_CREDENTIALS" provides the JSON file path
*/
synthesizer.init = (lang) => {
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(process.cwd(), 'core/config/voice/google-cloud.json')
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(
process.cwd(),
'core/config/voice/google-cloud.json'
)
synthesizer.conf.voice = voices[lang]
try {

View File

@ -6,16 +6,11 @@ import log from '@/helpers/log'
import lang from '@/helpers/lang'
class Tts {
constructor (socket, provider) {
constructor(socket, provider) {
this.socket = socket
this.provider = provider
this.providers = [
'flite',
'google-cloud-tts',
'amazon-polly',
'watson-tts'
]
this.synthesizer = { }
this.providers = ['flite', 'google-cloud-tts', 'amazon-polly', 'watson-tts']
this.synthesizer = {}
this.em = new events.EventEmitter()
this.speeches = []
this.lang = 'en'
@ -27,23 +22,37 @@ class Tts {
/**
* Initialize the TTS provider
*/
init (newLang, cb) {
init(newLang, cb) {
log.info('Initializing TTS...')
this.lang = newLang || this.lang
if (!this.providers.includes(this.provider)) {
log.error(`The TTS provider "${this.provider}" does not exist or is not yet supported`)
log.error(
`The TTS provider "${this.provider}" does not exist or is not yet supported`
)
return false
}
/* istanbul ignore next */
if (this.provider === 'google-cloud-tts' && typeof process.env.GOOGLE_APPLICATION_CREDENTIALS === 'undefined') {
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(process.cwd(), 'core/config/voice/google-cloud.json')
} else if (typeof process.env.GOOGLE_APPLICATION_CREDENTIALS !== 'undefined'
&& process.env.GOOGLE_APPLICATION_CREDENTIALS.indexOf('google-cloud.json') === -1) {
log.warning(`The "GOOGLE_APPLICATION_CREDENTIALS" env variable is already settled with the following value: "${process.env.GOOGLE_APPLICATION_CREDENTIALS}"`)
if (
this.provider === 'google-cloud-tts' &&
typeof process.env.GOOGLE_APPLICATION_CREDENTIALS === 'undefined'
) {
process.env.GOOGLE_APPLICATION_CREDENTIALS = path.join(
process.cwd(),
'core/config/voice/google-cloud.json'
)
} else if (
typeof process.env.GOOGLE_APPLICATION_CREDENTIALS !== 'undefined' &&
process.env.GOOGLE_APPLICATION_CREDENTIALS.indexOf(
'google-cloud.json'
) === -1
) {
log.warning(
`The "GOOGLE_APPLICATION_CREDENTIALS" env variable is already settled with the following value: "${process.env.GOOGLE_APPLICATION_CREDENTIALS}"`
)
}
// Dynamically attribute the synthesizer
@ -64,20 +73,24 @@ class Tts {
* Forward buffer audio file and duration to the client
* and delete audio file once it has been forwarded
*/
forward (speech) {
forward(speech) {
this.synthesizer.default.save(speech.text, this.em, (file, duration) => {
/* istanbul ignore next */
const bitmap = fs.readFileSync(file)
/* istanbul ignore next */
this.socket.emit('audio-forwarded', {
buffer: Buffer.from(bitmap),
is_final_answer: speech.isFinalAnswer,
duration
}, (confirmation) => {
if (confirmation === 'audio-received') {
fs.unlinkSync(file)
this.socket.emit(
'audio-forwarded',
{
buffer: Buffer.from(bitmap),
is_final_answer: speech.isFinalAnswer,
duration
},
(confirmation) => {
if (confirmation === 'audio-received') {
fs.unlinkSync(file)
}
}
})
)
})
}
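For context, a sketch of the client-side counterpart of this exchange, assuming a Socket.IO client on the other end (this handler is not part of this file):

socket.on('audio-forwarded', ({ buffer, is_final_answer, duration }, acknowledge) => {
  // Play the received audio buffer, then acknowledge so the server deletes the temporary file
  acknowledge('audio-received')
})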
@ -85,7 +98,7 @@ class Tts {
* When the synthesizer saved a new audio file
* then shift the queue according to the audio file duration
*/
onSaved () {
onSaved() {
return new Promise((resolve) => {
this.em.on('saved', (duration) => {
setTimeout(() => {
@ -104,7 +117,7 @@ class Tts {
/**
* Add speeches to the queue
*/
add (text, isFinalAnswer) {
add(text, isFinalAnswer) {
/**
* Flite fix. When the string is only one word,
* Flite cannot save to a file. So we add a space at the end of the string

View File

@ -11,7 +11,7 @@ import string from '@/helpers/string'
log.title('Watson TTS Synthesizer')
const synthesizer = { }
const synthesizer = {}
const voices = {
'en-US': {
voice: 'en-US_MichaelV3Voice'
@ -20,7 +20,7 @@ const voices = {
voice: 'fr-FR_NicolasV3Voice'
}
}
let client = { }
let client = {}
synthesizer.conf = {
voice: '',
@ -31,7 +31,12 @@ synthesizer.conf = {
* Initialize Watson Text-to-Speech based on credentials in the JSON file
*/
synthesizer.init = (lang) => {
const config = JSON.parse(fs.readFileSync(path.join(process.cwd(), 'core/config/voice/watson-tts.json'), 'utf8'))
const config = JSON.parse(
fs.readFileSync(
path.join(process.cwd(), 'core/config/voice/watson-tts.json'),
'utf8'
)
)
synthesizer.conf.voice = voices[lang].voice
try {
@ -54,7 +59,8 @@ synthesizer.save = (speech, em, cb) => {
synthesizer.conf.text = speech
client.synthesize(synthesizer.conf)
client
.synthesize(synthesizer.conf)
.then(({ result }) => {
const wStream = fs.createWriteStream(file)

View File

@ -16,11 +16,7 @@
"What thematic would you like to play with?",
"What thematic do you choose?"
],
"suggestions": [
"Characters",
"Objects",
"Animals"
]
"suggestions": ["Characters", "Objects", "Animals"]
}
],
"entities": [
@ -54,13 +50,7 @@
"name": "answer"
}
},
"suggestions": [
"Yes",
"No",
"Don't know",
"Probably",
"Probably not"
],
"suggestions": ["Yes", "No", "Don't know", "Probably", "Probably not"],
"next_action": "retry"
},
"retry": {
@ -71,27 +61,18 @@
"name": "affirmation_denial"
}
},
"suggestions": [
"Yes",
"No thanks"
]
"suggestions": ["Yes", "No thanks"]
}
},
"resolvers": {
"answer": {
"intents": {
"yes": {
"utterance_samples": [
"[Yes|Yep|Yup|Yeah]",
"Sure",
"Correct"
],
"utterance_samples": ["[Yes|Yep|Yup|Yeah]", "Sure", "Correct"],
"value": "y"
},
"no": {
"utterance_samples": [
"[No|Nope|Nah]"
],
"utterance_samples": ["[No|Nope|Nah]"],
"value": "n"
},
"idk": {
@ -105,16 +86,11 @@
"value": "idk"
},
"probably": {
"utterance_samples": [
"Probably",
"Probably yes"
],
"utterance_samples": ["Probably", "Probably yes"],
"value": "p"
},
"probably_not": {
"utterance_samples": [
"Probably [no|not]"
],
"utterance_samples": ["Probably [no|not]"],
"value": "pn"
}
}
@ -142,10 +118,7 @@
"Do you want to go for another one?",
"Should we go for another try?"
],
"confirm_retry": [
"Gotcha!",
"Let's go for another try then!"
],
"confirm_retry": ["Gotcha!", "Let's go for another try then!"],
"deny_retry": [
"Got it, take care.",
"Let me know anytime you want I call Akinator."

View File

@ -28,24 +28,15 @@
"name": "affirmation_denial"
}
},
"suggestions": [
"Yes",
"No thanks"
]
"suggestions": ["Yes", "No thanks"]
}
},
"answers": {
"ready": [
"Alright, I'm ready! Go ahead and guess the number between 0 and 100!"
],
"bigger": [
"The number is bigger.",
"Try with a bigger number."
],
"smaller": [
"It is smaller.",
"Try a smaller number."
],
"bigger": ["The number is bigger.", "Try with a bigger number."],
"smaller": ["It is smaller.", "Try a smaller number."],
"guessed": [
"Congrats! The number was %nb% and you guessed in %attempts_nb% attempts. Ready for another round?"
],
@ -55,10 +46,6 @@
"Let me pick up a number...",
"Gotcha, I'm picking up a number..."
],
"stop": [
"Sure, as you wish.",
"You said it.",
"Let's stop here then!"
]
"stop": ["Sure, as you wish.", "You said it.", "Let's stop here then!"]
}
}

View File

@ -23,11 +23,7 @@
"name": "handsign"
}
},
"suggestions": [
"Rock ✊",
"Paper ✋",
"Scissors ✌"
],
"suggestions": ["Rock ✊", "Paper ✋", "Scissors ✌"],
"entities": [
{
"type": "enum",
@ -55,23 +51,13 @@
"name": "affirmation_denial"
}
},
"suggestions": [
"Yes",
"No thanks"
]
"suggestions": ["Yes", "No thanks"]
}
},
"answers": {
"ready": [
"Let's get started!"
],
"leon_emoji": [
"%leon_emoji%"
],
"equal": [
"No point.",
"It's a tie."
],
"ready": ["Let's get started!"],
"leon_emoji": ["%leon_emoji%"],
"equal": ["No point.", "It's a tie."],
"point_for_leon": [
"I got you. The %handsign_1% beats the %handsign_2%.",
"Yeaaah, I won! The %handsign_1% beats the %handsign_2%.",
@ -86,13 +72,7 @@
"Do you want a rematch?",
"Should we go for another round?"
],
"confirm_rematch": [
"Be ready!",
"I'm not gonna let you win."
],
"deny_rematch": [
"As you wish.",
"Let me know anytime you want to play."
]
"confirm_rematch": ["Be ready!", "I'm not gonna let you win."],
"deny_rematch": ["As you wish.", "Let me know anytime you want to play."]
}
}

View File

@ -28,9 +28,7 @@
"Why not another one?",
"Why?"
],
"answers": [
"Because blue and pink are beautiful. Look at my logo..."
]
"answers": ["Because blue and pink are beautiful. Look at my logo..."]
},
"color_hexadecimal": {
"type": "dialog",

Some files were not shown because too many files have changed in this diff.