Browsers limit how large a string can grow. In Chrome on a 64-bit machine, the limit is around 512MB, which explains why the 600MB file in #340 fails to load.
To work around this issue, we avoid making strings this large.
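For context, here's one way to observe the cap; this snippet is illustrative only and not part of the change, and the exact threshold varies by browser and version:

```ts
// Illustration only: repeatedly double a string until the engine
// refuses. In V8 this throws "RangeError: Invalid string length"
// somewhere around the ~512MB mark on 64-bit Chrome.
function demoMaxStringLength(): void {
  let s = 'x'
  try {
    for (;;) {
      s += s // double until allocation fails
    }
  } catch (e) {
    console.log(`string allocation failed past ${s.length} characters:`, e)
  }
}
```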
To do so, we need two core changes:
1. Instead of passing large strings to the various file-format importers, we introduce a new `TextFileContent` interface which exposes methods to get the lines in the file or its parsed JSON representation. In the line-splitting case, we assume that no single line exceeds the 512MB limit. (See the interface sketch after this list.)
2. We introduce a dependency on https://github.com/evanw/uint8array-json-parser to allow us to parse JSON files contained in `Uint8Array` objects. (A usage sketch also follows this list.)
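To make item 1 concrete, here's a minimal sketch of the shape such an interface might take; the actual method names and types in the code may differ:

```ts
// Hypothetical sketch of the TextFileContent shape; real names and
// types in the codebase may differ.
interface TextFileContent {
  // Yield the file line by line, so no single string ever needs to
  // hold the whole file. Assumes no individual line exceeds the
  // engine's string length limit.
  splitLines(): Iterable<string>

  // Parse the underlying bytes directly as JSON, again without
  // materializing the entire file as one string.
  parseAsJSON(): any
}
```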
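And for item 2, the library exposes a `JSON_parse` entry point that operates directly on the bytes; a minimal usage sketch:

```ts
import {JSON_parse} from 'uint8array-json-parser'

// Parse JSON straight out of the byte buffer; no intermediate string
// containing the full file contents is ever created.
function parseJSONFromBytes(bytes: Uint8Array): any {
  return JSON_parse(bytes)
}
```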
To ensure that this code doesn't rot without introducing 600MB test files or test-file generation into the repository, we also re-run a small set of tests with a mocked maximum string size of 100 bytes. Test coverage confirms that the chunked string representation code is exercised.
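As a rough sketch of how such a limit can be mocked in Jest (the module path and export name here are hypothetical; the real test setup may differ):

```ts
// Hypothetical module path and export name, for illustration only.
jest.mock('../utils/string-limits', () => ({
  // Pretend the engine tops out at 100 bytes so the chunked
  // TextFileContent code path runs even on small test fixtures.
  MAX_STRING_LENGTH: 100,
}))
```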
Fixes #340
While it was kind of nice having everything at the top level, the number of files has grown unwieldy and hard to navigate, so I took a stab at organizing them into directories without introducing too much nesting.
Test Plan:
- Ran `npm run serve` to ensure that local builds still work
- Ran `npm run prepack` then `open dist/release/index.html` to ensure that release builds still work
- Ran `scripts/deploy.sh` to ensure that the deployed version of the site will still work when I eventually redeploy
- Ran `npm run jest` to ensure that the tests still pass