## Create a new issue if a page doesn't exist
```sh
# create a new issue if not found (requires the hub CLI client)
# checks brew and cargo for homepage + description
function tld {
  tldr $* 2>/dev/null && return
  local repo=~/code/tldr # replace with the location of your tldr repo
  local info=("${(@f)$(brew info --json=v1 $1 2>/dev/null | jq -r '.[].homepage,.[].desc')}")
  test $#info -gt 1 || info=("${(@f)$(cargo show $1 2>/dev/null | awk '/^homepage|description/ { $1=""; print }')}")
  test $#info -gt 1 || return
  hub -C $repo issue | grep $1 && return
  hub -C $repo issue create -F <(echo "page request: $1\n\nAdd documentation for [$1]($info[1])\n$info[2]")
}
```
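A hypothetical usage example (the tool name is made up; this assumes the function lives in your `.zshrc`, since it uses zsh-specific array expansion, and that `hub` is authenticated):

```sh
# Shows the page if tldr has one; otherwise looks up the homepage and
# description via brew/cargo and files a "page request" issue in $repo.
tld some-new-tool
```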
## Find outdated and missing pages
A website with a table listing all tldr pages and the state of their translations.
- A green checkmark means the translation for that language is complete.
- A yellow warning sign marks a translation that may be outdated (the number of entries in the English page and the translated page differ).
- A red cross means there is no translation for that page yet.
The table is updated daily by the TldrProgress tool. You can report issues and open pull requests at the tool's repository.
Hint: To find a page faster, use your browser's search (press Ctrl + F).
https://lukwebsforge.github.io/tldri18n/
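The yellow marker boils down to comparing entry counts between the two pages. A minimal sketch of that check in shell (the page and language here are arbitrary examples; the real tool is more thorough):

```sh
# Compare the number of example entries (lines starting with "- ")
# between an English page and its Italian translation; a mismatch
# suggests the translation may be outdated.
page="common/tar.md"
en=$(grep -c '^- ' "pages/$page")
it=$(grep -c '^- ' "pages.it/$page")
[ "$en" -eq "$it" ] || echo "$page: $en entries in English, $it in Italian"
```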
## Find pages that don't exist
A Bash shell script that finds pages that haven't been created yet can be found here. It has 2 modes:
- `history` - Searches your `~/.bash_history`
- `man` - Searches the installed man pages
For `apt`-based Linux systems, this apt repository contains it as the `tldr-missing-pages` package.
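As a rough illustration of the `history` mode (this is a sketch of the idea, not the actual script; it assumes it is run from the root of a cloned tldr repository):

```sh
# List commands from your Bash history that have no tldr page yet.
awk 'NF { print $1 }' ~/.bash_history | sort -u | while read -r cmd; do
  # A page may live under any platform directory (common, linux, ...).
  ls pages/*/"$cmd".md >/dev/null 2>&1 || echo "$cmd"
done
```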
A fork that extends this functionality can be found here. In addition to the modes listed above, it adds two more:
- `zhistory` - Searches your `~/.zsh_history`
- `commands` - Searches all commands that can be executed on the system
Yet another project exists at https://gist.github.com/iTrooz/8a164ea2821fb8b1f017c207ee3627dc. It is a Python script that needs to be executed in the root of this repository once it has been cloned. It supports Bash and Zsh, and sorts the commands missing from the tldr database so that the ones you use most often (and therefore know best) are shown first.
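The gist itself does more, but its frequency-sorting idea can be sketched along the same lines (again assuming the root of a cloned tldr repository):

```sh
# Count how often each command appears in your history, then keep only
# commands without a tldr page, most-used first.
awk 'NF { print $1 }' ~/.bash_history | sort | uniq -c | sort -rn |
  while read -r count cmd; do
    ls pages/*/"$cmd".md >/dev/null 2>&1 || echo "$count $cmd"
  done
```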
## Translation helper: find next page to translate
A Python script that finds the next page (in alphabetical order) that needs to be translated from English into the chosen target language, copies it into the correct folder, and optionally opens it in the default text editor (i.e. the `editor` command). To use it, simply download and run the script inside the `tldr` repository after cloning it.
Script available here: tldr_translate_next_page.py.
```
usage: ./tldr_translate_next_page.py [-h] [-c] LANGUAGE

positional arguments:
  LANGUAGE         target language (e.g. it)

optional arguments:
  -h, --help       show this help message and exit
  -c, --copy-only  only copy the file, without opening it in the text editor
```
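The script's core idea can be sketched in a few lines of shell (this is not the actual script; it just relies on the standard tldr directory layout):

```sh
# Find the first English page (alphabetical order) with no counterpart
# in the target language, and copy it over as a translation template.
lang="it"
for page in pages/common/*.md; do
  target="pages.$lang/common/$(basename "$page")"
  if [ ! -f "$target" ]; then
    cp "$page" "$target"
    echo "Next page to translate: $target"
    break
  fi
done
```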
## Page creation tool helper
A simple Go program that helps with creating tldr pages by fixing some of the syntax for you as you write pages interactively.
Installation instructions and the code can be found here.
## Detect broken "More information" links
"More information" URLs can be subject to link rot, so it is worth checking from time to time that all the links in all tldr pages are still valid. Given that tldr-pages now hosts thousands of pages in dozens of languages, it is natural to automate this process.
The following Bash one-liner will check all the links in all pages in the current directory:
```sh
find . -type f -iname '*.md' -print0 | xargs -0 cat | awk '/> More information/ { match($0, /<(.*)>/, arr); print(arr[1]); }' | sort | uniq | shuf | xargs -n1 -I{} bash -c 'url="{}"; code="$(curl --user-agent "curl; bash; xargs; tldr-pages-bad-url-checker (+https://github.com/tldr-pages/tldr; implemented by @sbrl)" -sSL -o /dev/null -w "%{http_code}" --fail -I "${url}")"; echo "${code} ${url}" >&2; if [[ "${code}" -lt 200 ]] || [[ "${code}" -ge 400 ]]; then echo "${url}"; fi' >/tmp/bad-urls.txt;
```
Ref issue #5116, where this one-liner was first drafted.
Once a list of dead links is acquired, it can be sorted through, and a pull request can be opened to replace the dead links with new ones.