;regen manuals

This commit is contained in:
Simon Michael 2020-06-22 12:20:14 -07:00
parent c8a84e3c96
commit 368297102d
7 changed files with 995 additions and 761 deletions

View File

@ -610,7 +610,7 @@ separator TAB
.fi
.PP
See also: File Extension.
.SS \f[C]if\f[R]
.SS \f[C]if\f[R] block
.IP
.nf
\f[C]
@ -702,6 +702,76 @@ banking thru software
comment XXX deductible ? check it
\f[R]
.fi
.SS \f[C]if\f[R] table
.IP
.nf
\f[C]
if,CSVFIELDNAME1,CSVFIELDNAME2,...,CSVFIELDNAMEn
MATCHER1,VALUE11,VALUE12,...,VALUE1n
MATCHER2,VALUE21,VALUE22,...,VALUE2n
MATCHER3,VALUE31,VALUE32,...,VALUE3n
<empty line>
\f[R]
.fi
.PP
Conditional tables (\[dq]if tables\[dq]) are an alternative syntax for
specifying field assignments that will be applied only to CSV records
matching certain patterns.
.PP
MATCHER can be either a field matcher or a record matcher, as described
above.
When MATCHER matches, the values from that row are assigned to the CSV
fields named on the \f[C]if\f[R] line, in the same order.
.PP
An \f[C]if\f[R] table is therefore exactly equivalent to a sequence of
\f[C]if\f[R] blocks:
.IP
.nf
\f[C]
if MATCHER1
CSVFIELDNAME1 VALUE11
CSVFIELDNAME2 VALUE12
...
CSVFIELDNAMEn VALUE1n
if MATCHER2
CSVFIELDNAME1 VALUE21
CSVFIELDNAME2 VALUE22
...
CSVFIELDNAMEn VALUE2n
if MATCHER3
CSVFIELDNAME1 VALUE31
CSVFIELDNAME2 VALUE32
...
CSVFIELDNAMEn VALUE3n
\f[R]
.fi
.PP
Each line starting with MATCHER should contain enough (possibly empty)
values for all the listed fields.
.PP
Rules are checked and applied in the order they are listed in the
table and, as with \f[C]if\f[R] blocks, later rules (in the same or
another table) or \f[C]if\f[R] blocks can override the effect of any
earlier rule.
.PP
Instead of \[aq],\[aq] you can use a variety of other non-alphanumeric
characters as the separator.
The first character after \f[C]if\f[R] is taken to be the separator for
the rest of the table.
It is the responsibility of the user to ensure that the separator does
not occur inside MATCHERs and values - there is no way to escape the
separator.
.PP
Example:
.IP
.nf
\f[C]
if,account2,comment
atm transaction fee,expenses:business:banking,deductible? check it
%description groceries,expenses:groceries,
2020/01/12.*Plumbing LLC,expenses:house:upkeep,emergency plumbing call-out
\f[R]
.fi
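.PP
For example, the same table could also be written with \f[C]|\f[R] as
the separator (other non-alphanumeric characters work the same way, as
long as the chosen one does not appear in the matchers or values):
.IP
.nf
\f[C]
if|account2|comment
atm transaction fee|expenses:business:banking|deductible? check it
%description groceries|expenses:groceries|
\f[R]
.fi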
.SS \f[C]end\f[R]
.PP
This rule can be used inside if blocks (only), to make hledger stop

View File

@ -374,7 +374,8 @@ Blank lines and lines beginning with '#' or ';' are ignored.
* fields::
* field assignment::
* separator::
* if::
* if block::
* if table::
* end::
* date-format::
* newest-first::
@ -567,7 +568,7 @@ becomes '1' when interpolated) (#1051). See TIPS below for more about
referencing other fields.

File: hledger_csv.info, Node: separator, Next: if, Prev: field assignment, Up: CSV RULES
File: hledger_csv.info, Node: separator, Next: if block, Prev: field assignment, Up: CSV RULES
2.4 'separator'
===============
@ -587,10 +588,10 @@ separator TAB
See also: File Extension.

File: hledger_csv.info, Node: if, Next: end, Prev: separator, Up: CSV RULES
File: hledger_csv.info, Node: if block, Next: if table, Prev: separator, Up: CSV RULES
2.5 'if'
========
2.5 'if' block
==============
if MATCHER
RULE
@ -659,9 +660,70 @@ banking thru software
comment XXX deductible ? check it

File: hledger_csv.info, Node: end, Next: date-format, Prev: if, Up: CSV RULES
File: hledger_csv.info, Node: if table, Next: end, Prev: if block, Up: CSV RULES
2.6 'end'
2.6 'if' table
==============
if,CSVFIELDNAME1,CSVFIELDNAME2,...,CSVFIELDNAMEn
MATCHER1,VALUE11,VALUE12,...,VALUE1n
MATCHER2,VALUE21,VALUE22,...,VALUE2n
MATCHER3,VALUE31,VALUE32,...,VALUE3n
<empty line>
Conditional tables ("if tables") are a different syntax to specify
field assignments that will be applied only to CSV records which match
certain patterns.
MATCHER could be either field or record matcher, as described above.
When MATCHER matches, values from that row would be assigned to the CSV
fields named on the 'if' line, in the same order.
Therefore 'if' table is exactly equivalent to a sequence of of 'if'
blocks:
if MATCHER1
CSVFIELDNAME1 VALUE11
CSVFIELDNAME2 VALUE12
...
CSVFIELDNAMEn VALUE1n
if MATCHER2
CSVFIELDNAME1 VALUE21
CSVFIELDNAME2 VALUE22
...
CSVFIELDNAMEn VALUE2n
if MATCHER3
CSVFIELDNAME1 VALUE31
CSVFIELDNAME2 VALUE32
...
CSVFIELDNAMEn VALUE3n
Each line starting with MATCHER should contain enough (possibly
empty) values for all the listed fields.
Rules are checked and applied in the order they are listed in the
table and, as with 'if' blocks, later rules (in the same or another
table) or 'if' blocks can override the effect of any earlier rule.
Instead of ',' you can use a variety of other non-alphanumeric
characters as the separator. The first character after 'if' is taken
to be the separator for the rest of the table. It is the
responsibility of the user to ensure that the separator does not occur
inside MATCHERs and values - there is no way to escape the separator.
Example:
if,account2,comment
atm transaction fee,expenses:business:banking,deductible? check it
%description groceries,expenses:groceries,
2020/01/12.*Plumbing LLC,expenses:house:upkeep,emergency plumbing call-out
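For example, the same table could also be written with '|' as the
separator (other non-alphanumeric characters work the same way, as
long as the chosen one does not appear in the matchers or values):
if|account2|comment
atm transaction fee|expenses:business:banking|deductible? check it
%description groceries|expenses:groceries|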

File: hledger_csv.info, Node: end, Next: date-format, Prev: if table, Up: CSV RULES
2.7 'end'
=========
This rule can be used inside if blocks (only), to make hledger stop
@ -675,7 +737,7 @@ if ,,,,

File: hledger_csv.info, Node: date-format, Next: newest-first, Prev: end, Up: CSV RULES
2.7 'date-format'
2.8 'date-format'
=================
date-format DATEFMT
@ -706,7 +768,7 @@ https://hackage.haskell.org/package/time/docs/Data-Time-Format.html#v:formatTime

File: hledger_csv.info, Node: newest-first, Next: include, Prev: date-format, Up: CSV RULES
2.8 'newest-first'
2.9 'newest-first'
==================
hledger always sorts the generated transactions by date. Transactions
@ -728,8 +790,8 @@ newest-first

File: hledger_csv.info, Node: include, Next: balance-type, Prev: newest-first, Up: CSV RULES
2.9 'include'
=============
2.10 'include'
==============
include RULESFILE
@ -751,7 +813,7 @@ include categorisation.rules

File: hledger_csv.info, Node: balance-type, Prev: include, Up: CSV RULES
2.10 'balance-type'
2.11 'balance-type'
===================
Balance assertions generated by assigning to balanceN are of the simple
@ -1048,62 +1110,64 @@ Node: Paypal6570
Ref: #paypal6664
Node: CSV RULES14308
Ref: #csv-rules14417
Node: skip14693
Ref: #skip14786
Node: fields15161
Ref: #fields15283
Node: Transaction field names16448
Ref: #transaction-field-names16608
Node: Posting field names16719
Ref: #posting-field-names16871
Node: account16941
Ref: #account17057
Node: amount17594
Ref: #amount17725
Node: currency18832
Ref: #currency18967
Node: balance19173
Ref: #balance19307
Node: comment19624
Ref: #comment19741
Node: field assignment19904
Ref: #field-assignment20047
Node: separator20865
Ref: #separator20994
Node: if21405
Ref: #if21507
Node: end23663
Ref: #end23769
Node: date-format23993
Ref: #date-format24125
Node: newest-first24874
Ref: #newest-first25012
Node: include25695
Ref: #include25824
Node: balance-type26268
Ref: #balance-type26388
Node: TIPS27088
Ref: #tips27170
Node: Rapid feedback27426
Ref: #rapid-feedback27543
Node: Valid CSV28003
Ref: #valid-csv28133
Node: File Extension28325
Ref: #file-extension28477
Node: Reading multiple CSV files28887
Ref: #reading-multiple-csv-files29072
Node: Valid transactions29313
Ref: #valid-transactions29491
Node: Deduplicating importing30119
Ref: #deduplicating-importing30298
Node: Setting amounts31331
Ref: #setting-amounts31500
Node: Setting currency/commodity32487
Ref: #setting-currencycommodity32679
Node: Referencing other fields33482
Ref: #referencing-other-fields33682
Node: How CSV rules are evaluated34579
Ref: #how-csv-rules-are-evaluated34752
Node: skip14712
Ref: #skip14805
Node: fields15180
Ref: #fields15302
Node: Transaction field names16467
Ref: #transaction-field-names16627
Node: Posting field names16738
Ref: #posting-field-names16890
Node: account16960
Ref: #account17076
Node: amount17613
Ref: #amount17744
Node: currency18851
Ref: #currency18986
Node: balance19192
Ref: #balance19326
Node: comment19643
Ref: #comment19760
Node: field assignment19923
Ref: #field-assignment20066
Node: separator20884
Ref: #separator21019
Node: if block21430
Ref: #if-block21555
Node: if table23711
Ref: #if-table23830
Node: end25568
Ref: #end25680
Node: date-format25904
Ref: #date-format26036
Node: newest-first26785
Ref: #newest-first26923
Node: include27606
Ref: #include27737
Node: balance-type28181
Ref: #balance-type28301
Node: TIPS29001
Ref: #tips29083
Node: Rapid feedback29339
Ref: #rapid-feedback29456
Node: Valid CSV29916
Ref: #valid-csv30046
Node: File Extension30238
Ref: #file-extension30390
Node: Reading multiple CSV files30800
Ref: #reading-multiple-csv-files30985
Node: Valid transactions31226
Ref: #valid-transactions31404
Node: Deduplicating importing32032
Ref: #deduplicating-importing32211
Node: Setting amounts33244
Ref: #setting-amounts33413
Node: Setting currency/commodity34400
Ref: #setting-currencycommodity34592
Node: Referencing other fields35395
Ref: #referencing-other-fields35595
Node: How CSV rules are evaluated36492
Ref: #how-csv-rules-are-evaluated36665

End Tag Table

View File

@ -467,7 +467,7 @@ CSV RULES
See also: File Extension.
if
if block
if MATCHER
RULE
@ -535,8 +535,63 @@ CSV RULES
account2 expenses:business:banking
comment XXX deductible ? check it
if table
if,CSVFIELDNAME1,CSVFIELDNAME2,...,CSVFIELDNAMEn
MATCHER1,VALUE11,VALUE12,...,VALUE1n
MATCHER2,VALUE21,VALUE22,...,VALUE2n
MATCHER3,VALUE31,VALUE32,...,VALUE3n
<empty line>
Conditional tables ("if tables") are an alternative syntax for
specifying field assignments that will be applied only to CSV records
matching certain patterns.
MATCHER can be either a field matcher or a record matcher, as described
above. When MATCHER matches, the values from that row are assigned to
the CSV fields named on the if line, in the same order.
An if table is therefore exactly equivalent to a sequence of if blocks:
if MATCHER1
CSVFIELDNAME1 VALUE11
CSVFIELDNAME2 VALUE12
...
CSVFIELDNAMEn VALUE1n
if MATCHER2
CSVFIELDNAME1 VALUE21
CSVFIELDNAME2 VALUE22
...
CSVFIELDNAMEn VALUE2n
if MATCHER3
CSVFIELDNAME1 VALUE31
CSVFIELDNAME2 VALUE32
...
CSVFIELDNAMEn VALUE3n
Each line starting with MATCHER should contain enough (possibly empty)
values for all the listed fields.
Rules are checked and applied in the order they are listed in the
table and, as with if blocks, later rules (in the same or another
table) or if blocks can override the effect of any earlier rule.
Instead of ',' you can use a variety of other non-alphanumeric
characters as the separator. The first character after if is taken to
be the separator for the rest of the table. It is the responsibility
of the user to ensure that the separator does not occur inside
MATCHERs and values - there is no way to escape the separator.
Example:
if,account2,comment
atm transaction fee,expenses:business:banking,deductible? check it
%description groceries,expenses:groceries,
2020/01/12.*Plumbing LLC,expenses:house:upkeep,emergency plumbing call-out
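For example, the same table could also be written with '|' as the
separator (other non-alphanumeric characters work the same way, as
long as the chosen one does not appear in the matchers or values):
if|account2|comment
atm transaction fee|expenses:business:banking|deductible? check it
%description groceries|expenses:groceries|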
end
This rule can be used inside if blocks (only), to make hledger stop
This rule can be used inside if blocks (only), to make hledger stop
reading this CSV file and move on to the next input file, or to command
execution. Eg:
@ -547,10 +602,10 @@ CSV RULES
date-format
date-format DATEFMT
This is a helper for the date (and date2) fields. If your CSV dates
are not formatted like YYYY-MM-DD, YYYY/MM/DD or YYYY.MM.DD, you'll
need to add a date-format rule describing them with a strptime date
parsing pattern, which must parse the CSV date value completely. Some
This is a helper for the date (and date2) fields. If your CSV dates
are not formatted like YYYY-MM-DD, YYYY/MM/DD or YYYY.MM.DD, you'll
need to add a date-format rule describing them with a strptime date
parsing pattern, which must parse the CSV date value completely. Some
examples:
# MM/DD/YY
@ -572,15 +627,15 @@ CSV RULES
mat.html#v:formatTime
newest-first
hledger always sorts the generated transactions by date. Transactions
on the same date should appear in the same order as their CSV records,
as hledger can usually auto-detect whether the CSV's normal order is
hledger always sorts the generated transactions by date. Transactions
on the same date should appear in the same order as their CSV records,
as hledger can usually auto-detect whether the CSV's normal order is
oldest first or newest first. But if all of the following are true:
o the CSV might sometimes contain just one day of data (all records
o the CSV might sometimes contain just one day of data (all records
having the same date)
o the CSV records are normally in reverse chronological order (newest
o the CSV records are normally in reverse chronological order (newest
at the top)
o and you care about preserving the order of same-day transactions
@ -593,9 +648,9 @@ CSV RULES
include
include RULESFILE
This includes the contents of another CSV rules file at this point.
RULESFILE is an absolute file path or a path relative to the current
file's directory. This can be useful for sharing common rules between
This includes the contents of another CSV rules file at this point.
RULESFILE is an absolute file path or a path relative to the current
file's directory. This can be useful for sharing common rules between
several rules files, eg:
# someaccount.csv.rules
@ -610,10 +665,10 @@ CSV RULES
balance-type
Balance assertions generated by assigning to balanceN are of the simple
= type by default, which is a single-commodity, subaccount-excluding
= type by default, which is a single-commodity, subaccount-excluding
assertion. You may find the subaccount-including variants more useful,
eg if you have created some virtual subaccounts of checking to help
with budgeting. You can select a different type of assertion with the
eg if you have created some virtual subaccounts of checking to help
with budgeting. You can select a different type of assertion with the
balance-type rule:
# balance assertions will consider all commodities and all subaccounts
@ -628,19 +683,19 @@ CSV RULES
TIPS
Rapid feedback
It's a good idea to get rapid feedback while creating/troubleshooting
It's a good idea to get rapid feedback while creating/troubleshooting
CSV rules. Here's a good way, using entr from http://eradman.com/entr-
project :
$ ls foo.csv* | entr bash -c 'echo ----; hledger -f foo.csv print desc:SOMEDESC'
A desc: query (eg) is used to select just one, or a few, transactions
of interest. "bash -c" is used to run multiple commands, so we can
echo a separator each time the command re-runs, making it easier to
A desc: query (eg) is used to select just one, or a few, transactions
of interest. "bash -c" is used to run multiple commands, so we can
echo a separator each time the command re-runs, making it easier to
read the output.
Valid CSV
hledger accepts CSV conforming to RFC 4180. When CSV values are en-
hledger accepts CSV conforming to RFC 4180. When CSV values are en-
closed in quotes, note:
o they must be double quotes (not single quotes)
@ -648,9 +703,9 @@ TIPS
o spaces outside the quotes are not allowed
File Extension
CSV ("Character Separated Values") files should be named with one of
these filename extensions: .csv, .ssv, .tsv. Or, the file path should
be prefixed with one of csv:, ssv:, tsv:. This helps hledger identify
CSV ("Character Separated Values") files should be named with one of
these filename extensions: .csv, .ssv, .tsv. Or, the file path should
be prefixed with one of csv:, ssv:, tsv:. This helps hledger identify
the format and show the right error messages. For example:
$ hledger -f foo.ssv print
@ -662,44 +717,44 @@ TIPS
More about this: Input files in the hledger manual.
Reading multiple CSV files
If you use multiple -f options to read multiple CSV files at once,
hledger will look for a correspondingly-named rules file for each CSV
file. But if you use the --rules-file option, that rules file will be
If you use multiple -f options to read multiple CSV files at once,
hledger will look for a correspondingly-named rules file for each CSV
file. But if you use the --rules-file option, that rules file will be
used for all the CSV files.
Valid transactions
After reading a CSV file, hledger post-processes and validates the gen-
erated journal entries as it would for a journal file - balancing them,
applying balance assignments, and canonicalising amount styles. Any
errors at this stage will be reported in the usual way, displaying the
applying balance assignments, and canonicalising amount styles. Any
errors at this stage will be reported in the usual way, displaying the
problem entry.
There is one exception: balance assertions, if you have generated them,
will not be checked, since normally these will work only when the CSV
data is part of the main journal. If you do need to check balance as-
will not be checked, since normally these will work only when the CSV
data is part of the main journal. If you do need to check balance as-
sertions generated from CSV right away, pipe into another hledger:
$ hledger -f file.csv print | hledger -f- print
Deduplicating, importing
When you download a CSV file periodically, eg to get your latest bank
transactions, the new file may overlap with the old one, containing
When you download a CSV file periodically, eg to get your latest bank
transactions, the new file may overlap with the old one, containing
some of the same records.
The import command will (a) detect the new transactions, and (b) append
just those transactions to your main journal. It is idempotent, so you
don't have to remember how many times you ran it or with which version
of the CSV. (It keeps state in a hidden .latest.FILE.csv file.) This
don't have to remember how many times you ran it or with which version
of the CSV. (It keeps state in a hidden .latest.FILE.csv file.) This
is the easiest way to import CSV data. Eg:
# download the latest CSV files, then run this command.
# Note, no -f flags needed here.
$ hledger import *.csv [--dry]
This method works for most CSV files. (Where records have a stable
This method works for most CSV files. (Where records have a stable
chronological order, and new records appear only at the new end.)
A number of other tools and workflows, hledger-specific and otherwise,
A number of other tools and workflows, hledger-specific and otherwise,
exist for converting, deduplicating, classifying and managing CSV data.
See:
@ -710,43 +765,43 @@ TIPS
Setting amounts
A posting amount can be set in one of these ways:
o by assigning (with a fields list or field assignment) to amountN
o by assigning (with a fields list or field assignment) to amountN
(posting N's amount) or amount (posting 1's amount)
o by assigning to amountN-in and amountN-out (or amount-in and amount-
out). For each CSV record, whichever of these has a non-zero value
will be used, with appropriate sign. If both contain a non-zero
o by assigning to amountN-in and amountN-out (or amount-in and amount-
out). For each CSV record, whichever of these has a non-zero value
will be used, with appropriate sign. If both contain a non-zero
value, this may not work.
o by assigning to balanceN (or balance) instead of the above, setting
the amount indirectly via a balance assignment. If you do this the
o by assigning to balanceN (or balance) instead of the above, setting
the amount indirectly via a balance assignment. If you do this the
default account name may be wrong, so you should set that explicitly.
There is some special handling for an amount's sign:
o If an amount value is parenthesised, it will be de-parenthesised and
o If an amount value is parenthesised, it will be de-parenthesised and
sign-flipped.
o If an amount value begins with a double minus sign, those cancel out
o If an amount value begins with a double minus sign, those cancel out
and are removed.
o If an amount value begins with a plus sign, that will be removed
Setting currency/commodity
If the currency/commodity symbol is included in the CSV's amount
If the currency/commodity symbol is included in the CSV's amount
field(s), you don't have to do anything special.
If the currency is provided as a separate CSV field, you can either:
o assign that to currency, which adds it to all posting amounts. The
symbol will be prepended to the amount quantity (on the left side). If
you write a trailing space after the symbol, there will be a space
between symbol and amount (an exception to the usual whitespace
o assign that to currency, which adds it to all posting amounts. The
symbol will be prepended to the amount quantity (on the left side). If
you write a trailing space after the symbol, there will be a space
between symbol and amount (an exception to the usual whitespace
stripping).
o or assign it to currencyN which adds it to posting N's amount only.
o or for more control, construct the amount from symbol and quantity
o or for more control, construct the amount from symbol and quantity
using field assignment, eg:
fields date,description,currency,quantity
@ -754,9 +809,9 @@ TIPS
amount %quantity %currency
Referencing other fields
In field assignments, you can interpolate only CSV fields, not hledger
fields. In the example below, there's both a CSV field and a hledger
field named amount1, but %amount1 always means the CSV field, not the
In field assignments, you can interpolate only CSV fields, not hledger
fields. In the example below, there's both a CSV field and a hledger
field named amount1, but %amount1 always means the CSV field, not the
hledger field:
# Name the third CSV field "amount1"
@ -768,7 +823,7 @@ TIPS
# Set comment to the CSV amount1 (not the amount1 assigned above)
comment %amount1
Here, since there's no CSV amount1 field, %amount1 will produce a lit-
Here, since there's no CSV amount1 field, %amount1 will produce a lit-
eral "amount1":
fields date,description,csvamount
@ -776,7 +831,7 @@ TIPS
# Can't interpolate amount1 here
comment %amount1
When there are multiple field assignments to the same hledger field,
When there are multiple field assignments to the same hledger field,
only the last one takes effect. Here, comment's value will be B, or
C if "something" is matched, but never A:
@ -786,14 +841,14 @@ TIPS
comment C
How CSV rules are evaluated
Here's how to think of CSV rules being evaluated (if you really need
Here's how to think of CSV rules being evaluated (if you really need
to). First,
o include - all includes are inlined, from top to bottom, depth first.
(At each include point the file is inlined and scanned for further
o include - all includes are inlined, from top to bottom, depth first.
(At each include point the file is inlined and scanned for further
includes, recursively, before proceeding.)
Then "global" rules are evaluated, top to bottom. If a rule is re-
Then "global" rules are evaluated, top to bottom. If a rule is re-
peated, the last one wins:
o skip (at top level)
@ -807,30 +862,30 @@ TIPS
Then for each CSV record in turn:
o test all if blocks. If any of them contain an end rule, skip all re-
maining CSV records. Otherwise if any of them contain a skip rule,
skip that many CSV records. If there are multiple matched skip
o test all if blocks. If any of them contain an end rule, skip all re-
maining CSV records. Otherwise if any of them contain a skip rule,
skip that many CSV records. If there are multiple matched skip
rules, the first one wins.
o collect all field assignments at top level and in matched if blocks.
When there are multiple assignments for a field, keep only the last
o collect all field assignments at top level and in matched if blocks.
When there are multiple assignments for a field, keep only the last
one.
o compute a value for each hledger field - either the one that was as-
o compute a value for each hledger field - either the one that was as-
signed to it (and interpolate the %CSVFIELDNAME references), or a de-
fault
o generate a synthetic hledger transaction from these values.
This is all part of the CSV reader, one of several readers hledger can
use to parse input files. When all files have been read successfully,
the transactions are passed as input to whichever hledger command the
This is all part of the CSV reader, one of several readers hledger can
use to parse input files. When all files have been read successfully,
the transactions are passed as input to whichever hledger command the
user specified.
REPORTING BUGS
Report bugs at http://bugs.hledger.org (or on the #hledger IRC channel
Report bugs at http://bugs.hledger.org (or on the #hledger IRC channel
or hledger mail list)
@ -844,7 +899,7 @@ COPYRIGHT
SEE ALSO
hledger(1), hledger-ui(1), hledger-web(1), hledger-api(1),
hledger(1), hledger-ui(1), hledger-web(1), hledger-api(1),
hledger_csv(5), hledger_journal(5), hledger_timeclock(5), hledger_time-
dot(5), ledger(1)

View File

@ -72,7 +72,7 @@ reordered. See also the import command.
This command also supports the output destination and output format
options. The output formats supported are txt, csv, and (experimental)
json.
json and sql.
Here's an example of print's CSV output:

View File

@ -1070,7 +1070,8 @@ $ hledger print -o - # write to stdout (the default)
Some commands (print, register, the balance commands) offer a choice of
output format.
In addition to the usual plain text format (\f[C]txt\f[R]), there are
CSV (\f[C]csv\f[R]), HTML (\f[C]html\f[R]) and JSON (\f[C]json\f[R]).
CSV (\f[C]csv\f[R]), HTML (\f[C]html\f[R]), JSON (\f[C]json\f[R]) and
SQL (\f[C]sql\f[R]).
This is controlled by the \f[C]-O/--output-format\f[R] option:
.IP
.nf
@ -1119,6 +1120,20 @@ your control.
We hope this approach will not cause problems in practice; if you find
otherwise, please let us know.
(Cf #1195)
.PP
Notes about SQL output:
.IP \[bu] 2
SQL output is also marked experimental, and like JSON output it could
use real-world feedback.
.IP \[bu] 2
SQL output is expected to work with SQLite, MySQL and PostgreSQL.
.IP \[bu] 2
SQL output is structured with the expectation that statements will be
executed against an empty database.
If you already have tables created by hledger\[aq]s SQL output, you
will probably want to either clear the existing data (with
\f[C]delete\f[R] or \f[C]truncate\f[R] SQL statements) or drop the
tables completely, otherwise your postings will be duplicated.
See the example below.
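.PP
For example, assuming the \f[C]sqlite3\f[R] command line tool is
available, the output could be loaded into a new SQLite database file
(here named \f[C]hledger.db\f[R]) like so:
.IP
.nf
\f[C]
$ hledger print -O sql | sqlite3 hledger.db
\f[R]
.fi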
.SS Regular expressions
.PP
hledger uses regular expressions in a number of places:
@ -3763,7 +3778,7 @@ See also the import command.
.PP
This command also supports the output destination and output format
options. The output formats supported are \f[C]txt\f[R], \f[C]csv\f[R],
and (experimental) \f[C]json\f[R].
and (experimental) \f[C]json\f[R] and \f[C]sql\f[R].
.PP
Here\[aq]s an example of print\[aq]s CSV output:
.IP

View File

@ -1007,8 +1007,8 @@ File: hledger.info, Node: Output format, Next: Regular expressions, Prev: Out
Some commands (print, register, the balance commands) offer a choice of
output format. In addition to the usual plain text format ('txt'),
there are CSV ('csv'), HTML ('html') and JSON ('json'). This is
controlled by the '-O/--output-format' option:
there are CSV ('csv'), HTML ('html'), JSON ('json') and SQL ('sql').
This is controlled by the '-O/--output-format' option:
$ hledger print -O csv
@ -1039,6 +1039,20 @@ $ hledger balancesheet -o foo.txt -O html # write HTML to foo.txt
your control. We hope this approach will not cause problems in
practice; if you find otherwise, please let us know. (Cf #1195)
Notes about SQL output:
* SQL output is also marked experimental, and like JSON output it
could use real-world feedback.
* SQL output is expected to work with SQLite, MySQL and PostgreSQL.
* SQL output is structured with the expectation that statements will
be executed against an empty database. If you already have tables
created by hledger's SQL output, you will probably want to either
clear the existing data (with 'delete' or 'truncate' SQL
statements) or drop the tables completely, otherwise your postings
will be duplicated. See the example below.
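For example, assuming the 'sqlite3' command line tool is available,
the output could be loaded into a new SQLite database file (here named
'hledger.db') like so:

$ hledger print -O sql | sqlite3 hledger.db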

File: hledger.info, Node: Regular expressions, Next: Smart dates, Prev: Output format, Up: OPTIONS
@ -3192,7 +3206,7 @@ reordered. See also the import command.
This command also supports the output destination and output format
options. The output formats supported are 'txt', 'csv', and
(experimental) 'json'.
(experimental) 'json' and 'sql'.
Here's an example of print's CSV output:
@ -3902,153 +3916,153 @@ Node: Output destination32183
Ref: #output-destination32335
Node: Output format32760
Ref: #output-format32910
Node: Regular expressions34492
Ref: #regular-expressions34649
Node: Smart dates36385
Ref: #smart-dates36536
Node: Report start & end date37897
Ref: #report-start-end-date38069
Node: Report intervals39566
Ref: #report-intervals39731
Node: Period expressions40121
Ref: #period-expressions40281
Node: Depth limiting44417
Ref: #depth-limiting44561
Node: Pivoting44893
Ref: #pivoting45016
Node: Valuation46692
Ref: #valuation46794
Node: -B Cost47483
Ref: #b-cost47587
Node: -V Value47720
Ref: #v-value47866
Node: -X Value in specified commodity48061
Ref: #x-value-in-specified-commodity48260
Node: Valuation date48409
Ref: #valuation-date48577
Node: Market prices48987
Ref: #market-prices49167
Node: --infer-value market prices from transactions49944
Ref: #infer-value-market-prices-from-transactions50193
Node: Valuation commodity51475
Ref: #valuation-commodity51684
Node: Simple valuation examples52910
Ref: #simple-valuation-examples53112
Node: --value Flexible valuation53771
Ref: #value-flexible-valuation53979
Node: More valuation examples55926
Ref: #more-valuation-examples56135
Node: Effect of valuation on reports58140
Ref: #effect-of-valuation-on-reports58328
Node: COMMANDS63849
Ref: #commands63957
Node: accounts65041
Ref: #accounts65139
Node: activity65838
Ref: #activity65948
Node: add66331
Ref: #add66430
Node: balance69169
Ref: #balance69280
Node: Classic balance report70738
Ref: #classic-balance-report70911
Node: Customising the classic balance report72280
Ref: #customising-the-classic-balance-report72508
Node: Colour support74584
Ref: #colour-support74751
Node: Flat mode74924
Ref: #flat-mode75072
Node: Depth limited balance reports75485
Ref: #depth-limited-balance-reports75670
Node: Percentages76126
Ref: #percentages76292
Node: Multicolumn balance report77429
Ref: #multicolumn-balance-report77609
Node: Budget report82871
Ref: #budget-report83014
Node: Nested budgets88280
Ref: #nested-budgets88392
Ref: #output-format-191873
Node: balancesheet92070
Ref: #balancesheet92206
Node: balancesheetequity93672
Ref: #balancesheetequity93821
Node: cashflow94544
Ref: #cashflow94672
Node: check-dates95851
Ref: #check-dates95978
Node: check-dupes96257
Ref: #check-dupes96381
Node: close96674
Ref: #close96788
Node: close usage98310
Ref: #close-usage98403
Node: commodities101216
Ref: #commodities101343
Node: descriptions101425
Ref: #descriptions101553
Node: diff101734
Ref: #diff101840
Node: files102887
Ref: #files102987
Node: help103134
Ref: #help103234
Node: import104315
Ref: #import104429
Node: Importing balance assignments105322
Ref: #importing-balance-assignments105470
Node: incomestatement106119
Ref: #incomestatement106252
Node: notes107739
Ref: #notes107852
Node: payees107978
Ref: #payees108084
Node: prices108242
Ref: #prices108348
Node: print108689
Ref: #print108799
Node: print-unique113585
Ref: #print-unique113711
Node: register113996
Ref: #register114123
Node: Custom register output118295
Ref: #custom-register-output118424
Node: register-match119761
Ref: #register-match119895
Node: rewrite120246
Ref: #rewrite120361
Node: Re-write rules in a file122216
Ref: #re-write-rules-in-a-file122350
Node: Diff output format123560
Ref: #diff-output-format123729
Node: rewrite vs print --auto124821
Ref: #rewrite-vs.-print---auto125000
Node: roi125556
Ref: #roi125654
Node: stats126666
Ref: #stats126765
Node: tags127553
Ref: #tags127651
Node: test127945
Ref: #test128053
Node: Add-on commands128800
Ref: #add-on-commands128917
Node: ui130260
Ref: #ui130348
Node: web130402
Ref: #web130505
Node: iadd130621
Ref: #iadd130732
Node: interest130814
Ref: #interest130921
Node: ENVIRONMENT131161
Ref: #environment131273
Node: FILES132102
Ref: #files-1132205
Node: LIMITATIONS132418
Ref: #limitations132537
Node: TROUBLESHOOTING133279
Ref: #troubleshooting133392
Node: Regular expressions35077
Ref: #regular-expressions35234
Node: Smart dates36970
Ref: #smart-dates37121
Node: Report start & end date38482
Ref: #report-start-end-date38654
Node: Report intervals40151
Ref: #report-intervals40316
Node: Period expressions40706
Ref: #period-expressions40866
Node: Depth limiting45002
Ref: #depth-limiting45146
Node: Pivoting45478
Ref: #pivoting45601
Node: Valuation47277
Ref: #valuation47379
Node: -B Cost48068
Ref: #b-cost48172
Node: -V Value48305
Ref: #v-value48451
Node: -X Value in specified commodity48646
Ref: #x-value-in-specified-commodity48845
Node: Valuation date48994
Ref: #valuation-date49162
Node: Market prices49572
Ref: #market-prices49752
Node: --infer-value market prices from transactions50529
Ref: #infer-value-market-prices-from-transactions50778
Node: Valuation commodity52060
Ref: #valuation-commodity52269
Node: Simple valuation examples53495
Ref: #simple-valuation-examples53697
Node: --value Flexible valuation54356
Ref: #value-flexible-valuation54564
Node: More valuation examples56511
Ref: #more-valuation-examples56720
Node: Effect of valuation on reports58725
Ref: #effect-of-valuation-on-reports58913
Node: COMMANDS64434
Ref: #commands64542
Node: accounts65626
Ref: #accounts65724
Node: activity66423
Ref: #activity66533
Node: add66916
Ref: #add67015
Node: balance69754
Ref: #balance69865
Node: Classic balance report71323
Ref: #classic-balance-report71496
Node: Customising the classic balance report72865
Ref: #customising-the-classic-balance-report73093
Node: Colour support75169
Ref: #colour-support75336
Node: Flat mode75509
Ref: #flat-mode75657
Node: Depth limited balance reports76070
Ref: #depth-limited-balance-reports76255
Node: Percentages76711
Ref: #percentages76877
Node: Multicolumn balance report78014
Ref: #multicolumn-balance-report78194
Node: Budget report83456
Ref: #budget-report83599
Node: Nested budgets88865
Ref: #nested-budgets88977
Ref: #output-format-192458
Node: balancesheet92655
Ref: #balancesheet92791
Node: balancesheetequity94257
Ref: #balancesheetequity94406
Node: cashflow95129
Ref: #cashflow95257
Node: check-dates96436
Ref: #check-dates96563
Node: check-dupes96842
Ref: #check-dupes96966
Node: close97259
Ref: #close97373
Node: close usage98895
Ref: #close-usage98988
Node: commodities101801
Ref: #commodities101928
Node: descriptions102010
Ref: #descriptions102138
Node: diff102319
Ref: #diff102425
Node: files103472
Ref: #files103572
Node: help103719
Ref: #help103819
Node: import104900
Ref: #import105014
Node: Importing balance assignments105907
Ref: #importing-balance-assignments106055
Node: incomestatement106704
Ref: #incomestatement106837
Node: notes108324
Ref: #notes108437
Node: payees108563
Ref: #payees108669
Node: prices108827
Ref: #prices108933
Node: print109274
Ref: #print109384
Node: print-unique114180
Ref: #print-unique114306
Node: register114591
Ref: #register114718
Node: Custom register output118890
Ref: #custom-register-output119019
Node: register-match120356
Ref: #register-match120490
Node: rewrite120841
Ref: #rewrite120956
Node: Re-write rules in a file122811
Ref: #re-write-rules-in-a-file122945
Node: Diff output format124155
Ref: #diff-output-format124324
Node: rewrite vs print --auto125416
Ref: #rewrite-vs.-print---auto125595
Node: roi126151
Ref: #roi126249
Node: stats127261
Ref: #stats127360
Node: tags128148
Ref: #tags128246
Node: test128540
Ref: #test128648
Node: Add-on commands129395
Ref: #add-on-commands129512
Node: ui130855
Ref: #ui130943
Node: web130997
Ref: #web131100
Node: iadd131216
Ref: #iadd131327
Node: interest131409
Ref: #interest131516
Node: ENVIRONMENT131756
Ref: #environment131868
Node: FILES132697
Ref: #files-1132800
Node: LIMITATIONS133013
Ref: #limitations133132
Node: TROUBLESHOOTING133874
Ref: #troubleshooting133987

End Tag Table

File diff suppressed because it is too large