Hieu Hoang 2016-08-23 10:40:26 +01:00
commit 2da5336d1d
3429 changed files with 984668 additions and 0 deletions

38
.beautify-ignore Normal file

@@ -0,0 +1,38 @@
# Files and directories that beautify.py should not clean up.
#
# This file is not as advanced as, say, .gitignore. It only supports files
# and directory paths relative to the project root, one per line, no globs,
# no quotes.
#
# Leading and trailing whitespace is stripped from filenames, but internal
# whitespace is preserved.
#
# Lines starting with a hash mark, such as this one, are comments. The hash
# mark must be the first character on the line. Blank lines are ignored.
#
# The .beautify-ignore file must be encoded in UTF-8.
boost
contrib
irstlm
jam-files
lm
mingw/MosesGUI/icons_rc.py
mingw/MosesGUI/Ui_credits.py
mingw/MosesGUI/Ui_mainWindow.py
moses/TranslationModel/UG
moses/server
moses/parameters
moses/thread_safe_container.h
phrase-extract/pcfg-common
phrase-extract/syntax-common
randlm
# Filename suffixes in here are language codes, so e.g. ".pl" means
# Polish, not Perl.
scripts/share/nonbreaking_prefixes
search
srilm
util
xmlrpc-c
.git
util/ug_cache_with_timeout.h
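
The comment block at the top of this file fully specifies the format. As a minimal sketch, this is how a consumer such as beautify.py might read the list, assuming the rules documented above (function name hypothetical):

    #include <fstream>
    #include <string>
    #include <vector>

    // Reads an ignore list per the rules above: one path per line, no globs;
    // '#' in column 0 starts a comment; blank lines are skipped; leading and
    // trailing whitespace is stripped, internal whitespace preserved.
    std::vector<std::string> ReadIgnoreList(const std::string &path)
    {
      std::vector<std::string> entries;
      std::ifstream in(path.c_str());
      std::string line;
      while (std::getline(in, line)) {
        if (!line.empty() && line[0] == '#') continue;        // comment line
        std::string::size_type b = line.find_first_not_of(" \t\r");
        if (b == std::string::npos) continue;                 // blank line
        std::string::size_type e = line.find_last_not_of(" \t\r");
        entries.push_back(line.substr(b, e - b + 1));
      }
      return entries;
    }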

3
.gitignore vendored

@@ -85,3 +85,6 @@ mingw/MosesGUI/_eric4project/
contrib/m4m/merge-sorted
mert/hgdecode
.bash_history*
doxygen.conf
doxy
opt

9
.gitmodules vendored Normal file

@@ -0,0 +1,9 @@
[submodule "contrib/arrow-pipelines/python/pcl"]
path = contrib/arrow-pipelines/python/pcl
url = https://github.com/ianj-als/pcl.git
[submodule "contrib/omtc/omtc"]
path = contrib/omtc/omtc
url = https://github.com/ianj-als/omtc.git
[submodule "regtest"]
path = regtest
url = https://github.com/moses-smt/moses-regression-tests

10
BUILD-INSTRUCTIONS.txt Normal file

@@ -0,0 +1,10 @@
Instructions for building and installing Moses are online:
http://www.statmt.org/moses/?n=Development.GetStarted
Some of the code is not originally part of Moses, but is periodically copied
into the source tree from elsewhere:
* "bjam-files" is taken from Boost.
* "util" and "lm" are taken from KenLM: https://github.com/kpu/kenlm

460
COPYING Normal file

@@ -0,0 +1,460 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations
below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that
there is no warranty for the free library. Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it
becomes a de-facto standard. To achieve this, non-free programs must
be allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control
compilation and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.
c) Accompany the work with a written offer, valid for at least
three years, to give the same user the materials specified in
Subsection 6a, above, for a charge no more than the cost of
performing this distribution.
d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply, and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License
may add an explicit geographical distribution limitation excluding those
countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.

343
Jamroot Normal file

@@ -0,0 +1,343 @@
#BUILDING MOSES
#PACKAGES
#Language models (optional):
#--with-irstlm=/path/to/irstlm
#--with-srilm=/path/to/srilm See moses/LM/Jamfile for more options.
#--with-maxent-srilm=true (requires a maxent-enabled version of SRILM to be specified via --with-srilm)
#--with-nplm=/path/to/nplm
#--with-randlm=/path/to/randlm
#KenLM is always compiled.
#
#--with-boost=/path/to/boost
#If Boost is in a non-standard location, specify it here. This directory is
#expected to contain include and lib or lib64.
#
#--with-xmlrpc-c=/path/to/xmlrpc-c for libxmlrpc-c (used by server)
#Note that, like language models, this is the --prefix where the library was
#installed, not some executable within the library.
#
#--no-xmlrpc-c
# Don't use xmlrpc-c library, even if it exists. Don't build moses server
#
#Compact phrase table and compact lexical reordering table
#--with-cmph=/path/to/cmph
#
#Thread-caching malloc (if present, used for multi-threaded builds by default)
#--without-tcmalloc does not compile with tcmalloc even if present
#--full-tcmalloc links against the full version (useful for memory profiling)
#
#REGRESSION TESTING
#--with-regtest=/path/to/moses-reg-test-data
#
#INSTALLATION
#--prefix=/path/to/prefix sets the install prefix [default is source root].
#--bindir=/path/to/prefix/bin sets the bin directory [PREFIX/bin]
#--libdir=/path/to/prefix/lib sets the lib directory [PREFIX/lib]
#--includedir=/path/to/prefix/include installs headers.
#   Headers are not installed if this option is omitted. With no argument,
#   it defaults to PREFIX/include .
#--install-scripts=/path/to/scripts copies scripts into a directory.
#   Scripts are not copied if this option is omitted. With no argument,
#   it defaults to PREFIX/scripts .
#--git appends the git revision to the prefix directory.
#
#
#BUILD OPTIONS
# By default, the build is multi-threaded, optimized, and statically linked.
# Pass these to change the build:
#
# threading=single|multi controls threading (default multi)
#
# variant=release|debug|profile builds optimized (default), for debug, or for
# profiling
#
# link=static|shared controls preferred linking (default static)
# --static forces static linking (the default will fall
# back to shared)
#
# debug-symbols=on|off include or exclude (default) debugging
# information also known as -g
# --notrace compiles without TRACE macros
#
# --enable-boost-pool uses Boost pools for the memory SCFG table
#
# --enable-mpi switches on MPI
# --without-libsegfault does not link with libSegFault
#
# --max-kenlm-order maximum ngram order that kenlm can process (default 6)
#
# --max-factors maximum number of factors (default 4)
#
# --unlabelled-source ignores source labels (redundant in hiero or
# string-to-tree systems) for better performance
#CONTROLLING THE BUILD
#-a to build from scratch
#-j$NCPUS to compile in parallel
#--clean to clean
#--debug-build to build with Og. Only available with gcc 4.8+
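
Putting several of the options above together, a hypothetical invocation of the bjam wrapper in the source root might look like this (all paths invented):

    ./bjam --with-boost=/opt/boost --with-cmph=/opt/cmph --prefix=/opt/moses -j8 variant=release link=static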
import os ;
import option ;
import modules ;
import path ;
path-constant TOP : . ;
include $(TOP)/jam-files/sanity.jam ;
home = [ os.environ "HOME" ] ;
if [ path.exists $(home)/moses-environment.jam ]
{
# for those of us who don't like typing bjam options on the command line all day long
include $(home)/moses-environment.jam ;
}
include $(TOP)/jam-files/check-environment.jam ; # get resource locations
# from environment variables
include $(TOP)/jam-files/xmlrpc-c.jam ; # xmlrpc-c stuff for the server
# include $(TOP)/jam-files/curlpp.jam ; # curlpp stuff for bias lookup (MMT only)
# exit "done" : 0 ;
max-order = [ option.get "max-kenlm-order" : 6 : 6 ] ;
if ! [ option.get "max-kenlm-order" ]
{
# some classes in Moses pull in header files from KenLM, so this needs to be
# defined here, not in moses/lm/Jamfile
option.set "max-kenlm-order" : 6 ;
requirements += <define>KENLM_MAX_ORDER=$(max-order) ;
}
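
As the comment notes, KENLM_MAX_ORDER must be one project-wide define because Moses translation units include KenLM headers directly. A hedged sketch of the kind of compile-time guard such a header can then rely on (illustrative, not quoted from the tree):

    #ifndef KENLM_MAX_ORDER
    #error "KENLM_MAX_ORDER must be defined for every translation unit"
    #endif
    // Every file that includes this header now agrees on the same bound,
    // e.g. fixed-size arrays of KENLM_MAX_ORDER elements have one layout.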
# exit "all done" : 0 ;
boost 104400 ;
external-lib z ;
#lib dl : : <runtime-link>static:<link>static <runtime-link>shared:<link>shared ;
#requirements += <library>dl ;
#requirements += <cxxflags>-std=c++0x ;
# Allow moses to report the git commit hash of the version used for compilation
moses_githash = [ _shell "git describe --dirty" ] ;
requirements += <define>MOSES_VERSION_ID=\\\"$(moses_githash)\\\" ;
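
The define above bakes the output of `git describe --dirty` into every binary as a C string. A minimal sketch of how a program could report it (the fallback value is hypothetical):

    #include <iostream>
    #ifndef MOSES_VERSION_ID
    #define MOSES_VERSION_ID "unknown"  // fallback when built outside git
    #endif
    int main()
    {
      std::cout << "Moses build: " << MOSES_VERSION_ID << std::endl;
      return 0;
    }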
if ! [ option.get "without-tcmalloc" : : "yes" ] && [ test_library "tcmalloc_minimal" ] {
if [ option.get "full-tcmalloc" : : "yes" ] {
external-lib unwind ;
external-lib tcmalloc_and_profiler : : unwind ;
requirements += <library>tcmalloc_and_profiler <library>unwind <cflags>-fno-omit-frame-pointer <cxxflags>-fno-omit-frame-pointer ;
} else {
external-lib tcmalloc_minimal ;
requirements += <threading>multi:<library>tcmalloc_minimal ;
}
} else {
echo "Tip: install tcmalloc for faster threading. See BUILD-INSTRUCTIONS.txt for more information." ;
}
if [ option.get "filter-warnings" : : "yes" ] {
# given the low coding standards in Moses, we may want to filter out
# warnings about poor coding practice that no-one is ever going to fix
# anyway ...
requirements += <cxxflags>-Wno-deprecated ;
requirements += <cxxflags>-Wno-reorder ;
requirements += <cxxflags>-Wno-sign-compare ;
requirements += <cxxflags>-Wno-unused-but-set-variable ;
requirements += <cxxflags>-Wno-unused-result ;
requirements += <cxxflags>-Wno-unused-variable ;
requirements += <cxxflags>-Wno-comment ;
requirements += <cxxflags>-Wno-strict-aliasing ;
requirements += <cxxflags>-Wno-overloaded-virtual ;
}
if [ option.get "debug-build" : : "yes" ] {
requirements += <cxxflags>-Og ;
echo "Building with -Og to enable easier profiling and debugging. Only available on gcc 4.8+." ;
}
if [ option.get "with-address-sanitizer" : : "yes" ] {
requirements += <cxxflags>-fsanitize=address ;
requirements += <cxxflags>-fno-omit-frame-pointer ;
requirements += <linkflags>-fsanitize=address ;
echo "Building with AddressSanitizer to enable debugging of memory errors. Only available on gcc 4.8+." ;
}
if [ option.get "enable-mpi" : : "yes" ] {
import mpi ;
using mpi ;
external-lib boost_mpi ;
external-lib boost_serialization ;
requirements += <define>MPI_ENABLE ;
requirements += <library>mpi ;
requirements += <library>boost_mpi ;
requirements += <library>boost_serialization ;
}
mmt = [ option.get "mmt" ] ;
if $(mmt) {
requirements += <define>MMT ;
requirements += <include>$(mmt) ;
mmt_githash = [ _shell "cd $(mmt) && git describe --dirty" ] ;
requirements += <define>MMT_VERSION_ID=\\\"$(mmt_githash)\\\" ;
}
requirements += [ option.get "notrace" : <define>TRACE_ENABLE=1 ] ;
requirements += [ option.get "enable-boost-pool" : : <define>USE_BOOST_POOL ] ;
requirements += [ option.get "with-mm" : : <define>PT_UG ] ;
requirements += [ option.get "with-mm" : : <define>MAX_NUM_FACTORS=4 ] ;
requirements += [ option.get "unlabelled-source" : : <define>UNLABELLED_SOURCE ] ;
if [ option.get "with-oxlm" ] {
external-lib boost_serialization ;
external-lib gomp ;
requirements += <library>boost_serialization ;
requirements += <library>gomp ;
}
if [ option.get "with-cmph" : : "yes" ] {
requirements += <define>HAVE_CMPH ;
}
if [ option.get "with-icu" : : "yes" ]
{
external-lib icuuc ;
external-lib icuio ;
external-lib icui18n ;
requirements += <library>icuuc/<link>shared ;
requirements += <library>icuio/<link>shared ;
requirements += <library>icui18n/<link>shared ;
requirements += <cxxflags>-fPIC ;
requirements += <address-model>64 ;
# requirements += <runtime-link>shared ;
}
# for probing pt
external-lib boost_serialization ;
requirements += <library>boost_serialization/<runtime-link>static ;
if [ option.get "with-vw" ] {
requirements += <define>HAVE_VW ;
}
project : default-build
<threading>multi
<warnings>on
<debug-symbols>off
<variant>release
<link>static
;
#Apparently OS X likes to link against iconv for fgetsUTF8.
lib iconv ;
requirements += <os>MACOSX:<library>iconv ;
project : requirements
<threading>multi:<define>WITH_THREADS
<threading>multi:<library>boost_thread
<library>boost_system
<library>boost_program_options
<define>_FILE_OFFSET_BITS=64 <define>_LARGE_FILES
$(requirements)
<include>.
;
#Add directories here if you want their incidental targets too (i.e. tests).
build-projects lm util phrase-extract phrase-extract/syntax-common search moses moses/LM mert moses-cmd scripts regression-testing ;
# contrib/mira
if [ option.get "with-mm-extras" : : "yes" ]
{
alias mm-extras :
moses/TranslationModel/UG//bitext-find
moses/TranslationModel/UG//ptable-describe-features
moses/TranslationModel/UG//count-ptable-features
moses/TranslationModel/UG//ptable-sigtest-filter
moses/TranslationModel/UG//ptable-lookup
moses/TranslationModel/UG//ptable-lookup-corpus
moses/TranslationModel/UG//check-coverage
moses/TranslationModel/UG/mm//mtt-demo1
moses/TranslationModel/UG/mm//mtt-dump
moses/TranslationModel/UG/mm//mam2symal
moses/TranslationModel/UG/mm//mam_verify
moses/TranslationModel/UG/mm//mmlex-lookup
moses/TranslationModel/UG/mm//mtt-count-words
moses/TranslationModel/UG/mm//calc-coverage
moses/TranslationModel/UG//try-align
;
}
else
{
alias mm-extras ;
}
if [ option.get "with-mm" : : "yes" ]
{
alias mm :
moses/TranslationModel/UG/mm//mtt-build
moses/TranslationModel/UG/mm//symal2mam
moses/TranslationModel/UG/mm//mmlex-build
;
}
else
{
alias mm ;
}
if [ option.get "with-rephraser" : : "yes" ]
{
alias rephraser :
contrib/rephraser//paraphrase
;
}
else
{
alias rephraser ;
}
alias programs :
lm//programs
moses-cmd//programs
OnDiskPt//CreateOnDiskPt
OnDiskPt//queryOnDiskPt
mert//programs
misc//programs
symal
phrase-extract
phrase-extract//lexical-reordering
phrase-extract//extract-ghkm
phrase-extract//pcfg-extract
phrase-extract//pcfg-score
phrase-extract//extract-mixed-syntax
phrase-extract//score-stsg
phrase-extract//filter-rule-table
phrase-extract//postprocess-egret-forests
biconcor
# contrib/mira//mira
contrib/server//mosesserver
mm
mm-extras
rephraser
contrib/c++tokenizer//tokenizer
contrib/expected-bleu-training//train-expected-bleu
contrib/expected-bleu-training//prepare-expected-bleu-training
contrib/moses2//programs
;
install-bin-libs programs ;
install-headers headers-base : [ path.glob-tree biconcor contrib lm mert misc moses-cmd OnDiskPt phrase-extract symal util : *.hh *.h ] : . ;
install-headers headers-moses : moses//headers-to-install : moses ;
alias install : prefix-bin prefix-lib headers-base headers-moses ;
if ! [ option.get "includedir" : : $(prefix)/include ] {
explicit install headers-base headers-moses ;
}
if [ path.exists $(TOP)/dist ] && $(prefix) != dist {
echo "You have a $(TOP)/dist directory, but the build system now places files directly in the root i.e. $(TOP)/bin ." ;
echo "To disable this message, delete $(TOP)/dist ." ;
echo ;
}
#local temp = [ _shell "bash source ./s.sh" ] ;
local temp = [ _shell "mkdir -p $(TOP)/bin" ] ;
local temp = [ _shell "rm -f $(TOP)/bin/moses_chart" ] ;
local temp = [ _shell "cd $(TOP)/bin && ln -s moses moses_chart" ] ;

5
OnDiskPt/Jamfile Normal file

@@ -0,0 +1,5 @@
fakelib OnDiskPt : OnDiskWrapper.cpp SourcePhrase.cpp TargetPhrase.cpp Word.cpp Phrase.cpp PhraseNode.cpp TargetPhraseCollection.cpp Vocab.cpp OnDiskQuery.cpp ../moses//headers ;
exe CreateOnDiskPt : Main.cpp ..//boost_filesystem ../moses//moses OnDiskPt ;
exe queryOnDiskPt : queryOnDiskPt.cpp ..//boost_filesystem ../moses//moses OnDiskPt ;

273
OnDiskPt/Main.cpp Normal file

@@ -0,0 +1,273 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
#include <iterator>
#include <cassert>
#include "moses/InputFileStream.h"
#include "moses/Timer.h"
#include "moses/Util.h"
#include "OnDiskWrapper.h"
#include "SourcePhrase.h"
#include "TargetPhrase.h"
#include "TargetPhraseCollection.h"
#include "Word.h"
#include "Vocab.h"
#include "Main.h"
using namespace std;
using namespace OnDiskPt;
int main (int argc, char * const argv[])
{
Moses::ResetUserTime();
Moses::PrintUserTime("Starting");
if (argc != 8) {
std::cerr << "Usage: " << argv[0] << " numSourceFactors numTargetFactors numScores tableLimit sortScoreIndex inputPath outputPath" << std::endl;
return 1;
}
int numSourceFactors = Moses::Scan<int>(argv[1])
, numTargetFactors = Moses::Scan<int>(argv[2])
, numScores = Moses::Scan<int>(argv[3])
, tableLimit = Moses::Scan<int>(argv[4]);
TargetPhraseCollection::s_sortScoreInd = Moses::Scan<int>(argv[5]);
assert(TargetPhraseCollection::s_sortScoreInd < numScores);
const string filePath = argv[6]
,destPath = argv[7];
Moses::InputFileStream inStream(filePath);
OnDiskWrapper onDiskWrapper;
onDiskWrapper.BeginSave(destPath, numSourceFactors, numTargetFactors, numScores);
PhraseNode &rootNode = onDiskWrapper.GetRootSourceNode();
size_t lineNum = 0;
string line;
while(getline(inStream, line)) {
lineNum++;
if (lineNum%1000 == 0) cerr << "." << flush;
if (lineNum%10000 == 0) cerr << ":" << flush;
if (lineNum%100000 == 0) cerr << lineNum << flush;
//cerr << lineNum << " " << line << endl;
std::vector<float> misc(1);
SourcePhrase sourcePhrase;
TargetPhrase *targetPhrase = new TargetPhrase(numScores);
OnDiskPt::PhrasePtr spShort = Tokenize(sourcePhrase, *targetPhrase, line, onDiskWrapper, numScores, misc);
assert(misc.size() == onDiskWrapper.GetNumCounts());
rootNode.AddTargetPhrase(sourcePhrase, targetPhrase, onDiskWrapper, tableLimit, misc, spShort);
}
rootNode.Save(onDiskWrapper, 0, tableLimit);
onDiskWrapper.EndSave();
Moses::PrintUserTime("Finished");
//pause();
return 0;
} // main()
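
A hypothetical invocation matching the usage string above (all values invented): binarise a sorted rule table with one source and one target factor, four scores, a table limit of 20, sorting on score index 2. Note that the assert above requires sortScoreIndex < numScores.

    CreateOnDiskPt 1 1 4 20 2 rule-table.sorted.gz rule-table.ondisk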
bool Flush(const OnDiskPt::SourcePhrase *prevSourcePhrase, const OnDiskPt::SourcePhrase *currSourcePhrase)
{
if (prevSourcePhrase == NULL)
return false;
assert(currSourcePhrase);
bool ret = (*currSourcePhrase > *prevSourcePhrase);
//cerr << *prevSourcePhrase << endl << *currSourcePhrase << " " << ret << endl << endl;
return ret;
}
OnDiskPt::PhrasePtr Tokenize(SourcePhrase &sourcePhrase, TargetPhrase &targetPhrase, const std::string &lineStr, OnDiskWrapper &onDiskWrapper, int numScores, vector<float> &misc)
{
char line[lineStr.size() + 1];
strcpy(line, lineStr.c_str());
stringstream sparseFeatures, property;
size_t scoreInd = 0;
// MAIN LOOP
size_t stage = 0;
/* 0 = source phrase
1 = target phrase
2 = scores
3 = align
4 = count
5 = sparse features
6 = properties
*/
char *tok = strtok (line," ");
OnDiskPt::PhrasePtr out(new Phrase());
while (tok != NULL) {
if (0 == strcmp(tok, "|||")) {
++stage;
} else {
switch (stage) {
case 0: {
WordPtr w = Tokenize(sourcePhrase, tok, true, true, onDiskWrapper, 1);
if (w != NULL)
out->AddWord(w);
break;
}
case 1: {
Tokenize(targetPhrase, tok, false, true, onDiskWrapper, 0);
break;
}
case 2: {
float score = Moses::Scan<float>(tok);
targetPhrase.SetScore(score, scoreInd);
++scoreInd;
break;
}
case 3: {
//targetPhrase.Create1AlignFromString(tok);
targetPhrase.CreateAlignFromString(tok);
break;
}
case 4: {
// each count token overwrites misc[0], so only the last of the
// (typically 3) count fields, the rule count, is kept
float val = Moses::Scan<float>(tok);
misc[0] = val;
break;
}
case 5: {
// sparse features
sparseFeatures << tok << " ";
break;
}
case 6: {
property << tok << " ";
break;
}
default:
cerr << "ERROR in line " << line << endl;
assert(false);
break;
}
}
tok = strtok (NULL, " ");
} // while (tok != NULL)
assert(scoreInd == numScores);
targetPhrase.SetSparseFeatures(Moses::Trim(sparseFeatures.str()));
targetPhrase.SetProperty(Moses::Trim(property.str()));
targetPhrase.SortAlign();
return out;
} // Tokenize()
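
To make the stage machine above concrete, here is a hypothetical rule-table line (all tokens invented) whose |||-separated fields map onto stages 0-4; the optional sparse-feature and property fields (stages 5 and 6) would follow as fifth and sixth ||| fields:

    [X][NP] sagt [X] ||| [X][NP] says [X] ||| 0.5 0.3 0.2 0.4 ||| 0-0 1-1 2-2 ||| 10 12 5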
OnDiskPt::WordPtr Tokenize(OnDiskPt::Phrase &phrase
, const std::string &token, bool addSourceNonTerm, bool addTargetNonTerm
, OnDiskPt::OnDiskWrapper &onDiskWrapper, int retSourceTarget)
{
// retSourceTarget: 0 = don't return anything. 1 = source, 2 = target
bool nonTerm = false;
size_t tokSize = token.size();
int comStr = token.compare(0, 1, "[");
if (comStr == 0) {
comStr = token.compare(tokSize - 1, 1, "]");
nonTerm = comStr == 0;
}
OnDiskPt::WordPtr out;
if (nonTerm) {
// non-term
size_t splitPos = token.find_first_of("[", 2);
string wordStr = token.substr(0, splitPos);
if (splitPos == string::npos) {
// lhs - only 1 word
WordPtr word(new Word());
word->CreateFromString(wordStr, onDiskWrapper.GetVocab());
phrase.AddWord(word);
} else {
// source & target non-terms
if (addSourceNonTerm) {
WordPtr word(new Word());
word->CreateFromString(wordStr, onDiskWrapper.GetVocab());
phrase.AddWord(word);
if (retSourceTarget == 1) {
out = word;
}
}
wordStr = token.substr(splitPos, tokSize - splitPos);
if (addTargetNonTerm) {
WordPtr word(new Word());
word->CreateFromString(wordStr, onDiskWrapper.GetVocab());
phrase.AddWord(word);
if (retSourceTarget == 2) {
out = word;
}
}
}
} else {
// term
WordPtr word(new Word());
word->CreateFromString(token, onDiskWrapper.GetVocab());
phrase.AddWord(word);
out = word;
}
return out;
}
void InsertTargetNonTerminals(std::vector<std::string> &sourceToks, const std::vector<std::string> &targetToks, const ::AlignType &alignments)
{
for (int ind = alignments.size() - 1; ind >= 0; --ind) {
const ::AlignPair &alignPair = alignments[ind];
size_t sourcePos = alignPair.first
,targetPos = alignPair.second;
const string &target = targetToks[targetPos];
sourceToks.insert(sourceToks.begin() + sourcePos + 1, target);
}
}
class AlignOrderer
{
public:
bool operator()(const ::AlignPair &a, const ::AlignPair &b) const {
return a.first < b.first;
}
};
void SortAlign(::AlignType &alignments)
{
std::sort(alignments.begin(), alignments.end(), AlignOrderer());
}

39
OnDiskPt/Main.h Normal file

@@ -0,0 +1,39 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <string>
#include "SourcePhrase.h"
#include "TargetPhrase.h"
typedef std::pair<size_t, size_t> AlignPair;
typedef std::vector<AlignPair> AlignType;
OnDiskPt::WordPtr Tokenize(OnDiskPt::Phrase &phrase
, const std::string &token, bool addSourceNonTerm, bool addTargetNonTerm
, OnDiskPt::OnDiskWrapper &onDiskWrapper, int retSourceTarget);
OnDiskPt::PhrasePtr Tokenize(OnDiskPt::SourcePhrase &sourcePhrase, OnDiskPt::TargetPhrase &targetPhrase
, const std::string &lineStr, OnDiskPt::OnDiskWrapper &onDiskWrapper
, int numScores
, std::vector<float> &misc);
void InsertTargetNonTerminals(std::vector<std::string> &sourceToks, const std::vector<std::string> &targetToks, const AlignType &alignments);
void SortAlign(AlignType &alignments);
bool Flush(const OnDiskPt::SourcePhrase *prevSource, const OnDiskPt::SourcePhrase *currSource);

83
OnDiskPt/OnDiskQuery.cpp Normal file

@@ -0,0 +1,83 @@
#include "OnDiskQuery.h"
namespace OnDiskPt
{
void OnDiskQuery::Tokenize(Phrase &phrase,
const std::string &token,
bool addSourceNonTerm,
bool addTargetNonTerm)
{
bool nonTerm = false;
size_t tokSize = token.size();
int comStr = token.compare(0, 1, "[");
if (comStr == 0) {
comStr = token.compare(tokSize - 1, 1, "]");
nonTerm = comStr == 0;
}
if (nonTerm) {
// non-term
size_t splitPos = token.find_first_of("[", 2);
std::string wordStr = token.substr(0, splitPos);
if (splitPos == std::string::npos) {
// lhs - only 1 word
WordPtr word (new Word());
word->CreateFromString(wordStr, m_wrapper.GetVocab());
phrase.AddWord(word);
} else {
// source & target non-terms
if (addSourceNonTerm) {
WordPtr word( new Word());
word->CreateFromString(wordStr, m_wrapper.GetVocab());
phrase.AddWord(word);
}
wordStr = token.substr(splitPos, tokSize - splitPos);
if (addTargetNonTerm) {
WordPtr word(new Word());
word->CreateFromString(wordStr, m_wrapper.GetVocab());
phrase.AddWord(word);
}
}
} else {
// term
WordPtr word(new Word());
word->CreateFromString(token, m_wrapper.GetVocab());
phrase.AddWord(word);
}
}
SourcePhrase OnDiskQuery::Tokenize(const std::vector<std::string>& tokens)
{
SourcePhrase sourcePhrase;
if (tokens.size() > 0) {
std::vector<std::string>::const_iterator token = tokens.begin();
for (; token + 1 != tokens.end(); ++token) {
Tokenize(sourcePhrase, *token, true, true);
}
// last position. LHS non-term
Tokenize(sourcePhrase, *token, false, true);
}
return sourcePhrase;
}
const PhraseNode* OnDiskQuery::Query(const SourcePhrase& sourcePhrase)
{
const PhraseNode *node = &m_wrapper.GetRootSourceNode();
assert(node);
for (size_t pos = 0; pos < sourcePhrase.GetSize(); ++pos) {
const Word &word = sourcePhrase.GetWord(pos);
node = node->GetChild(word, m_wrapper);
if (node == NULL) {
break;
}
}
return node;
}
}

39
OnDiskPt/OnDiskQuery.h Normal file

@@ -0,0 +1,39 @@
#pragma once
#include <string>
#include <vector>
#include "OnDiskWrapper.h"
#include "Phrase.h"
#include "SourcePhrase.h"
#include "Word.h"
#include "PhraseNode.h"
namespace OnDiskPt
{
class OnDiskQuery
{
private:
OnDiskWrapper &m_wrapper;
public:
OnDiskQuery(OnDiskWrapper &wrapper):m_wrapper(wrapper) {}
void Tokenize(Phrase &phrase,
const std::string &token,
bool addSourceNonTerm,
bool addTargetNonTerm);
SourcePhrase Tokenize(const std::vector<std::string>& tokens);
const PhraseNode *Query(const SourcePhrase& sourcePhrase);
inline const PhraseNode *Query(const std::vector<std::string>& tokens) {
return Query(Tokenize(tokens));
}
};
}
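
A minimal sketch of querying a binarised table through this class, assuming a table directory previously produced by CreateOnDiskPt (the path is invented, and the include paths assume compiling from the source root):

    #include <iostream>
    #include <string>
    #include <vector>
    #include "OnDiskPt/OnDiskWrapper.h"
    #include "OnDiskPt/OnDiskQuery.h"

    int main()
    {
      OnDiskPt::OnDiskWrapper wrapper;
      wrapper.BeginLoad("rule-table.ondisk");  // throws util::FileOpenException on failure
      OnDiskPt::OnDiskQuery query(wrapper);
      std::vector<std::string> tokens;
      tokens.push_back("the");
      tokens.push_back("house");
      tokens.push_back("[X]");  // last token is treated as the LHS non-terminal
      const OnDiskPt::PhraseNode *node = query.Query(tokens);
      std::cout << (node ? "match" : "no match") << std::endl;
      return 0;
    }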

223
OnDiskPt/OnDiskWrapper.cpp Normal file

@@ -0,0 +1,223 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#ifdef WIN32
#include <direct.h>
#endif
#include <sys/stat.h>
#include <string>
#include "OnDiskWrapper.h"
#include "moses/Util.h"
#include "util/exception.hh"
#include "util/string_stream.hh"
using namespace std;
namespace OnDiskPt
{
int OnDiskWrapper::VERSION_NUM = 7;
OnDiskWrapper::OnDiskWrapper()
{
}
OnDiskWrapper::~OnDiskWrapper()
{
delete m_rootSourceNode;
}
void OnDiskWrapper::BeginLoad(const std::string &filePath)
{
if (!OpenForLoad(filePath)) {
UTIL_THROW(util::FileOpenException, "Couldn't open for loading: " << filePath);
}
if (!m_vocab.Load(*this))
UTIL_THROW(util::FileOpenException, "Couldn't load vocab");
uint64_t rootFilePos = GetMisc("RootNodeOffset");
m_rootSourceNode = new PhraseNode(rootFilePos, *this);
}
bool OnDiskWrapper::OpenForLoad(const std::string &filePath)
{
m_fileSource.open((filePath + "/Source.dat").c_str(), ios::in | ios::binary);
UTIL_THROW_IF(!m_fileSource.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/Source.dat");
m_fileTargetInd.open((filePath + "/TargetInd.dat").c_str(), ios::in | ios::binary);
UTIL_THROW_IF(!m_fileTargetInd.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/TargetInd.dat");
m_fileTargetColl.open((filePath + "/TargetColl.dat").c_str(), ios::in | ios::binary);
UTIL_THROW_IF(!m_fileTargetColl.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/TargetColl.dat");
m_fileVocab.open((filePath + "/Vocab.dat").c_str(), ios::in);
UTIL_THROW_IF(!m_fileVocab.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/Vocab.dat");
m_fileMisc.open((filePath + "/Misc.dat").c_str(), ios::in);
UTIL_THROW_IF(!m_fileMisc.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/Misc.dat");
// set up root node
LoadMisc();
m_numSourceFactors = GetMisc("NumSourceFactors");
m_numTargetFactors = GetMisc("NumTargetFactors");
m_numScores = GetMisc("NumScores");
return true;
}
bool OnDiskWrapper::LoadMisc()
{
char line[100000];
while(m_fileMisc.getline(line, 100000)) {
vector<string> tokens;
Moses::Tokenize(tokens, line);
UTIL_THROW_IF2(tokens.size() != 2, "Expected key-value pair. Found " << line);
const string &key = tokens[0];
m_miscInfo[key] = Moses::Scan<uint64_t>(tokens[1]);
}
return true;
}
void OnDiskWrapper::BeginSave(const std::string &filePath
, int numSourceFactors, int numTargetFactors, int numScores)
{
m_numSourceFactors = numSourceFactors;
m_numTargetFactors = numTargetFactors;
m_numScores = numScores;
m_filePath = filePath;
#ifdef WIN32
mkdir(filePath.c_str());
#else
mkdir(filePath.c_str(), 0777);
#endif
m_fileSource.open((filePath + "/Source.dat").c_str(), ios::out | ios::in | ios::binary | ios::ate | ios::trunc);
UTIL_THROW_IF(!m_fileSource.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/Source.dat");
m_fileTargetInd.open((filePath + "/TargetInd.dat").c_str(), ios::out | ios::binary | ios::ate | ios::trunc);
UTIL_THROW_IF(!m_fileTargetInd.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/TargetInd.dat");
m_fileTargetColl.open((filePath + "/TargetColl.dat").c_str(), ios::out | ios::binary | ios::ate | ios::trunc);
UTIL_THROW_IF(!m_fileTargetColl.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/TargetColl.dat");
m_fileVocab.open((filePath + "/Vocab.dat").c_str(), ios::out | ios::ate | ios::trunc);
UTIL_THROW_IF(!m_fileVocab.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/Vocab.dat");
m_fileMisc.open((filePath + "/Misc.dat").c_str(), ios::out | ios::ate | ios::trunc);
UTIL_THROW_IF(!m_fileMisc.is_open(),
util::FileOpenException,
"Couldn't open file " << filePath << "/Misc.dat");
// offset by 1. 0 offset is reserved
char c = 0xff;
m_fileSource.write(&c, 1);
UTIL_THROW_IF2(1 != m_fileSource.tellp(),
"Couldn't write to stream m_fileSource");
m_fileTargetInd.write(&c, 1);
UTIL_THROW_IF2(1 != m_fileTargetInd.tellp(),
"Couldn't write to stream m_fileTargetInd");
m_fileTargetColl.write(&c, 1);
UTIL_THROW_IF2(1 != m_fileTargetColl.tellp(),
"Couldn't write to stream m_fileTargetColl");
// set up root node
UTIL_THROW_IF2(GetNumCounts() != 1,
"Expected exactly 1 count per rule");
vector<float> counts(GetNumCounts());
counts[0] = DEFAULT_COUNT;
m_rootSourceNode = new PhraseNode();
m_rootSourceNode->AddCounts(counts);
}
void OnDiskWrapper::EndSave()
{
bool ret = m_rootSourceNode->Saved();
UTIL_THROW_IF2(!ret, "Root node not saved");
GetVocab().Save(*this);
SaveMisc();
m_fileMisc.close();
m_fileVocab.close();
m_fileSource.close();
m_fileTarget.close();
m_fileTargetInd.close();
m_fileTargetColl.close();
}
void OnDiskWrapper::SaveMisc()
{
m_fileMisc << "Version " << VERSION_NUM << endl;
m_fileMisc << "NumSourceFactors " << m_numSourceFactors << endl;
m_fileMisc << "NumTargetFactors " << m_numTargetFactors << endl;
m_fileMisc << "NumScores " << m_numScores << endl;
m_fileMisc << "RootNodeOffset " << m_rootSourceNode->GetFilePos() << endl;
}
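
SaveMisc() above writes plain "key value" lines, which LoadMisc() parses back into m_miscInfo. For a table with one factor on each side and four scores, Misc.dat would look like this (the offset is invented):

    Version 7
    NumSourceFactors 1
    NumTargetFactors 1
    NumScores 4
    RootNodeOffset 1234567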
size_t OnDiskWrapper::GetSourceWordSize() const
{
return sizeof(uint64_t) + sizeof(char);
}
size_t OnDiskWrapper::GetTargetWordSize() const
{
return sizeof(uint64_t) + sizeof(char);
}
uint64_t OnDiskWrapper::GetMisc(const std::string &key) const
{
std::map<std::string, uint64_t>::const_iterator iter;
iter = m_miscInfo.find(key);
UTIL_THROW_IF2(iter == m_miscInfo.end()
, "Couldn't find value for key " << key
);
return iter->second;
}
}

111
OnDiskPt/OnDiskWrapper.h Normal file

@@ -0,0 +1,111 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <string>
#include <fstream>
#include "Vocab.h"
#include "PhraseNode.h"
namespace OnDiskPt
{
const float DEFAULT_COUNT = 66666;
/** Global class with misc information needed to create and use the on-disk rule table.
* One object of this class should be instantiated per rule table.
* Currently only hierarchical/syntax models use this, but it can and should be used with phrase-based models too
*/
class OnDiskWrapper
{
protected:
Vocab m_vocab;
std::string m_filePath;
int m_numSourceFactors, m_numTargetFactors, m_numScores;
std::fstream m_fileMisc, m_fileVocab, m_fileSource, m_fileTarget, m_fileTargetInd, m_fileTargetColl;
size_t m_defaultNodeSize;
PhraseNode *m_rootSourceNode;
std::map<std::string, uint64_t> m_miscInfo;
void SaveMisc();
bool OpenForLoad(const std::string &filePath);
bool LoadMisc();
public:
static int VERSION_NUM;
OnDiskWrapper();
~OnDiskWrapper();
void BeginLoad(const std::string &filePath);
void BeginSave(const std::string &filePath
, int numSourceFactors, int numTargetFactors, int numScores);
void EndSave();
Vocab &GetVocab() {
return m_vocab;
}
const Vocab &GetVocab() const {
return m_vocab;
}
size_t GetSourceWordSize() const;
size_t GetTargetWordSize() const;
std::fstream &GetFileSource() {
return m_fileSource;
}
std::fstream &GetFileTargetInd() {
return m_fileTargetInd;
}
std::fstream &GetFileTargetColl() {
return m_fileTargetColl;
}
std::fstream &GetFileVocab() {
return m_fileVocab;
}
size_t GetNumSourceFactors() const {
return m_numSourceFactors;
}
size_t GetNumTargetFactors() const {
return m_numTargetFactors;
}
size_t GetNumScores() const {
return m_numScores;
}
size_t GetNumCounts() const {
return 1;
}
PhraseNode &GetRootSourceNode() {
return *m_rootSourceNode;
}
const PhraseNode &GetRootSourceNode() const {
return *m_rootSourceNode;
}
uint64_t GetMisc(const std::string &key) const;
};
}

108
OnDiskPt/Phrase.cpp Normal file
View File

@ -0,0 +1,108 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <iostream>
#include "moses/Util.h"
#include "Phrase.h"
#include "util/exception.hh"
using namespace std;
namespace OnDiskPt
{
void Phrase::AddWord(WordPtr word)
{
m_words.push_back(word);
}
void Phrase::AddWord(WordPtr word, size_t pos)
{
UTIL_THROW_IF2(!(pos < m_words.size()),
"Trying to insert a word at position " << pos << " when phrase size is " << m_words.size());
m_words.insert(m_words.begin() + pos + 1, word);
}
int Phrase::Compare(const Phrase &compare) const
{
int ret = 0;
for (size_t pos = 0; pos < GetSize(); ++pos) {
if (pos >= compare.GetSize()) {
// this phrase is longer than the other; sort it first
ret = -1;
break;
}
const Word &thisWord = GetWord(pos)
,&compareWord = compare.GetWord(pos);
int wordRet = thisWord.Compare(compareWord);
if (wordRet != 0) {
ret = wordRet;
break;
}
}
if (ret == 0) {
assert(compare.GetSize() >= GetSize());
ret = (compare.GetSize() > GetSize()) ? 1 : 0;
}
return ret;
}
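// Worked example (added for clarity; not in the original source). With
// word-level order a < b < c, Compare yields
//   ("a b c", "a b")   -> -1  (longer phrase with matching prefix sorts first)
//   ("a b",   "a b c") ->  1
//   ("a b",   "a b")   ->  0
//   ("a c",   "a b")   ->  Word::Compare("c","b"), i.e. the first differing word
// so operator< orders phrases lexicographically, longest-prefix-match first.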
//! transitive comparison
bool Phrase::operator<(const Phrase &compare) const
{
int ret = Compare(compare);
return ret < 0;
}
bool Phrase::operator>(const Phrase &compare) const
{
int ret = Compare(compare);
return ret > 0;
}
bool Phrase::operator==(const Phrase &compare) const
{
int ret = Compare(compare);
return ret == 0;
}
void Phrase::DebugPrint(ostream &out, const Vocab &vocab) const
{
for (size_t pos = 0; pos < GetSize(); ++pos) {
const Word &word = GetWord(pos);
word.DebugPrint(out, vocab);
out << " ";
}
}
std::ostream& operator<<(std::ostream &out, const Phrase &phrase)
{
for (size_t pos = 0; pos < phrase.GetSize(); ++pos) {
const Word &word = phrase.GetWord(pos);
out << word << " ";
}
return out;
}
}

66
OnDiskPt/Phrase.h Normal file
View File

@ -0,0 +1,66 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <vector>
#include <iostream>
#include <boost/shared_ptr.hpp>
#include "Word.h"
namespace OnDiskPt
{
class Vocab;
/** A contiguous phrase. SourcePhrase & TargetPhrase inherit from this and add the on-disk functionality
*/
class Phrase
{
friend std::ostream& operator<<(std::ostream&, const Phrase&);
protected:
std::vector<WordPtr> m_words;
public:
Phrase() {
}
virtual ~Phrase() {}
void AddWord(WordPtr word);
void AddWord(WordPtr word, size_t pos);
const Word &GetWord(size_t pos) const {
return *m_words[pos];
}
size_t GetSize() const {
return m_words.size();
}
virtual void DebugPrint(std::ostream &out, const Vocab &vocab) const;
int Compare(const Phrase &compare) const;
bool operator<(const Phrase &compare) const;
bool operator>(const Phrase &compare) const;
bool operator==(const Phrase &compare) const;
};
typedef boost::shared_ptr<Phrase> PhrasePtr;
}

268
OnDiskPt/PhraseNode.cpp Normal file
View File

@ -0,0 +1,268 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include "PhraseNode.h"
#include "OnDiskWrapper.h"
#include "TargetPhraseCollection.h"
#include "SourcePhrase.h"
#include "moses/Util.h"
#include "util/exception.hh"
using namespace std;
namespace OnDiskPt
{
size_t PhraseNode::GetNodeSize(size_t numChildren, size_t wordSize, size_t countSize)
{
size_t ret = sizeof(uint64_t) * 2 // num children, value
+ (wordSize + sizeof(uint64_t)) * numChildren // word + ptr to next source node
+ sizeof(float) * countSize; // count info
return ret;
}
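// Byte layout of a saved node, as implied by GetNodeSize() and
// PhraseNode::Save() below (comment added for clarity):
//
//   uint64_t numChildren
//   uint64_t value                       // file pos of the target phrase coll
//   float    counts[countSize]
//   { word (wordSize bytes) + uint64_t childFilePos } x numChildren
//
// wordSize is OnDiskWrapper::GetSourceWordSize() = 9 bytes
// (a uint64_t vocab id plus a 1-byte non-terminal flag).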
PhraseNode::PhraseNode()
: m_value(0)
,m_currChild(NULL)
,m_saved(false)
,m_memLoad(NULL)
{
}
PhraseNode::PhraseNode(uint64_t filePos, OnDiskWrapper &onDiskWrapper)
:m_counts(onDiskWrapper.GetNumCounts())
{
// load saved node
m_filePos = filePos;
size_t countSize = onDiskWrapper.GetNumCounts();
std::fstream &file = onDiskWrapper.GetFileSource();
file.seekg(filePos);
assert(filePos == (uint64_t)file.tellg());
file.read((char*) &m_numChildrenLoad, sizeof(uint64_t));
size_t memAlloc = GetNodeSize(m_numChildrenLoad, onDiskWrapper.GetSourceWordSize(), countSize);
m_memLoad = (char*) malloc(memAlloc);
// go to start of node again
file.seekg(filePos);
assert(filePos == (uint64_t)file.tellg());
// read everything into memory
file.read(m_memLoad, memAlloc);
assert(filePos + memAlloc == (uint64_t)file.tellg());
// get value
m_value = ((uint64_t*)m_memLoad)[1];
// get counts
float *memFloat = (float*) (m_memLoad + sizeof(uint64_t) * 2);
assert(countSize == 1);
m_counts[0] = memFloat[0];
m_memLoadLast = m_memLoad + memAlloc;
}
PhraseNode::~PhraseNode()
{
free(m_memLoad);
}
float PhraseNode::GetCount(size_t ind) const
{
return m_counts[ind];
}
void PhraseNode::Save(OnDiskWrapper &onDiskWrapper, size_t pos, size_t tableLimit)
{
UTIL_THROW_IF2(m_saved, "Already saved");
// save this node
m_targetPhraseColl.Sort(tableLimit);
m_targetPhraseColl.Save(onDiskWrapper);
m_value = m_targetPhraseColl.GetFilePos();
size_t numCounts = onDiskWrapper.GetNumCounts();
size_t memAlloc = GetNodeSize(GetSize(), onDiskWrapper.GetSourceWordSize(), numCounts);
char *mem = (char*) malloc(memAlloc);
//memset(mem, 0xfe, memAlloc);
size_t memUsed = 0;
uint64_t *memArray = (uint64_t*) mem;
memArray[0] = GetSize(); // num of children
memArray[1] = m_value; // file pos of corresponding target phrases
memUsed += 2 * sizeof(uint64_t);
// count info
float *memFloat = (float*) (mem + memUsed);
UTIL_THROW_IF2(numCounts != 1, "Can only store 1 phrase count");
memFloat[0] = (m_counts.size() == 0) ? DEFAULT_COUNT : m_counts[0]; // if count = 0, put in a very large num to make sure it's still used. HACK
memUsed += sizeof(float) * numCounts;
// recursively save children
ChildColl::iterator iter;
for (iter = m_children.begin(); iter != m_children.end(); ++iter) {
const Word &childWord = iter->first;
PhraseNode &childNode = iter->second;
// recursive
if (!childNode.Saved())
childNode.Save(onDiskWrapper, pos + 1, tableLimit);
char *currMem = mem + memUsed;
size_t wordMemUsed = childWord.WriteToMemory(currMem);
memUsed += wordMemUsed;
uint64_t *memArray = (uint64_t*) (mem + memUsed);
memArray[0] = childNode.GetFilePos();
memUsed += sizeof(uint64_t);
}
// save this node
//Moses::DebugMem(mem, memAlloc);
assert(memUsed == memAlloc);
std::fstream &file = onDiskWrapper.GetFileSource();
m_filePos = file.tellp();
file.seekp(0, ios::end);
file.write(mem, memUsed);
uint64_t endPos = file.tellp();
assert(m_filePos + memUsed == endPos);
free(mem);
m_children.clear();
m_saved = true;
}
void PhraseNode::AddTargetPhrase(const SourcePhrase &sourcePhrase, TargetPhrase *targetPhrase
, OnDiskWrapper &onDiskWrapper, size_t tableLimit
, const std::vector<float> &counts, OnDiskPt::PhrasePtr spShort)
{
AddTargetPhrase(0, sourcePhrase, targetPhrase, onDiskWrapper, tableLimit, counts, spShort);
}
void PhraseNode::AddTargetPhrase(size_t pos, const SourcePhrase &sourcePhrase
, TargetPhrase *targetPhrase, OnDiskWrapper &onDiskWrapper
, size_t tableLimit, const std::vector<float> &counts, OnDiskPt::PhrasePtr spShort)
{
size_t phraseSize = sourcePhrase.GetSize();
if (pos < phraseSize) {
const Word &word = sourcePhrase.GetWord(pos);
PhraseNode &node = m_children[word];
if (m_currChild != &node) {
// new node
node.SetPos(pos);
if (m_currChild) {
m_currChild->Save(onDiskWrapper, pos, tableLimit);
}
m_currChild = &node;
}
// keep searching for target phrase node..
node.AddTargetPhrase(pos + 1, sourcePhrase, targetPhrase, onDiskWrapper, tableLimit, counts, spShort);
} else {
// drilled down to the right node
m_counts = counts;
targetPhrase->SetSourcePhrase(spShort);
m_targetPhraseColl.AddTargetPhrase(targetPhrase);
}
}
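// Note (added for clarity): this incremental build assumes the caller feeds
// source phrases in sorted order. When the word at this position changes, the
// previously active child can never be extended again, so it is flushed to
// disk immediately (m_currChild->Save above); only the path from the root to
// the current phrase is ever held in memory.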
const PhraseNode *PhraseNode::GetChild(const Word &wordSought, OnDiskWrapper &onDiskWrapper) const
{
const PhraseNode *ret = NULL;
int l = 0;
int r = m_numChildrenLoad - 1;
int x;
while (r >= l) {
x = (l + r) / 2;
Word wordFound;
uint64_t childFilePos;
GetChild(wordFound, childFilePos, x, onDiskWrapper);
if (wordSought == wordFound) {
ret = new PhraseNode(childFilePos, onDiskWrapper);
break;
}
if (wordSought < wordFound)
r = x - 1;
else
l = x + 1;
}
return ret;
}
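// Note (added for clarity): the binary search above relies on the children
// having been written in sorted order. Save() iterates m_children, a std::map
// keyed by Word, so the on-disk records are sorted by Word::Compare, the same
// ordering that operator== and operator< use here.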
void PhraseNode::GetChild(Word &wordFound, uint64_t &childFilePos, size_t ind, OnDiskWrapper &onDiskWrapper) const
{
size_t wordSize = onDiskWrapper.GetSourceWordSize();
size_t childSize = wordSize + sizeof(uint64_t);
char *currMem = m_memLoad
+ sizeof(uint64_t) * 2 // size & file pos of target phrase coll
+ sizeof(float) * onDiskWrapper.GetNumCounts() // count info
+ childSize * ind;
size_t memRead = ReadChild(wordFound, childFilePos, currMem);
assert(memRead == childSize);
}
size_t PhraseNode::ReadChild(Word &wordFound, uint64_t &childFilePos, const char *mem) const
{
size_t memRead = wordFound.ReadFromMemory(mem);
const char *currMem = mem + memRead;
uint64_t *memArray = (uint64_t*) (currMem);
childFilePos = memArray[0];
memRead += sizeof(uint64_t);
return memRead;
}
TargetPhraseCollection::shared_ptr
PhraseNode::
GetTargetPhraseCollection(size_t tableLimit, OnDiskWrapper &onDiskWrapper) const
{
TargetPhraseCollection::shared_ptr ret(new TargetPhraseCollection);
if (m_value > 0) ret->ReadFromFile(tableLimit, m_value, onDiskWrapper);
return ret;
}
std::ostream& operator<<(std::ostream &out, const PhraseNode &node)
{
out << "node (" << node.GetFilePos() << "," << node.GetValue() << "," << node.m_pos << ")";
return out;
}
}

108
OnDiskPt/PhraseNode.h Normal file
View File

@ -0,0 +1,108 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <fstream>
#include <vector>
#include <map>
#include "Word.h"
#include "TargetPhraseCollection.h"
#include "Phrase.h"
namespace OnDiskPt
{
class OnDiskWrapper;
class SourcePhrase;
/** A node in the source tree trie */
class PhraseNode
{
friend std::ostream& operator<<(std::ostream&, const PhraseNode&);
protected:
uint64_t m_filePos, m_value;
typedef std::map<Word, PhraseNode> ChildColl;
ChildColl m_children;
PhraseNode *m_currChild;
bool m_saved;
size_t m_pos;
std::vector<float> m_counts;
TargetPhraseCollection m_targetPhraseColl;
char *m_memLoad, *m_memLoadLast;
uint64_t m_numChildrenLoad;
void AddTargetPhrase(size_t pos, const SourcePhrase &sourcePhrase
, TargetPhrase *targetPhrase, OnDiskWrapper &onDiskWrapper
, size_t tableLimit, const std::vector<float> &counts, OnDiskPt::PhrasePtr spShort);
size_t ReadChild(Word &wordFound, uint64_t &childFilePos, const char *mem) const;
void GetChild(Word &wordFound, uint64_t &childFilePos, size_t ind, OnDiskWrapper &onDiskWrapper) const;
public:
static size_t GetNodeSize(size_t numChildren, size_t wordSize, size_t countSize);
PhraseNode(); // unsaved node
PhraseNode(uint64_t filePos, OnDiskWrapper &onDiskWrapper); // load saved node
~PhraseNode();
void Add(const Word &word, uint64_t nextFilePos, size_t wordSize);
void Save(OnDiskWrapper &onDiskWrapper, size_t pos, size_t tableLimit);
void AddTargetPhrase(const SourcePhrase &sourcePhrase, TargetPhrase *targetPhrase
, OnDiskWrapper &onDiskWrapper, size_t tableLimit
, const std::vector<float> &counts, OnDiskPt::PhrasePtr spShort);
uint64_t GetFilePos() const {
return m_filePos;
}
uint64_t GetValue() const {
return m_value;
}
void SetValue(uint64_t value) {
m_value = value;
}
size_t GetSize() const {
return m_children.size();
}
bool Saved() const {
return m_saved;
}
void SetPos(size_t pos) {
m_pos = pos;
}
const PhraseNode *GetChild(const Word &wordSought, OnDiskWrapper &onDiskWrapper) const;
TargetPhraseCollection::shared_ptr
GetTargetPhraseCollection(size_t tableLimit,
OnDiskWrapper &onDiskWrapper) const;
void AddCounts(const std::vector<float> &counts) {
m_counts = counts;
}
float GetCount(size_t ind) const;
};
}

27
OnDiskPt/SourcePhrase.cpp Normal file
View File

@ -0,0 +1,27 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include "SourcePhrase.h"
namespace OnDiskPt
{
}

38
OnDiskPt/SourcePhrase.h Normal file
View File

@ -0,0 +1,38 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <vector>
#include "Phrase.h"
#include "Word.h"
namespace OnDiskPt
{
/** A source phrase. Adds nothing to the normal Phrase class because source phrases are saved as tries.
*/
class SourcePhrase: public Phrase
{
protected:
public:
};
}

402
OnDiskPt/TargetPhrase.cpp Normal file
View File

@ -0,0 +1,402 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <algorithm>
#include <iostream>
#include "moses/Util.h"
#include "TargetPhrase.h"
#include "OnDiskWrapper.h"
#include "util/exception.hh"
#include <boost/algorithm/string.hpp>
using namespace std;
namespace OnDiskPt
{
TargetPhrase::TargetPhrase(size_t numScores)
:m_scores(numScores)
{
}
TargetPhrase::TargetPhrase(const TargetPhrase &copy)
:Phrase(copy)
,m_scores(copy.m_scores)
{
}
TargetPhrase::~TargetPhrase()
{
}
void TargetPhrase::SetLHS(WordPtr lhs)
{
AddWord(lhs);
}
void TargetPhrase::Create1AlignFromString(const std::string &align1Str)
{
vector<size_t> alignPoints;
Moses::Tokenize<size_t>(alignPoints, align1Str, "-");
UTIL_THROW_IF2(alignPoints.size() != 2, "Incorrectly formatted word alignment: " << align1Str);
m_align.push_back(pair<size_t, size_t>(alignPoints[0], alignPoints[1]) );
}
void TargetPhrase::CreateAlignFromString(const std::string &alignStr)
{
vector<std::string> alignPairs;
boost::split(alignPairs, alignStr, boost::is_any_of("\t "));
for (size_t i = 0; i < alignPairs.size(); ++i) {
vector<size_t> alignPoints;
Moses::Tokenize<size_t>(alignPoints, alignPairs[i], "-");
m_align.push_back(pair<size_t, size_t>(alignPoints[0], alignPoints[1]) );
}
}
void TargetPhrase::SetScore(float score, size_t ind)
{
assert(ind < m_scores.size());
m_scores[ind] = score;
}
class AlignOrderer
{
public:
bool operator()(const AlignPair &a, const AlignPair &b) const {
return a.first < b.first;
}
};
void TargetPhrase::SortAlign()
{
std::sort(m_align.begin(), m_align.end(), AlignOrderer());
}
char *TargetPhrase::WriteToMemory(OnDiskWrapper &onDiskWrapper, size_t &memUsed) const
{
size_t phraseSize = GetSize();
size_t targetWordSize = onDiskWrapper.GetTargetWordSize();
const PhrasePtr sp = GetSourcePhrase();
size_t spSize = sp->GetSize();
size_t sourceWordSize = onDiskWrapper.GetSourceWordSize();
size_t memNeeded = sizeof(uint64_t) // num of words
+ targetWordSize * phraseSize // actual words. lhs as last words
+ sizeof(uint64_t) // num source words
+ sourceWordSize * spSize; // actual source words
memUsed = 0;
uint64_t *mem = (uint64_t*) malloc(memNeeded);
// write size
mem[0] = phraseSize;
memUsed += sizeof(uint64_t);
// write each word
for (size_t pos = 0; pos < phraseSize; ++pos) {
const Word &word = GetWord(pos);
char *currPtr = (char*)mem + memUsed;
memUsed += word.WriteToMemory((char*) currPtr);
}
// write size of source phrase and all source words
char *currPtr = (char*)mem + memUsed;
uint64_t *memTmp = (uint64_t*) currPtr;
memTmp[0] = spSize;
memUsed += sizeof(uint64_t);
for (size_t pos = 0; pos < spSize; ++pos) {
const Word &word = sp->GetWord(pos);
char *currPtr = (char*)mem + memUsed;
memUsed += word.WriteToMemory((char*) currPtr);
}
assert(memUsed == memNeeded);
return (char *) mem;
}
void TargetPhrase::Save(OnDiskWrapper &onDiskWrapper)
{
// save in target ind
size_t memUsed;
char *mem = WriteToMemory(onDiskWrapper, memUsed);
std::fstream &file = onDiskWrapper.GetFileTargetInd();
uint64_t startPos = file.tellp();
file.seekp(0, ios::end);
file.write(mem, memUsed);
#ifndef NDEBUG
uint64_t endPos = file.tellp();
assert(startPos + memUsed == endPos);
#endif
m_filePos = startPos;
free(mem);
}
char *TargetPhrase::WriteOtherInfoToMemory(OnDiskWrapper &onDiskWrapper, size_t &memUsed) const
{
// allocate mem
size_t numScores = onDiskWrapper.GetNumScores()
,numAlign = GetAlign().size();
size_t sparseFeatureSize = m_sparseFeatures.size();
size_t propSize = m_property.size();
size_t memNeeded = sizeof(uint64_t) // file pos (phrase id)
+ sizeof(uint64_t) + 2 * sizeof(uint64_t) * numAlign // align
+ sizeof(float) * numScores // scores
+ sizeof(uint64_t) + sparseFeatureSize // sparse features string
+ sizeof(uint64_t) + propSize; // property string
char *mem = (char*) malloc(memNeeded);
//memset(mem, 0, memNeeded);
memUsed = 0;
// phrase id
memcpy(mem, &m_filePos, sizeof(uint64_t));
memUsed += sizeof(uint64_t);
// align
size_t tmp = WriteAlignToMemory(mem + memUsed);
memUsed += tmp;
// scores
memUsed += WriteScoresToMemory(mem + memUsed);
// sparse features
memUsed += WriteStringToMemory(mem + memUsed, m_sparseFeatures);
// property string
memUsed += WriteStringToMemory(mem + memUsed, m_property);
//DebugMem(mem, memNeeded);
assert(memNeeded == memUsed);
return mem;
}
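// On-disk layout of the "other info" record, mirroring memNeeded above and
// ReadOtherInfoFromFile() below (comment added for clarity):
//
//   uint64_t filePos                        // phrase id in the TargetInd file
//   uint64_t numAlign
//   { uint64_t sourcePos, uint64_t targetPos } x numAlign
//   float    scores[numScores]
//   uint64_t sparseFeatureStrSize, then that many raw chars
//   uint64_t propertyStrSize,      then that many raw chars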
size_t TargetPhrase::WriteStringToMemory(char *mem, const std::string &str) const
{
size_t memUsed = 0;
uint64_t *memTmp = (uint64_t*) mem;
size_t strSize = str.size();
memTmp[0] = strSize;
memUsed += sizeof(uint64_t);
const char *charStr = str.c_str();
memcpy(mem + memUsed, charStr, strSize);
memUsed += strSize;
return memUsed;
}
size_t TargetPhrase::WriteAlignToMemory(char *mem) const
{
size_t memUsed = 0;
// num of alignments
uint64_t numAlign = m_align.size();
memcpy(mem, &numAlign, sizeof(numAlign));
memUsed += sizeof(numAlign);
// actual alignments
AlignType::const_iterator iter;
for (iter = m_align.begin(); iter != m_align.end(); ++iter) {
const AlignPair &alignPair = *iter;
memcpy(mem + memUsed, &alignPair.first, sizeof(alignPair.first));
memUsed += sizeof(alignPair.first);
memcpy(mem + memUsed, &alignPair.second, sizeof(alignPair.second));
memUsed += sizeof(alignPair.second);
}
return memUsed;
}
size_t TargetPhrase::WriteScoresToMemory(char *mem) const
{
float *scoreMem = (float*) mem;
for (size_t ind = 0; ind < m_scores.size(); ++ind)
scoreMem[ind] = m_scores[ind];
size_t memUsed = sizeof(float) * m_scores.size();
return memUsed;
}
uint64_t TargetPhrase::ReadOtherInfoFromFile(uint64_t filePos, std::fstream &fileTPColl)
{
assert(filePos == (uint64_t)fileTPColl.tellg());
uint64_t memUsed = 0;
fileTPColl.read((char*) &m_filePos, sizeof(uint64_t));
memUsed += sizeof(uint64_t);
assert(m_filePos != 0);
memUsed += ReadAlignFromFile(fileTPColl);
assert((memUsed + filePos) == (uint64_t)fileTPColl.tellg());
memUsed += ReadScoresFromFile(fileTPColl);
assert((memUsed + filePos) == (uint64_t)fileTPColl.tellg());
// sparse features
memUsed += ReadStringFromFile(fileTPColl, m_sparseFeatures);
// properties
memUsed += ReadStringFromFile(fileTPColl, m_property);
return memUsed;
}
uint64_t TargetPhrase::ReadStringFromFile(std::fstream &fileTPColl, std::string &outStr)
{
uint64_t bytesRead = 0;
uint64_t strSize;
fileTPColl.read((char*) &strSize, sizeof(uint64_t));
bytesRead += sizeof(uint64_t);
if (strSize) {
char *mem = (char*) malloc(strSize + 1);
mem[strSize] = '\0';
fileTPColl.read(mem, strSize);
outStr = string(mem);
free(mem);
bytesRead += strSize;
}
return bytesRead;
}
uint64_t TargetPhrase::ReadFromFile(std::fstream &fileTP)
{
uint64_t bytesRead = 0;
fileTP.seekg(m_filePos);
uint64_t numWords;
fileTP.read((char*) &numWords, sizeof(uint64_t));
bytesRead += sizeof(uint64_t);
for (size_t ind = 0; ind < numWords; ++ind) {
WordPtr word(new Word());
bytesRead += word->ReadFromFile(fileTP);
AddWord(word);
}
// read source words
uint64_t numSourceWords;
fileTP.read((char*) &numSourceWords, sizeof(uint64_t));
bytesRead += sizeof(uint64_t);
PhrasePtr sp(new SourcePhrase());
for (size_t ind = 0; ind < numSourceWords; ++ind) {
WordPtr word( new Word());
bytesRead += word->ReadFromFile(fileTP);
sp->AddWord(word);
}
SetSourcePhrase(sp);
return bytesRead;
}
uint64_t TargetPhrase::ReadAlignFromFile(std::fstream &fileTPColl)
{
uint64_t bytesRead = 0;
uint64_t numAlign;
fileTPColl.read((char*) &numAlign, sizeof(uint64_t));
bytesRead += sizeof(uint64_t);
for (size_t ind = 0; ind < numAlign; ++ind) {
AlignPair alignPair;
fileTPColl.read((char*) &alignPair.first, sizeof(uint64_t));
fileTPColl.read((char*) &alignPair.second, sizeof(uint64_t));
m_align.push_back(alignPair);
bytesRead += sizeof(uint64_t) * 2;
}
return bytesRead;
}
uint64_t TargetPhrase::ReadScoresFromFile(std::fstream &fileTPColl)
{
UTIL_THROW_IF2(m_scores.size() == 0, "Translation rules must have some scores");
uint64_t bytesRead = 0;
for (size_t ind = 0; ind < m_scores.size(); ++ind) {
fileTPColl.read((char*) &m_scores[ind], sizeof(float));
bytesRead += sizeof(float);
}
std::transform(m_scores.begin(),m_scores.end(),m_scores.begin(), Moses::TransformScore);
std::transform(m_scores.begin(),m_scores.end(),m_scores.begin(), Moses::FloorScore);
return bytesRead;
}
void TargetPhrase::DebugPrint(ostream &out, const Vocab &vocab) const
{
Phrase::DebugPrint(out, vocab);
for (size_t ind = 0; ind < m_align.size(); ++ind) {
const AlignPair &alignPair = m_align[ind];
out << alignPair.first << "-" << alignPair.second << " ";
}
out << ", ";
for (size_t ind = 0; ind < m_scores.size(); ++ind) {
out << m_scores[ind] << " ";
}
return;
}
std::ostream& operator<<(std::ostream &out, const TargetPhrase &phrase)
{
out << (const Phrase&) phrase << ", " ;
for (size_t ind = 0; ind < phrase.m_align.size(); ++ind) {
const AlignPair &alignPair = phrase.m_align[ind];
out << alignPair.first << "-" << alignPair.second << " ";
}
out << ", ";
for (size_t ind = 0; ind < phrase.m_scores.size(); ++ind) {
out << phrase.m_scores[ind] << " ";
}
return out;
}
} // namespace

127
OnDiskPt/TargetPhrase.h Normal file
View File

@ -0,0 +1,127 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <fstream>
#include <string>
#include <vector>
#include "Word.h"
#include "Phrase.h"
#include "SourcePhrase.h"
namespace Moses
{
class PhraseDictionary;
class TargetPhrase;
class Phrase;
}
namespace OnDiskPt
{
typedef std::pair<uint64_t, uint64_t> AlignPair;
typedef std::vector<AlignPair> AlignType;
class Vocab;
/** A target phrase, with the score breakdowns, alignment info and assorted other information it needs.
* Readable and writeable to disk
*/
class TargetPhrase: public Phrase
{
friend std::ostream& operator<<(std::ostream&, const TargetPhrase&);
protected:
AlignType m_align;
PhrasePtr m_sourcePhrase;
std::string m_sparseFeatures, m_property;
std::vector<float> m_scores;
uint64_t m_filePos;
size_t WriteAlignToMemory(char *mem) const;
size_t WriteScoresToMemory(char *mem) const;
size_t WriteStringToMemory(char *mem, const std::string &str) const;
uint64_t ReadAlignFromFile(std::fstream &fileTPColl);
uint64_t ReadScoresFromFile(std::fstream &fileTPColl);
uint64_t ReadStringFromFile(std::fstream &fileTPColl, std::string &outStr);
public:
TargetPhrase() {
}
TargetPhrase(size_t numScores);
TargetPhrase(const TargetPhrase &copy);
virtual ~TargetPhrase();
void SetSourcePhrase(PhrasePtr p) {
m_sourcePhrase = p;
}
const PhrasePtr GetSourcePhrase() const {
return m_sourcePhrase;
}
const std::vector<float> &GetScores() const {
return m_scores;
}
void SetLHS(WordPtr lhs);
void Create1AlignFromString(const std::string &align1Str);
void CreateAlignFromString(const std::string &align1Str);
void SetScore(float score, size_t ind);
const AlignType &GetAlign() const {
return m_align;
}
void SortAlign();
char *WriteToMemory(OnDiskWrapper &onDiskWrapper, size_t &memUsed) const;
char *WriteOtherInfoToMemory(OnDiskWrapper &onDiskWrapper, size_t &memUsed) const;
void Save(OnDiskWrapper &onDiskWrapper);
uint64_t GetFilePos() const {
return m_filePos;
}
float GetScore(size_t ind) const {
return m_scores[ind];
}
uint64_t ReadOtherInfoFromFile(uint64_t filePos, std::fstream &fileTPColl);
uint64_t ReadFromFile(std::fstream &fileTP);
virtual void DebugPrint(std::ostream &out, const Vocab &vocab) const;
const std::string &GetProperty() const {
return m_property;
}
void SetProperty(const std::string &value) {
m_property = value;
}
const std::string &GetSparseFeatures() const {
return m_sparseFeatures;
}
void SetSparseFeatures(const std::string &value) {
m_sparseFeatures = value;
}
};
}

171
OnDiskPt/TargetPhraseCollection.cpp Normal file
View File

@ -0,0 +1,171 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <algorithm>
#include <iostream>
#include "moses/Util.h"
#include "TargetPhraseCollection.h"
#include "Vocab.h"
#include "OnDiskWrapper.h"
using namespace std;
namespace OnDiskPt
{
size_t TargetPhraseCollection::s_sortScoreInd;
TargetPhraseCollection::TargetPhraseCollection()
:m_filePos(777)
{}
TargetPhraseCollection::TargetPhraseCollection(const TargetPhraseCollection &copy)
:m_filePos(copy.m_filePos)
,m_debugStr(copy.m_debugStr)
{
}
TargetPhraseCollection::~TargetPhraseCollection()
{
Moses::RemoveAllInColl(m_coll);
}
void TargetPhraseCollection::AddTargetPhrase(TargetPhrase *targetPhrase)
{
m_coll.push_back(targetPhrase);
}
void TargetPhraseCollection::Sort(size_t tableLimit)
{
std::sort(m_coll.begin(), m_coll.end(), TargetPhraseOrderByScore());
if (tableLimit && m_coll.size() > tableLimit) {
CollType::iterator iter;
for (iter = m_coll.begin() + tableLimit ; iter != m_coll.end(); ++iter) {
delete *iter;
}
m_coll.resize(tableLimit);
}
}
void TargetPhraseCollection::Save(OnDiskWrapper &onDiskWrapper)
{
std::fstream &file = onDiskWrapper.GetFileTargetColl();
size_t memUsed = sizeof(uint64_t);
char *mem = (char*) malloc(memUsed);
// size of coll
uint64_t numPhrases = GetSize();
((uint64_t*)mem)[0] = numPhrases;
// MAIN LOOP
CollType::iterator iter;
for (iter = m_coll.begin(); iter != m_coll.end(); ++iter) {
// save phrase
TargetPhrase &targetPhrase = **iter;
targetPhrase.Save(onDiskWrapper);
// save coll
size_t memUsedTPOtherInfo;
char *memTPOtherInfo = targetPhrase.WriteOtherInfoToMemory(onDiskWrapper, memUsedTPOtherInfo);
// expand existing mem
mem = (char*) realloc(mem, memUsed + memUsedTPOtherInfo);
memcpy(mem + memUsed, memTPOtherInfo, memUsedTPOtherInfo);
memUsed += memUsedTPOtherInfo;
free(memTPOtherInfo);
}
// total number of bytes
//((uint64_t*)mem)[0] = (uint64_t) memUsed;
uint64_t startPos = file.tellp();
file.seekp(0, ios::end);
file.write((char*) mem, memUsed);
free(mem);
#ifndef NDEBUG
uint64_t endPos = file.tellp();
assert(startPos + memUsed == endPos);
#endif
m_filePos = startPos;
}
void TargetPhraseCollection::ReadFromFile(size_t tableLimit, uint64_t filePos, OnDiskWrapper &onDiskWrapper)
{
fstream &fileTPColl = onDiskWrapper.GetFileTargetColl();
fstream &fileTP = onDiskWrapper.GetFileTargetInd();
size_t numScores = onDiskWrapper.GetNumScores();
uint64_t numPhrases;
uint64_t currFilePos = filePos;
fileTPColl.seekg(filePos);
fileTPColl.read((char*) &numPhrases, sizeof(uint64_t));
// table limit
if (tableLimit) {
numPhrases = std::min(numPhrases, (uint64_t) tableLimit);
}
currFilePos += sizeof(uint64_t);
for (size_t ind = 0; ind < numPhrases; ++ind) {
TargetPhrase *tp = new TargetPhrase(numScores);
uint64_t sizeOtherInfo = tp->ReadOtherInfoFromFile(currFilePos, fileTPColl);
tp->ReadFromFile(fileTP);
currFilePos += sizeOtherInfo;
m_coll.push_back(tp);
}
}
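// Note (added for clarity): target phrases are split across two streams. The
// TargetColl file holds, per source node, a phrase count followed by each
// phrase's "other info" record, whose first field is a file position; the
// TargetInd file holds the actual words at that position.
// ReadOtherInfoFromFile() consumes the former sequentially, then
// ReadFromFile() seeks into the latter.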
uint64_t TargetPhraseCollection::GetFilePos() const
{
return m_filePos;
}
const std::string TargetPhraseCollection::GetDebugStr() const
{
return m_debugStr;
}
void TargetPhraseCollection::SetDebugStr(const std::string &str)
{
m_debugStr = str;
}
const TargetPhrase &TargetPhraseCollection::GetTargetPhrase(size_t ind) const
{
assert(ind < GetSize());
return *m_coll[ind];
}
}

84
OnDiskPt/TargetPhraseCollection.h Normal file
View File

@ -0,0 +1,84 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#pragma once
#include "TargetPhrase.h"
#include "Vocab.h"
#include <boost/shared_ptr.hpp>
namespace Moses
{
class TargetPhraseCollection;
class PhraseDictionary;
}
namespace OnDiskPt
{
/** A vector of target phrases
*/
class TargetPhraseCollection
{
class TargetPhraseOrderByScore
{
public:
bool operator()(const TargetPhrase* a, const TargetPhrase *b) const {
return a->GetScore(s_sortScoreInd) > b->GetScore(s_sortScoreInd);
}
};
protected:
typedef std::vector<TargetPhrase*> CollType;
CollType m_coll;
uint64_t m_filePos;
std::string m_debugStr;
public:
typedef boost::shared_ptr<TargetPhraseCollection const> shared_const_ptr;
typedef boost::shared_ptr<TargetPhraseCollection> shared_ptr;
static size_t s_sortScoreInd;
TargetPhraseCollection();
TargetPhraseCollection(const TargetPhraseCollection &copy);
~TargetPhraseCollection();
void AddTargetPhrase(TargetPhrase *targetPhrase);
void Sort(size_t tableLimit);
void Save(OnDiskWrapper &onDiskWrapper);
size_t GetSize() const {
return m_coll.size();
}
const TargetPhrase &GetTargetPhrase(size_t ind) const;
uint64_t GetFilePos() const;
void ReadFromFile(size_t tableLimit, uint64_t filePos, OnDiskWrapper &onDiskWrapper);
const std::string GetDebugStr() const;
void SetDebugStr(const std::string &str);
};
}

101
OnDiskPt/Vocab.cpp Normal file
View File

@ -0,0 +1,101 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <string>
#include <fstream>
#include "OnDiskWrapper.h"
#include "Vocab.h"
#include "moses/Util.h"
#include "util/exception.hh"
using namespace std;
namespace OnDiskPt
{
bool Vocab::Load(OnDiskWrapper &onDiskWrapper)
{
fstream &file = onDiskWrapper.GetFileVocab();
string line;
while(getline(file, line)) {
vector<string> tokens;
Moses::Tokenize(tokens, line);
UTIL_THROW_IF2(tokens.size() != 2, "Vocab file corrupted");
const string &key = tokens[0];
m_vocabColl[key] = Moses::Scan<uint64_t>(tokens[1]);
}
// create lookup
// assume contiguous vocab id
m_lookup.resize(m_vocabColl.size() + 1);
m_nextId = m_lookup.size();
CollType::const_iterator iter;
for (iter = m_vocabColl.begin(); iter != m_vocabColl.end(); ++iter) {
uint32_t vocabId = iter->second;
const std::string &word = iter->first;
m_lookup[vocabId] = word;
}
return true;
}
void Vocab::Save(OnDiskWrapper &onDiskWrapper)
{
fstream &file = onDiskWrapper.GetFileVocab();
CollType::const_iterator iterVocab;
for (iterVocab = m_vocabColl.begin(); iterVocab != m_vocabColl.end(); ++iterVocab) {
const string &word = iterVocab->first;
uint32_t vocabId = iterVocab->second;
file << word << " " << vocabId << endl;
}
}
uint64_t Vocab::AddVocabId(const std::string &str)
{
// find string id
CollType::const_iterator iter = m_vocabColl.find(str);
if (iter == m_vocabColl.end()) {
// add new vocab entry
m_vocabColl[str] = m_nextId;
return m_nextId++;
} else {
// return existing entry
return iter->second;
}
}
uint64_t Vocab::GetVocabId(const std::string &str, bool &found) const
{
// find string id
CollType::const_iterator iter = m_vocabColl.find(str);
if (iter == m_vocabColl.end()) {
found = false;
return 0; //return whatever
} else {
// return existing entry
found = true;
return iter->second;
}
}
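// Illustrative usage (added; not part of the original file):
//
//   Vocab vocab;
//   uint64_t id = vocab.AddVocabId("house");   // first unseen word -> 1
//   assert(vocab.AddVocabId("house") == id);   // repeated adds are stable
//   bool found;
//   assert(vocab.GetVocabId("house", found) == id && found);
//   vocab.GetVocabId("missing", found);        // found == false, returns 0
//
// GetString() only becomes usable after Load() populates m_lookup; AddVocabId
// alone does not extend the reverse lookup table.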
}

58
OnDiskPt/Vocab.h Normal file
View File

@ -0,0 +1,58 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <string>
#include <map>
#include "moses/TypeDef.h"
namespace OnDiskPt
{
class OnDiskWrapper;
/* A bidirectional map of string<->contiguous id
* No distinction between source and target language
*/
class Vocab
{
protected:
typedef std::map<std::string, uint64_t> CollType;
CollType m_vocabColl;
std::vector<std::string> m_lookup; // opposite of m_vocabColl
uint64_t m_nextId; // starts @ 1
public:
Vocab()
:m_nextId(1) {
}
uint64_t AddVocabId(const std::string &str);
uint64_t GetVocabId(const std::string &str, bool &found) const;
const std::string &GetString(uint64_t vocabId) const {
return m_lookup[vocabId];
}
bool Load(OnDiskWrapper &onDiskWrapper);
void Save(OnDiskWrapper &onDiskWrapper);
};
}

144
OnDiskPt/Word.cpp Normal file
View File

@ -0,0 +1,144 @@
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <boost/algorithm/string/predicate.hpp>
#include "moses/Util.h"
#include "Word.h"
#include "util/tokenize_piece.hh"
#include "util/exception.hh"
using namespace std;
using namespace boost::algorithm;
namespace OnDiskPt
{
Word::Word(const Word &copy)
:m_isNonTerminal(copy.m_isNonTerminal)
,m_vocabId(copy.m_vocabId)
{}
Word::~Word()
{}
void Word::CreateFromString(const std::string &inString, Vocab &vocab)
{
if (starts_with(inString, "[") && ends_with(inString, "]")) {
// non-term
m_isNonTerminal = true;
string str = inString.substr(1, inString.size() - 2);
m_vocabId = vocab.AddVocabId(str);
} else {
m_isNonTerminal = false;
m_vocabId = vocab.AddVocabId(inString);
}
}
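// Examples (added for clarity): CreateFromString("[X]", vocab) strips the
// brackets and marks the word as a non-terminal, while
// CreateFromString("house", vocab) -- or a factored token such as "go|VB" --
// stores the string verbatim as a terminal.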
size_t Word::WriteToMemory(char *mem) const
{
uint64_t *vocabMem = (uint64_t*) mem;
vocabMem[0] = m_vocabId;
size_t size = sizeof(uint64_t);
// is non-term
char bNonTerm = (char) m_isNonTerminal;
mem[size] = bNonTerm;
++size;
return size;
}
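// A word serialises to exactly 9 bytes (comment added for clarity): a
// uint64_t vocab id followed by one char holding the non-terminal flag. This
// matches OnDiskWrapper::GetSourceWordSize()/GetTargetWordSize(), and
// ReadFromMemory()/ReadFromFile() below consume the same layout.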
size_t Word::ReadFromMemory(const char *mem)
{
uint64_t *vocabMem = (uint64_t*) mem;
m_vocabId = vocabMem[0];
size_t memUsed = sizeof(uint64_t);
// is non-term
char bNonTerm;
bNonTerm = mem[memUsed];
m_isNonTerminal = (bool) bNonTerm;
++memUsed;
return memUsed;
}
size_t Word::ReadFromFile(std::fstream &file)
{
const size_t memAlloc = sizeof(uint64_t) + sizeof(char);
char mem[sizeof(uint64_t) + sizeof(char)];
file.read(mem, memAlloc);
size_t memUsed = ReadFromMemory(mem);
assert(memAlloc == memUsed);
return memAlloc;
}
int Word::Compare(const Word &compare) const
{
int ret;
if (m_isNonTerminal != compare.m_isNonTerminal)
return m_isNonTerminal ? -1 : 1;
if (m_vocabId < compare.m_vocabId)
ret = -1;
else if (m_vocabId > compare.m_vocabId)
ret = 1;
else
ret = 0;
return ret;
}
bool Word::operator<(const Word &compare) const
{
int ret = Compare(compare);
return ret < 0;
}
bool Word::operator==(const Word &compare) const
{
int ret = Compare(compare);
return ret == 0;
}
void Word::DebugPrint(ostream &out, const Vocab &vocab) const
{
const string &str = vocab.GetString(m_vocabId);
out << str;
}
std::ostream& operator<<(std::ostream &out, const Word &word)
{
out << "(";
out << word.m_vocabId;
out << (word.m_isNonTerminal ? "n" : "t");
out << ")";
return out;
}
}

91
OnDiskPt/Word.h Normal file
View File

@ -0,0 +1,91 @@
#pragma once
// $Id$
/***********************************************************************
Moses - factored phrase-based, hierarchical and syntactic language decoder
Copyright (C) 2009 Hieu Hoang
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <string>
#include <vector>
#include <iostream>
#include <fstream>
#include <boost/shared_ptr.hpp>
#include "Vocab.h"
namespace Moses
{
class Word;
}
namespace OnDiskPt
{
class Vocab;
/* A wrapper around a vocab id, and a boolean indicating whether it is a terminal or a non-terminal.
* Factors can be represented by using a vocab string with the | character, e.g. go|VB
*/
class Word
{
friend std::ostream& operator<<(std::ostream&, const Word&);
private:
bool m_isNonTerminal;
uint64_t m_vocabId;
public:
explicit Word() {
}
explicit Word(bool isNonTerminal)
:m_isNonTerminal(isNonTerminal)
,m_vocabId(0) {
}
Word(const Word &copy);
~Word();
void CreateFromString(const std::string &inString, Vocab &vocab);
bool IsNonTerminal() const {
return m_isNonTerminal;
}
size_t WriteToMemory(char *mem) const;
size_t ReadFromMemory(const char *mem);
size_t ReadFromFile(std::fstream &file);
uint64_t GetVocabId() const {
return m_vocabId;
}
void SetVocabId(uint64_t vocabId) {
m_vocabId = vocabId;
}
void DebugPrint(std::ostream &out, const Vocab &vocab) const;
inline const std::string &GetString(const Vocab &vocab) const {
return vocab.GetString(m_vocabId);
}
int Compare(const Word &compare) const;
bool operator<(const Word &compare) const;
bool operator==(const Word &compare) const;
};
typedef boost::shared_ptr<Word> WordPtr;
}

86
OnDiskPt/queryOnDiskPt.cpp Normal file
View File

@ -0,0 +1,86 @@
// Query binary phrase tables.
// Christian Hardmeier, 16 May 2010
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>
#include "moses/Util.h"
#include "OnDiskWrapper.h"
#include "SourcePhrase.h"
#include "OnDiskQuery.h"
using namespace std;
using namespace OnDiskPt;
void usage();
typedef unsigned int uint;
int main(int argc, char **argv)
{
int tableLimit = 20;
std::string ttable = "";
// bool useAlignments = false;
for(int i = 1; i < argc; i++) {
if(!strcmp(argv[i], "-tlimit")) {
if(i + 1 == argc)
usage();
tableLimit = atoi(argv[++i]);
} else if(!strcmp(argv[i], "-t")) {
if(i + 1 == argc)
usage();
ttable = argv[++i];
} else
usage();
}
if(ttable == "")
usage();
OnDiskWrapper onDiskWrapper;
onDiskWrapper.BeginLoad(ttable);
OnDiskQuery onDiskQuery(onDiskWrapper);
cerr << "Ready..." << endl;
std::string line;
while(getline(std::cin, line)) {
std::vector<std::string> tokens;
tokens = Moses::Tokenize(line, " ");
cerr << "line: " << line << endl;
const PhraseNode* node = onDiskQuery.Query(tokens);
if (node) {
// source phrase points to a bunch of rules
TargetPhraseCollection::shared_ptr coll = node->GetTargetPhraseCollection(tableLimit, onDiskWrapper);
string str = coll->GetDebugStr();
cout << "Found " << coll->GetSize() << endl;
for (size_t ind = 0; ind < coll->GetSize(); ++ind) {
const TargetPhrase &targetPhrase = coll->GetTargetPhrase(ind);
cerr << " ";
targetPhrase.DebugPrint(cerr, onDiskWrapper.GetVocab());
cerr << endl;
}
} else {
cout << "Not found" << endl;
}
std::cout << '\n';
std::cout.flush();
}
cerr << "Finished." << endl;
}
void usage()
{
std::cerr << "Usage: queryOnDiskPt [-n <nscores>] [-a] -t <ttable>\n"
"-tlimit <table limit> max number of rules per source phrase (default: 20)\n"
"-t <ttable> phrase table\n";
exit(1);
}
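// Example invocation (added; paths are hypothetical):
//
//   echo "the house" | queryOnDiskPt -t phrase-table.bin -tlimit 5
//
// Each input line is tokenised on spaces and looked up as a source phrase;
// match counts go to stdout, and the matching rules are printed to stderr via
// DebugPrint.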

222
biconcor/Alignment.cpp Normal file
View File

@ -0,0 +1,222 @@
#include "Alignment.h"
#include <fstream>
#include <string>
#include <cstdlib>
#include <cstring>
namespace
{
const int LINE_MAX_LENGTH = 10000;
} // namespace
using namespace std;
void Alignment::Create(const string& fileName)
{
ifstream textFile;
char line[LINE_MAX_LENGTH];
// count the number of words first;
textFile.open(fileName.c_str());
if (!textFile) {
cerr << "No such file or directory: " << fileName << endl;
exit(1);
}
istream *fileP = &textFile;
m_size = 0;
m_sentenceCount = 0;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, LINE_MAX_LENGTH, '\n');
if (fileP->eof()) break;
vector<string> alignmentSequence = Tokenize( line );
m_size += alignmentSequence.size();
m_sentenceCount++;
}
textFile.close();
cerr << m_size << " alignment points" << endl;
// allocate memory
m_array = (int*) calloc( m_size*2, sizeof(int) );
m_sentenceEnd = (INDEX*) calloc( m_sentenceCount, sizeof(INDEX) );
if (m_array == NULL) {
cerr << "Error: cannot allocate memory to m_array" << endl;
exit(1);
}
if (m_sentenceEnd == NULL) {
cerr << "Error: cannot allocate memory to m_sentenceEnd" << endl;
exit(1);
}
// fill the array
int alignmentPointIndex = 0;
int sentenceId = 0;
textFile.open(fileName.c_str());
if (!textFile) {
cerr << "Failed to open " << fileName << endl;
exit(1);
}
fileP = &textFile;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, LINE_MAX_LENGTH, '\n');
if (fileP->eof()) break;
vector<string> alignmentSequence = Tokenize( line );
for(size_t i=0; i<alignmentSequence.size(); i++) {
int s,t;
// cout << "scaning " << alignmentSequence[i].c_str() << endl;
if (! sscanf(alignmentSequence[i].c_str(), "%d-%d", &s, &t)) {
cerr << "WARNING: " << alignmentSequence[i] << " is a bad alignment point in sentence " << sentenceId << endl;
}
m_array[alignmentPointIndex++] = (char) s;
m_array[alignmentPointIndex++] = (char) t;
}
m_sentenceEnd[ sentenceId++ ] = alignmentPointIndex - 2;
}
textFile.close();
cerr << "done reading " << (alignmentPointIndex/2) << " alignment points, " << sentenceId << " sentences." << endl;
}
Alignment::Alignment()
: m_array(NULL),
m_sentenceEnd(NULL),
m_size(0),
m_sentenceCount(0) {}
Alignment::~Alignment()
{
if (m_array != NULL) {
free(m_array);
}
if (m_sentenceEnd != NULL) {
free(m_sentenceEnd);
}
}
vector<string> Alignment::Tokenize( const char input[] )
{
vector< string > token;
bool betweenWords = true;
int start=0;
int i=0;
for(; input[i] != '\0'; i++) {
bool isSpace = (input[i] == ' ' || input[i] == '\t');
if (!isSpace && betweenWords) {
start = i;
betweenWords = false;
} else if (isSpace && !betweenWords) {
token.push_back( string( input+start, i-start ) );
betweenWords = true;
}
}
if (!betweenWords)
token.push_back( string( input+start, i-start ) );
return token;
}
bool Alignment::PhraseAlignment( INDEX sentence, int target_length,
int source_start, int source_end,
int &target_start, int &target_end,
int &pre_null, int &post_null )
{
// get index for first alignment point
INDEX sentenceStart = 0;
if (sentence > 0) {
sentenceStart = m_sentenceEnd[ sentence-1 ] + 2;
}
// get target phrase boundaries
target_start = target_length;
target_end = 0;
for(INDEX ap = sentenceStart; ap <= m_sentenceEnd[ sentence ]; ap += 2 ) {
int source = m_array[ ap ];
if (source >= source_start && source <= source_end ) {
int target = m_array[ ap+1 ];
if (target < target_start) target_start = target;
if (target > target_end ) target_end = target;
}
}
if (target_start == target_length) {
return false; // done if no alignment points
}
// check consistency
for(INDEX ap = sentenceStart; ap <= m_sentenceEnd[ sentence ]; ap += 2 ) {
int target = m_array[ ap+1 ];
if (target >= target_start && target <= target_end ) {
int source = m_array[ ap ];
if (source < source_start || source > source_end) {
return false; // alignment point out of range
}
}
}
// create array for unaligned words
for( int i=0; i<target_length; i++ ) {
m_unaligned[i] = true;
}
for(INDEX ap = sentenceStart; ap <= m_sentenceEnd[ sentence ]; ap += 2 ) {
int target = m_array[ ap+1 ];
m_unaligned[ target ] = false;
}
// prior unaligned words
pre_null = 0;
for(int target = target_start-1; target >= 0 && m_unaligned[ target ]; target--) {
pre_null++;
}
// post unaligned words;
post_null = 0;
for(int target = target_end+1; target < target_length && m_unaligned[ target ]; target++) {
post_null++;
}
return true;
}
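// Worked example (added for clarity). Take a sentence with alignment points
// 0-0 1-2 2-1 and source span [1,2]:
//   * target_start/target_end become 1 and 2 (the targets of sources 1 and 2);
//   * the consistency pass finds no target in [1,2] aligned outside [1,2],
//     so the block is consistent and the function returns true;
//   * pre_null/post_null then count unaligned target words directly bordering
//     the block, which callers may use to widen the target span.
// If source 1 additionally aligned to target 0, the target span 0..2 would
// pull in source 0, and the "source < source_start" check would reject it.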
void Alignment::Save(const string& fileName ) const
{
FILE *pFile = fopen ( (fileName + ".align").c_str() , "w" );
if (pFile == NULL) {
cerr << "Cannot open " << fileName << ".align" << endl;
exit(1);
}
fwrite( &m_size, sizeof(INDEX), 1, pFile );
fwrite( m_array, sizeof(int), m_size*2, pFile ); // corpus
fwrite( &m_sentenceCount, sizeof(INDEX), 1, pFile );
fwrite( m_sentenceEnd, sizeof(INDEX), m_sentenceCount, pFile); // sentence index
fclose( pFile );
}
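// Binary format of the ".align" file written above and read back by Load()
// (comment added for clarity):
//
//   INDEX m_size                          // number of alignment points
//   int   m_array[m_size * 2]             // source,target pairs, flattened
//   INDEX m_sentenceCount
//   INDEX m_sentenceEnd[m_sentenceCount]  // offset in m_array of each
//                                         // sentence's last alignment point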
void Alignment::Load(const string& fileName )
{
FILE *pFile = fopen ( (fileName + ".align").c_str() , "r" );
if (pFile == NULL) {
cerr << "no such file or directory: " << fileName << ".align" << endl;
exit(1);
}
cerr << "loading from " << fileName << ".align" << endl;
fread( &m_size, sizeof(INDEX), 1, pFile );
cerr << "alignment points in corpus: " << m_size << endl;
m_array = (int*) calloc( m_size*2, sizeof(int) );
fread( m_array, sizeof(int), m_size*2, pFile ); // corpus
fread( &m_sentenceCount, sizeof(INDEX), 1, pFile );
cerr << "sentences in corpus: " << m_sentenceCount << endl;
m_sentenceEnd = (INDEX*) calloc( m_sentenceCount, sizeof(INDEX) );
fread( m_sentenceEnd, sizeof(INDEX), m_sentenceCount, pFile); // sentence index
fclose( pFile );
cerr << "done loading\n";
}

47
biconcor/Alignment.h Normal file
View File

@ -0,0 +1,47 @@
#pragma once
#include "Vocabulary.h"
class Alignment
{
public:
typedef unsigned int INDEX;
private:
int *m_array;
INDEX *m_sentenceEnd;
INDEX m_size;
INDEX m_sentenceCount;
char m_unaligned[ 256 ]; // here for speed (local to PhraseAlignment)
// No copying allowed.
Alignment(const Alignment&);
void operator=(const Alignment&);
public:
Alignment();
~Alignment();
void Create(const std::string& fileName );
bool PhraseAlignment( INDEX sentence, int target_length,
int source_start, int source_end,
int &target_start, int &target_end,
int &pre_null, int &post_null );
void Load(const std::string& fileName );
void Save(const std::string& fileName ) const;
std::vector<std::string> Tokenize( const char input[] );
INDEX GetSentenceStart( INDEX sentence ) const {
if (sentence == 0) return 0;
return m_sentenceEnd[ sentence-1 ] + 2;
}
INDEX GetNumberOfAlignmentPoints( INDEX sentence ) const {
// m_sentenceEnd marks the first element of the last pair, so the
// inclusive span covers (end-start)/2 + 1 alignment points
return ( m_sentenceEnd[ sentence ] - GetSentenceStart( sentence ) ) / 2 + 1;
}
int GetSourceWord( INDEX sentence, INDEX alignment_point ) const {
return m_array[ GetSentenceStart( sentence ) + alignment_point*2 ];
}
int GetTargetWord( INDEX sentence, INDEX alignment_point ) const {
return m_array[ GetSentenceStart( sentence ) + alignment_point*2 + 1 ];
}
};

5
biconcor/CMakeLists.txt Normal file
View File

@ -0,0 +1,5 @@
project(biconcor)
FILE(GLOB biconcor_source *.cpp)
add_executable(biconcor ${biconcor_source})

2
biconcor/Jamfile Normal file
View File

@ -0,0 +1,2 @@
exe biconcor : Vocabulary.cpp SuffixArray.cpp TargetCorpus.cpp Alignment.cpp Mismatch.cpp PhrasePair.cpp PhrasePairCollection.cpp biconcor.cpp base64.cpp ;
exe phrase-lookup : Vocabulary.cpp SuffixArray.cpp phrase-lookup.cpp ;

292
biconcor/Mismatch.cpp Normal file
View File

@ -0,0 +1,292 @@
#include "Mismatch.h"
#include <fstream>
#include <iostream>
#include <cstring>
#include <string>
#include <cstdlib>
#include "SuffixArray.h"
#include "TargetCorpus.h"
#include "Alignment.h"
#include "Vocabulary.h"
using namespace std;
enum {
UNANNOTATED = 0,
PRE_ALIGNED = 1,
POST_ALIGNED = 2,
UNALIGNED = 3,
MISALIGNED = 4,
ALIGNED = 5
};
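// Annotation labels used when rendering a mismatch as HTML (mapped to CSS
// classes via label_class below): PRE_ALIGNED and POST_ALIGNED mark the
// nearest aligned words before and after an unaligned source phrase,
// UNALIGNED marks null-aligned words, MISALIGNED marks words that break the
// phrase consistency constraint, and ALIGNED marks the phrase itself.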
Mismatch::Mismatch( SuffixArray *sa, TargetCorpus *tc, Alignment *a, INDEX sentence_id, INDEX position, int source_length, int target_length, int source_start, int source_end )
:m_suffixArray(sa)
,m_targetCorpus(tc)
,m_alignment(a)
,m_sentence_id(sentence_id)
,m_source_length(source_length)
,m_target_length(target_length)
,m_source_position(position)
,m_source_start(source_start)
,m_source_end(source_end)
,m_unaligned(true)
{
// initialize unaligned indexes
for (int i = 0; i < m_source_length; i++) {
m_source_unaligned[i] = true;
}
for (int i = 0; i < m_target_length; i++) {
m_target_unaligned[i] = true;
}
m_num_alignment_points =
m_alignment->GetNumberOfAlignmentPoints( sentence_id );
for(INDEX ap=0; ap<m_num_alignment_points; ap++) {
m_source_unaligned[ (int)m_alignment->GetSourceWord( sentence_id, ap ) ] = false;
m_target_unaligned[ (int)m_alignment->GetTargetWord( sentence_id, ap ) ] = false;
}
for(int i = source_start; i <= source_end; i++) {
if (!m_source_unaligned[ i ]) {
m_unaligned = false;
}
}
}
Mismatch::~Mismatch () {}
void Mismatch::PrintClippedHTML( ostream* out, int width )
{
int source_annotation[256], target_annotation[256];
vector< string > label_class;
label_class.push_back( "" );
label_class.push_back( "mismatch_pre_aligned" );
label_class.push_back( "mismatch_post_aligned" );
label_class.push_back( "null_aligned" );
label_class.push_back( "mismatch_misaligned" );
label_class.push_back( "mismatch_aligned" );
for(int i=0; i<m_source_length; i++) source_annotation[i] = UNANNOTATED;
for(int i=0; i<m_target_length; i++) target_annotation[i] = UNANNOTATED;
if (m_unaligned) {
// find alignment points for prior and next word(s) and
// center target phrase around those.
bool found_aligned = false;
for(int i=1; i<m_source_length && !found_aligned; i++) {
if (m_source_start-i >= 0) {
int word_id = m_source_start-i;
source_annotation[ word_id ] = UNALIGNED;
if (!m_source_unaligned[ word_id ]) {
found_aligned = true;
LabelSourceMatches( source_annotation, target_annotation, word_id, PRE_ALIGNED );
}
}
if (m_source_end+i < m_source_length) {
int word_id = m_source_end+i;
source_annotation[ word_id ] = UNALIGNED;
if (!m_source_unaligned[ word_id ]) {
found_aligned = true;
LabelSourceMatches( source_annotation, target_annotation, word_id, POST_ALIGNED );
}
}
}
}
// misalignment
else {
// label aligned output words
for(int i=m_source_start; i<=m_source_end; i++)
LabelSourceMatches( source_annotation, target_annotation, i, ALIGNED );
// find first and last
int target_start = -1;
int target_end = -1;
for(int i=0; i<m_target_length; i++)
if (target_annotation[i] == ALIGNED) {
if (target_start == -1)
target_start = i;
target_end = i;
}
// go over all enclosed target words
for(int i=target_start; i<=target_end; i++) {
// label other target words as unaligned or misaligned
if (m_target_unaligned[ i ])
target_annotation[ i ] = UNALIGNED;
else {
if (target_annotation[ i ] != ALIGNED)
target_annotation[ i ] = MISALIGNED;
// loop over aligned source words
for(INDEX ap=0; ap<m_num_alignment_points; ap++) {
if (m_alignment->GetTargetWord( m_sentence_id, ap ) == i) {
int source_word = m_alignment->GetSourceWord( m_sentence_id, ap );
// if not part of the source phrase -> also misaligned
if (source_word < m_source_start || source_word > m_source_end)
source_annotation[ source_word ] = MISALIGNED;
}
}
}
}
// closure
bool change = true;
while(change) {
change = false;
for(INDEX ap=0; ap<m_num_alignment_points; ap++) {
int source_word = m_alignment->GetSourceWord( m_sentence_id, ap );
int target_word = m_alignment->GetTargetWord( m_sentence_id, ap );
if (source_annotation[source_word] != UNANNOTATED &&
target_annotation[target_word] == UNANNOTATED) {
target_annotation[target_word] = MISALIGNED;
change = true;
}
if (source_annotation[source_word] == UNANNOTATED &&
target_annotation[target_word] != UNANNOTATED) {
source_annotation[source_word] = MISALIGNED;
change = true;
}
}
}
}
// print source
// shorten source context if too long
int sentence_start = m_source_position - m_source_start;
int context_space = width/2;
for(int i=m_source_start; i<=m_source_end; i++)
context_space -= m_suffixArray->GetWord( sentence_start + i ).size() + 1;
context_space /= 2;
int remaining = context_space;
int start_word = m_source_start;
for(; start_word>0 && remaining>0; start_word--)
remaining -= m_suffixArray->GetWord( sentence_start + start_word-1 ).size() + 1;
if (remaining<0 || start_word == -1) start_word++;
remaining = context_space;
int end_word = m_source_end;
for(; end_word<m_source_length && remaining>0; end_word++)
remaining -= m_suffixArray->GetWord( sentence_start + end_word ).size() + 1;
end_word--;
// output with markup
*out << "<tr><td class=\"pp_source_left\">";
char current_label = UNANNOTATED;
if (start_word>0) {
current_label = source_annotation[start_word-1];
*out << "... ";
}
for(int i=start_word; i<=end_word; i++) {
// change to phrase block
if (i == m_source_start) {
if (current_label != UNANNOTATED && i!=start_word)
*out << "</span>";
*out << "</td><td class=\"pp_source\">";
current_label = UNANNOTATED;
}
// change to labeled word
else if (source_annotation[i] != current_label &&
source_annotation[i] != ALIGNED) {
if (current_label != UNANNOTATED && i!=start_word)
*out << "</span>";
if (source_annotation[i] != UNANNOTATED)
*out << "<span class=\""
<< label_class[ source_annotation[i] ]
<< "\">";
current_label = source_annotation[i];
}
// output word
*out << m_suffixArray->GetWord( sentence_start + i ) << " ";
// change to right context block
if (i == m_source_end) {
*out << "</td><td class=\"pp_source_right\">";
current_label = UNANNOTATED;
}
}
if (current_label != UNANNOTATED && end_word>m_source_end)
*out << "</span>";
if (end_word<m_source_length-1)
*out << "... ";
// print target
// shorten target context if too long
int target_start = -1;
int target_end=0;
for(int i=0; i<m_target_length; i++)
if (target_annotation[i] != UNANNOTATED) {
if (target_start == -1)
target_start = i;
target_end = i;
}
context_space = width/2;
for(int i=target_start; i<=target_end; i++)
context_space -= m_targetCorpus->GetWord( m_sentence_id, i ).size() + 1;
while (context_space < 0) { // shorten matched part, if too long
context_space +=
m_targetCorpus->GetWord( m_sentence_id, target_start ).size() +
m_targetCorpus->GetWord( m_sentence_id, target_end ).size() + 2;
target_start++;
target_end--;
}
context_space /= 2;
remaining = context_space;
start_word = target_start;
for(; start_word>0 && remaining>0; start_word--) {
//cerr << "remaining: " << remaining << ", start_word: " << start_word << endl;
remaining -= m_targetCorpus->GetWord( m_sentence_id, start_word-1 ).size() + 1;
}
if (remaining<0 || start_word == -1) start_word++;
remaining = context_space;
end_word = target_end;
for(; end_word<m_target_length && remaining>0; end_word++) {
//cerr << "remaining: " << remaining << ", end_word: " << end_word << endl;
remaining -= m_targetCorpus->GetWord( m_sentence_id, end_word ).size() + 1;
}
end_word--;
// output with markup
*out << "</td><td class=\"mismatch_target\">";
current_label = UNANNOTATED;
if (start_word>0) {
current_label = target_annotation[start_word-1];
*out << "... ";
}
for(int i=start_word; i<=end_word; i++) {
if (target_annotation[i] != current_label) {
if (current_label != UNANNOTATED && i!=start_word)
*out << "</span>";
if (target_annotation[i] != UNANNOTATED)
*out << "<span class=\""
<< label_class[ target_annotation[i] ]
<< "\">";
current_label = target_annotation[i];
}
// output word
*out << m_targetCorpus->GetWord( m_sentence_id, i ) << " ";
}
if (current_label != UNANNOTATED && end_word>target_end)
*out << "</span>";
if (end_word<m_target_length-1)
*out << "... ";
*out << "</td></tr>";
}
void Mismatch::LabelSourceMatches(int *source_annotation, int *target_annotation, int source_id, int label )
{
for(INDEX ap=0; ap<m_num_alignment_points; ap++) {
if (m_alignment->GetSourceWord( m_sentence_id, ap ) == source_id) {
source_annotation[ source_id ] = label;
target_annotation[ m_alignment->GetTargetWord( m_sentence_id, ap ) ] = label;
}
}
}

42
biconcor/Mismatch.h Normal file
View File

@ -0,0 +1,42 @@
#pragma once
#include <iosfwd>
class Alignment;
class SuffixArray;
class TargetCorpus;
class Mismatch
{
public:
typedef unsigned int INDEX;
private:
SuffixArray *m_suffixArray;
TargetCorpus *m_targetCorpus;
Alignment *m_alignment;
INDEX m_sentence_id;
INDEX m_num_alignment_points;
int m_source_length;
int m_target_length;
INDEX m_source_position;
int m_source_start;
int m_source_end;
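// fixed-size buffers, here for speed (cf. Alignment::m_unaligned);
// sentences are assumed to be at most 256 words long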
bool m_source_unaligned[ 256 ];
bool m_target_unaligned[ 256 ];
bool m_unaligned;
// No copying allowed.
Mismatch(const Mismatch&);
void operator=(const Mismatch&);
public:
Mismatch( SuffixArray *sa, TargetCorpus *tc, Alignment *a, INDEX sentence_id, INDEX position, int source_length, int target_length, int source_start, int source_end );
~Mismatch();
bool Unaligned() const {
return m_unaligned;
}
void PrintClippedHTML(std::ostream* out, int width );
void LabelSourceMatches(int *source_annotation, int *target_annotation, int source_id, int label );
};

300
biconcor/PhrasePair.cpp Normal file
View File

@ -0,0 +1,300 @@
#include "PhrasePair.h"
#include <iostream>
#include "TargetCorpus.h"
#include "Alignment.h"
#include "Vocabulary.h"
#include "SuffixArray.h"
using namespace std;
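// Emits one phrase pair per line:
//   <source sentence> ||| <target sentence> ||| <source span> ||| <target span> ||| <alignment>
// where spans are inclusive word indexes and <alignment> lists "source-target" pairs.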
void PhrasePair::Print( ostream* out ) const
{
// source
int sentence_start = m_source_position - m_source_start;
char source_length = m_suffixArray->GetSentenceLength( m_suffixArray->GetSentence( m_source_position ) );
for( char i=0; i<source_length; i++ ) {
if (i>0) *out << " ";
*out << m_suffixArray->GetWord( sentence_start + i );
}
// target
*out << " |||";
for( char i=0; i<m_target_length; i++ ) {
*out << " " << m_targetCorpus->GetWord( m_sentence_id, i);
}
// source span
*out << " ||| " << (int)m_source_start << " " << (int)m_source_end;
// target span
*out << " ||| " << (int)m_target_start << " " << (int)m_target_end;
// word alignment
*out << " |||";
INDEX ap_points = m_alignment->GetNumberOfAlignmentPoints( m_sentence_id );
for( INDEX i=0; i<ap_points; i++) {
*out << " " << m_alignment->GetSourceWord( m_sentence_id, i )
<< "-" << m_alignment->GetTargetWord( m_sentence_id, i );
}
*out << endl;
}
void PhrasePair::PrintPretty( ostream* out, int width ) const
{
vector< WORD_ID >::const_iterator t;
// source
int sentence_start = m_source_position - m_source_start;
size_t source_width = (width-3)/2;
string source_pre = "";
string source = "";
string source_post = "";
for( size_t space=0; space<source_width/2; space++ ) source_pre += " ";
for( char i=0; i<m_source_start; i++ ) {
source_pre += " " + m_suffixArray->GetWord( sentence_start + i );
}
for( char i=m_source_start; i<=m_source_end; i++ ) {
if (i>m_source_start) source += " ";
source += m_suffixArray->GetWord( sentence_start + i );
}
char source_length = m_suffixArray->GetSentenceLength( m_suffixArray->GetSentence( m_source_position ) );
for( char i=m_source_end+1; i<source_length; i++ ) {
if (i>m_source_end+1) source_post += " ";
source_post += m_suffixArray->GetWord( sentence_start + i );
}
for( size_t space=0; space<source_width/2; space++ ) source_post += " ";
size_t source_pre_width = (source_width-source.size()-2)/2;
size_t source_post_width = (source_width-source.size()-2+1)/2;
if (source.size() > (size_t)width) {
source_pre_width = 0;
source_post_width = 0;
}
*out << source_pre.substr( source_pre.size()-source_pre_width, source_pre_width ) << " "
<< source.substr( 0, source_width -2 ) << " "
<< source_post.substr( 0, source_post_width ) << " | ";
// target
size_t target_width = (width-3)/2;
string target_pre = "";
string target = "";
string target_post = "";
for( size_t space=0; space<target_width/2; space++ ) target_pre += " ";
for( char i=0; i<m_target_start; i++ ) {
target_pre += " " + m_targetCorpus->GetWord( m_sentence_id, i);
}
for( char i=m_target_start; i<=m_target_end; i++ ) {
if (i>m_target_start) target += " ";
target += m_targetCorpus->GetWord( m_sentence_id, i);
}
for( char i=m_target_end+1; i<m_target_length; i++ ) {
if (i>m_target_end+1) target_post += " ";
target_post += m_targetCorpus->GetWord( m_sentence_id, i);
}
size_t target_pre_width = (target_width-target.size()-2)/2;
size_t target_post_width = (target_width-target.size()-2+1)/2;
if (target.size() > (size_t)width) {
target_pre_width = 0;
target_post_width = 0;
}
*out << target_pre.substr( target_pre.size()-target_pre_width, target_pre_width ) << " "
<< target.substr( 0, target_width -2 ) << " "
<< target_post.substr( 0, target_post_width ) << endl;
}
void PhrasePair::PrintTarget( ostream* out ) const
{
for( char i=m_target_start; i<=m_target_end; i++ ) {
if (i>m_target_start) *out << " ";
*out << m_targetCorpus->GetWord( m_sentence_id, i);
}
}
void PhrasePair::PrintHTML( ostream* out ) const
{
// source
int sentence_start = m_source_position - m_source_start;
char source_length = m_suffixArray->GetSentenceLength( m_suffixArray->GetSentence( m_source_position ) );
*out << "<tr><td align=right class=\"pp_source_left\">";
for( char i=0; i<m_source_start; i++ ) {
if (i>0) *out << " ";
*out << m_suffixArray->GetWord( sentence_start + i );
}
*out << "</td><td class=\"pp_source\">";
for( char i=m_source_start; i<=m_source_end; i++ ) {
if (i>m_source_start) *out << " ";
*out << m_suffixArray->GetWord( sentence_start + i );
}
*out << "</td><td class=\"pp_source_right\">";
for( char i=m_source_end+1; i<source_length; i++ ) {
if (i>m_source_end+1) *out << " ";
*out << m_suffixArray->GetWord( sentence_start + i );
}
// target
*out << "</td><td class=\"pp_target_left\">";
for( char i=0; i<m_target_start; i++ ) {
if (i>0) *out << " ";
*out << m_targetCorpus->GetWord( m_sentence_id, i);
}
*out << "</td><td class=\"pp_target\">";
for( char i=m_target_start; i<=m_target_end; i++ ) {
if (i>m_target_start) *out << " ";
*out << m_targetCorpus->GetWord( m_sentence_id, i);
}
*out << "</td><td class=\"pp_target_right\">";
for( char i=m_target_end+1; i<m_target_length; i++ ) {
if (i>m_target_end+1) *out << " ";
*out << m_targetCorpus->GetWord( m_sentence_id, i);
}
*out << "</td></tr>\n";
}
void PhrasePair::PrintClippedHTML( ostream* out, int width ) const
{
vector< WORD_ID >::const_iterator t;
// source
int sentence_start = m_source_position - m_source_start;
size_t source_width = (width+1)/2;
string source_pre = "";
string source = "";
string source_post = "";
for( char i=0; i<m_source_start; i++ ) {
source_pre += " " + m_suffixArray->GetWord( sentence_start + i );
}
for( char i=m_source_start; i<=m_source_end; i++ ) {
if (i>m_source_start) source += " ";
source += m_suffixArray->GetWord( sentence_start + i );
}
char source_length = m_suffixArray->GetSentenceLength( m_suffixArray->GetSentence( m_source_position ) );
for( char i=m_source_end+1; i<source_length; i++ ) {
if (i>m_source_end+1) source_post += " ";
source_post += m_suffixArray->GetWord( sentence_start + i );
}
size_t source_pre_width = (source_width-source.size())/2;
size_t source_post_width = (source_width-source.size()+1)/2;
// if phrase is too long, don't show any context
if (source.size() > (size_t)width) {
source_pre_width = 0;
source_post_width = 0;
}
// too long -> truncate and add "..."
if (source_pre.size() > source_pre_width) {
// first skip up to a space
while(source_pre_width>0 &&
source_pre.substr(source_pre.size()-source_pre_width,1) != " ") {
source_pre_width--;
}
source_pre = "..." + source_pre.substr( source_pre.size()-source_pre_width, source_pre_width );
}
if (source_post.size() > source_post_width) {
while(source_post_width>0 &&
source_post.substr(source_post_width-1,1) != " ") {
source_post_width--;
}
source_post = source_post.substr( 0, source_post_width ) + "...";
}
*out << "<tr><td class=\"pp_source_left\">"
<< source_pre
<< "</td><td class=\"pp_source\">"
<< source.substr( 0, source_width -2 )
<< "</td><td class=\"pp_source_right\">"
<< source_post
<< "</td>";
// target
size_t target_width = width/2;
string target_pre = "";
string target = "";
string target_post = "";
size_t target_pre_null_width = 0;
size_t target_post_null_width = 0;
for( char i=0; i<m_target_start; i++ ) {
WORD word = m_targetCorpus->GetWord( m_sentence_id, i);
target_pre += " " + word;
if (i >= m_target_start-m_pre_null)
target_pre_null_width += word.size() + 1;
}
for( char i=m_target_start; i<=m_target_end; i++ ) {
if (i>m_target_start) target += " ";
target += m_targetCorpus->GetWord( m_sentence_id, i);
}
for( char i=m_target_end+1; i<m_target_length; i++ ) {
if (i>m_target_end+1) target_post += " ";
WORD word = m_targetCorpus->GetWord( m_sentence_id, i);
target_post += word;
if (i-(m_target_end+1) < m_post_null) {
target_post_null_width += word.size() + 1;
}
}
size_t target_pre_width = (target_width-target.size())/2;
size_t target_post_width = (target_width-target.size()+1)/2;
if (target.size() > (size_t)width) {
target_pre_width = 0;
target_post_width = 0;
}
if (target_pre.size() < target_pre_width)
target_pre_width = target_pre.size();
else {
while(target_pre_width>0 &&
target_pre.substr(target_pre.size()-target_pre_width,1) != " ") {
target_pre_width--;
}
target_pre = "..." + target_pre.substr( target_pre.size()-target_pre_width, target_pre_width );
}
if (target_post.size() < target_post_width) {
target_post_width = target_post.size();
} else {
while(target_post_width>0 &&
target_post.substr(target_post_width-1,1) != " ") {
target_post_width--;
}
target_post = target_post.substr( 0, target_post_width ) + "...";
}
if (m_pre_null) {
//cerr << endl << "target_pre_width=" << target_pre_width << ", target_pre_null_width=" << target_pre_null_width << ", target_pre.size()=" << target_pre.size() << endl;
if (target_pre_width < target_pre.size())
target_pre_null_width -= target_pre.size()-target_pre_width;
target_pre = target_pre.substr(0,target_pre_width-target_pre_null_width)
+ "<span class=\"null_aligned\">"
+ target_pre.substr(target_pre_width-target_pre_null_width)
+ "</span>";
}
if (m_post_null) {
//cerr << endl << "target_post_width=" << target_post_width << ", target_post_null_width=" << target_post_null_width << ", target_post.size()=" << target_post.size() << endl;
if (target_post_null_width > target_post.size()) {
target_post_null_width = target_post.size();
}
target_post = "<span class=\"null_aligned\">"
+ target_post.substr(0,target_post_null_width)
+ "</span>"
+ target_post.substr(target_post_null_width);
}
*out << "<td class=\"pp_target_left\">"
<< target_pre
<< "</td><td class=\"pp_target\">"
<< target.substr( 0, target_width -2 )
<< "</td><td class=\"pp_target_right\">"
<< target_post
<< "</td></tr>"<< endl;
}

50
biconcor/PhrasePair.h Normal file
View File

@ -0,0 +1,50 @@
#pragma once
#include <iosfwd>
class Alignment;
class SuffixArray;
class TargetCorpus;
class PhrasePair
{
public:
typedef unsigned int INDEX;
private:
SuffixArray *m_suffixArray;
TargetCorpus *m_targetCorpus;
Alignment *m_alignment;
INDEX m_sentence_id;
char m_target_length;
INDEX m_source_position;
char m_source_start, m_source_end;
char m_target_start, m_target_end;
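// null-aligned boundary words, as set up in PhrasePairCollection::GetCollection:
// m_start_null/m_end_null count unaligned target words included in the target
// span, m_pre_null/m_post_null count those left outside it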
char m_start_null, m_end_null;
char m_pre_null, m_post_null;
public:
PhrasePair( SuffixArray *sa, TargetCorpus *tc, Alignment *a, INDEX sentence_id, char target_length, INDEX position, char source_start, char source_end, char target_start, char target_end, char start_null, char end_null, char pre_null, char post_null)
:m_suffixArray(sa)
,m_targetCorpus(tc)
,m_alignment(a)
,m_sentence_id(sentence_id)
,m_target_length(target_length)
,m_source_position(position)
,m_source_start(source_start)
,m_source_end(source_end)
,m_target_start(target_start)
,m_target_end(target_end)
,m_start_null(start_null)
,m_end_null(end_null)
,m_pre_null(pre_null)
,m_post_null(post_null) {
}
~PhrasePair () {}
void PrintTarget( std::ostream* out ) const;
void Print( std::ostream* out ) const;
void PrintPretty( std::ostream* out, int width ) const;
void PrintHTML( std::ostream* out ) const;
void PrintClippedHTML( std::ostream* out, int width ) const;
};

209
biconcor/PhrasePairCollection.cpp Normal file
View File

@ -0,0 +1,209 @@
#include "PhrasePairCollection.h"
#include <cstdlib>
#include <cstring>
#include <algorithm>
#include "Vocabulary.h"
#include "SuffixArray.h"
#include "TargetCorpus.h"
#include "Alignment.h"
#include "PhrasePair.h"
#include "Mismatch.h"
using namespace std;
PhrasePairCollection::PhrasePairCollection( SuffixArray *sa, TargetCorpus *tc, Alignment *a, int max_translation, int max_example )
:m_suffixArray(sa)
,m_targetCorpus(tc)
,m_alignment(a)
,m_size(0)
,m_max_lookup(10000) // maximum number of source occurrences sampled
,m_max_translation(max_translation) // max number of different distinct translations returned
,m_max_example(max_example) // max number of examples returned for each distinct translation
{}
PhrasePairCollection::~PhrasePairCollection()
{}
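// Looks up all occurrences of sourceString in the suffix array, extracts the
// aligned target phrase for each occurrence, and groups the resulting phrase
// pairs by distinct target string; occurrences without a consistent alignment
// are collected as Mismatch objects. Returns the number of occurrences inspected.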
int PhrasePairCollection::GetCollection( const vector< string >& sourceString )
{
INDEX first_match, last_match;
if (! m_suffixArray->FindMatches( sourceString, first_match, last_match )) {
return 0;
}
//cerr << "\tfirst match " << first_match << endl;
//cerr << "\tlast match " << last_match << endl;
INDEX found = last_match - first_match +1;
map< vector< WORD_ID >, INDEX > index;
int real_count = 0;
for( INDEX i=first_match; i<=last_match; i++ ) {
int position = m_suffixArray->GetPosition( i );
int source_start = m_suffixArray->GetWordInSentence( position );
int source_end = source_start + sourceString.size()-1;
INDEX sentence_id = m_suffixArray->GetSentence( position );
int sentence_length = m_suffixArray->GetSentenceLength( sentence_id );
int target_length = m_targetCorpus->GetSentenceLength( sentence_id );
//cerr << "match " << (i-first_match)
//<< " in sentence " << sentence_id
//<< ", starting at word " << source_start
//<< " of " << sentence_length
//<< ". target sentence has " << target_length << " words.";
int target_start, target_end, pre_null, post_null;
if (m_alignment->PhraseAlignment( sentence_id, target_length, source_start, source_end, target_start, target_end, pre_null, post_null)) {
//cerr << " aligned to [" << (int)target_start << "," << (int)target_end << "]";
//cerr << " +(" << (int)pre_null << "," << (int)post_null << ")";
bool null_boundary_words = false;
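// feature flag (currently always false): when enabled, additional variants
// that attach neighbouring null-aligned target words would be generated;
// as it is, only the pre == 0 / post == 0 iteration runs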
for (int pre = 0; pre <= pre_null && (pre == 0 || null_boundary_words); pre++ ) {
for (int post = 0; post <= post_null && (post == 0 || null_boundary_words); post++ ) {
vector< WORD_ID > targetString;
//cerr << "; ";
for (int target = target_start - pre; target <= target_end + post; target++) {
targetString.push_back( m_targetCorpus->GetWordId( sentence_id, target) );
//cerr << m_targetCorpus->GetWord( sentence_id, target) << " ";
}
PhrasePair *phrasePair = new PhrasePair( m_suffixArray, m_targetCorpus, m_alignment, sentence_id, target_length, position, source_start, source_end, target_start-pre, target_end+post, pre, post, pre_null-pre, post_null-post);
// matchCollection.Add( sentence_id, )
if (index.find( targetString ) == index.end()) {
index[targetString] = m_collection.size();
vector< PhrasePair* > emptyVector;
m_collection.push_back( emptyVector );
}
m_collection[ index[targetString] ].push_back( phrasePair );
m_size++;
}
}
} else {
//cerr << "mismatch " << (i-first_match)
// << " in sentence " << sentence_id
// << ", starting at word " << source_start
// << " of " << sentence_length
// << ". target sentence has " << target_length << " words.";
Mismatch *mismatch = new Mismatch( m_suffixArray, m_targetCorpus, m_alignment, sentence_id, position, sentence_length, target_length, source_start, source_end );
if (mismatch->Unaligned())
m_unaligned.push_back( mismatch );
else
m_mismatch.push_back( mismatch );
}
//cerr << endl;
if (found > (INDEX)m_max_lookup) {
i += found/m_max_lookup-1;
}
real_count++;
}
sort(m_collection.begin(), m_collection.end(), CompareBySize());
return real_count;
}
void PhrasePairCollection::Print(bool pretty) const
{
vector< vector<PhrasePair*> >::const_iterator ppWithSameTarget;
int i=0;
for( ppWithSameTarget = m_collection.begin(); ppWithSameTarget != m_collection.end() && i<m_max_translation; i++, ppWithSameTarget++ ) {
(*(ppWithSameTarget->begin()))->PrintTarget( &cout );
int count = ppWithSameTarget->size();
cout << "(" << count << ")" << endl;
vector< PhrasePair* >::const_iterator p = ppWithSameTarget->begin();
for(int j=0; j<ppWithSameTarget->size() && j<m_max_example; j++, p++ ) {
if (pretty) {
(*p)->PrintPretty( &cout, 100 );
} else {
(*p)->Print( &cout );
}
if (ppWithSameTarget->size() > m_max_example) {
p += ppWithSameTarget->size()/m_max_example-1;
}
}
}
}
void PhrasePairCollection::PrintHTML() const
{
int pp_target = 0;
bool singleton = false;
// loop over all translations
vector< vector<PhrasePair*> >::const_iterator ppWithSameTarget;
for( ppWithSameTarget = m_collection.begin(); ppWithSameTarget != m_collection.end() && pp_target<m_max_translation; ppWithSameTarget++, pp_target++ ) {
int count = ppWithSameTarget->size();
if (!singleton) {
if (count == 1) {
singleton = true;
cout << "<p class=\"pp_singleton_header\">singleton"
<< (m_collection.end() - ppWithSameTarget==1?"":"s") << " ("
<< (m_collection.end() - ppWithSameTarget)
<< "/" << m_size << ")</p>";
} else {
cout << "<p class=\"pp_target_header\">";
(*(ppWithSameTarget->begin()))->PrintTarget( &cout );
cout << " (" << count << "/" << m_size << ")" << endl;
cout << "<p><div id=\"pp_" << pp_target << "\">";
}
cout << "<table align=\"center\">";
}
vector< PhrasePair* >::const_iterator p;
// loop over all sentences where translation occurs
int pp=0;
int i=0;
for(p = ppWithSameTarget->begin(); i<10 && pp<count && p != ppWithSameTarget->end(); p++, pp++, i++ ) {
(*p)->PrintClippedHTML( &cout, 160 );
if (count > m_max_example) {
p += count/m_max_example-1;
pp += count/m_max_example-1;
}
}
if (i == 10 && pp < count) {
// extended table
cout << "<tr><td colspan=7 align=center class=\"pp_more\" onclick=\"javascript:document.getElementById('pp_" << pp_target << "').style.display = 'none'; document.getElementById('pp_ext_" << pp_target << "').style.display = 'block';\">(more)</td></tr></table></div>";
cout << "<div id=\"pp_ext_" << pp_target << "\" style=\"display:none;\";\">";
cout << "<table align=\"center\">";
for(i=0, pp=0, p = ppWithSameTarget->begin(); i<m_max_example && pp<count && p != ppWithSameTarget->end(); p++, pp++, i++ ) {
(*p)->PrintClippedHTML( &cout, 160 );
if (count > m_max_example) {
p += count/m_max_example-1;
pp += count/m_max_example-1;
}
}
}
if (!singleton) cout << "</table></div>\n";
if (!singleton && pp_target == 9) {
cout << "<div id=\"pp_toggle\" onclick=\"javascript:document.getElementById('pp_toggle').style.display = 'none'; document.getElementById('pp_additional').style.display = 'block';\">";
cout << "<p class=\"pp_target_header\">(more)</p></div>";
cout << "<div id=\"pp_additional\" style=\"display:none;\";\">";
}
}
if (singleton) cout << "</table></div>\n";
else if (pp_target > 9) cout << "</div>";
size_t max_mismatch = m_max_example/3;
// unaligned phrases
if (m_unaligned.size() > 0) {
cout << "<p class=\"pp_singleton_header\">unaligned"
<< " (" << (m_unaligned.size()) << ")</p>";
cout << "<table align=\"center\">";
int step_size = 1;
if (m_unaligned.size() > max_mismatch)
step_size = (m_unaligned.size()+max_mismatch-1) / max_mismatch;
for(size_t i=0; i<m_unaligned.size(); i+=step_size)
m_unaligned[i]->PrintClippedHTML( &cout, 160 );
cout << "</table>";
}
// mismatched phrases
if (m_mismatch.size() > 0) {
cout << "<p class=\"pp_singleton_header\">mismatched"
<< " (" << (m_mismatch.size()) << ")</p>";
cout << "<table align=\"center\">";
int step_size = 1;
if (m_mismatch.size() > max_mismatch)
step_size = (m_mismatch.size()+max_mismatch-1) / max_mismatch;
for(size_t i=0; i<m_mismatch.size(); i+=step_size)
m_mismatch[i]->PrintClippedHTML( &cout, 160 );
cout << "</table>";
}
}

46
biconcor/PhrasePairCollection.h Normal file
View File

@ -0,0 +1,46 @@
#pragma once
#include <vector>
#include <string>
class Alignment;
class PhrasePair;
class SuffixArray;
class TargetCorpus;
class Mismatch;
class PhrasePairCollection
{
public:
typedef unsigned int INDEX;
private:
SuffixArray *m_suffixArray;
TargetCorpus *m_targetCorpus;
Alignment *m_alignment;
std::vector<std::vector<PhrasePair*> > m_collection;
std::vector< Mismatch* > m_mismatch, m_unaligned;
int m_size;
int m_max_lookup;
int m_max_translation;
int m_max_example;
// No copying allowed.
PhrasePairCollection(const PhrasePairCollection&);
void operator=(const PhrasePairCollection&);
public:
PhrasePairCollection ( SuffixArray *, TargetCorpus *, Alignment *, int, int );
~PhrasePairCollection ();
int GetCollection( const std::vector<std::string >& sourceString );
void Print(bool pretty) const;
void PrintHTML() const;
};
// sorting helper
struct CompareBySize {
bool operator()(const std::vector<PhrasePair*>& a, const std::vector<PhrasePair*>& b ) const {
return a.size() > b.size();
}
};

511
biconcor/SuffixArray.cpp Normal file
View File

@ -0,0 +1,511 @@
#include "SuffixArray.h"
#include <fstream>
#include <string>
#include <cstdlib>
#include <cstring>
namespace
{
const int LINE_MAX_LENGTH = 10000;
} // namespace
using namespace std;
SuffixArray::SuffixArray()
: m_array(NULL),
m_index(NULL),
m_buffer(NULL),
m_wordInSentence(NULL),
m_sentence(NULL),
m_sentenceLength(NULL),
m_document(NULL),
m_documentName(NULL),
m_documentNameLength(0),
m_documentCount(0),
m_useDocument(false),
m_vcb(),
m_size(0),
m_sentenceCount(0) { }
SuffixArray::~SuffixArray()
{
free(m_array);
free(m_index);
free(m_wordInSentence);
free(m_sentence);
free(m_sentenceLength);
free(m_document);
free(m_documentName);
}
void SuffixArray::Create(const string& fileName )
{
m_vcb.StoreIfNew( "<uNk>" );
m_endOfSentence = m_vcb.StoreIfNew( "<s>" );
ifstream textFile;
char line[LINE_MAX_LENGTH];
// count the number of words first;
textFile.open(fileName.c_str());
if (!textFile) {
cerr << "Error: no such file or directory " << fileName << endl;
exit(1);
}
// first pass through data: get size
istream *fileP = &textFile;
m_size = 0;
m_sentenceCount = 0;
m_documentCount = 0;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, LINE_MAX_LENGTH, '\n');
if (fileP->eof()) break;
if (m_useDocument && ProcessDocumentLine(line,0)) continue;
vector< WORD_ID > words = m_vcb.Tokenize( line );
m_size += words.size() + 1;
m_sentenceCount++;
}
textFile.close();
cerr << m_size << " words (incl. sentence boundaries)" << endl;
if (m_useDocument) {
cerr << m_documentCount << " documents" << endl;
if (m_documentCount == 0) {
cerr << "Error: no documents found, aborting." << endl;
exit(1);
}
}
// allocate memory
m_array = (WORD_ID*) calloc( sizeof( WORD_ID ), m_size );
m_index = (INDEX*) calloc( sizeof( INDEX ), m_size );
m_wordInSentence = (char*) calloc( sizeof( char ), m_size );
m_sentence = (INDEX*) calloc( sizeof( INDEX ), m_size );
m_sentenceLength = (char*) calloc( sizeof( char ), m_sentenceCount );
CheckAllocation(m_array != NULL, "m_array");
CheckAllocation(m_index != NULL, "m_index");
CheckAllocation(m_wordInSentence != NULL, "m_wordInSentence");
CheckAllocation(m_sentence != NULL, "m_sentence");
CheckAllocation(m_sentenceLength != NULL, "m_sentenceLength");
if (m_useDocument) {
m_document = (INDEX*) calloc( sizeof( INDEX ), m_documentCount );
m_documentName = (INDEX*) calloc( sizeof( INDEX ), m_documentCount );
m_documentNameBuffer = (char*) calloc( sizeof( char ), m_documentNameLength );
CheckAllocation(m_document != NULL, "m_document");
CheckAllocation(m_documentName != NULL, "m_documentName");
CheckAllocation(m_documentNameBuffer != NULL, "m_documentNameBuffer");
}
// second pass through data: fill the arrays
int wordIndex = 0;
int sentenceId = 0;
m_documentNameLength = 0; // re-use as counter
m_documentCount = 0; // re-use as counter
textFile.open(fileName.c_str());
fileP = &textFile;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, LINE_MAX_LENGTH, '\n');
if (fileP->eof()) break;
if (m_useDocument && ProcessDocumentLine(line,sentenceId)) continue;
vector< WORD_ID > words = m_vcb.Tokenize( line );
vector< WORD_ID >::const_iterator i;
for( i=words.begin(); i!=words.end(); i++) {
m_index[ wordIndex ] = wordIndex;
m_sentence[ wordIndex ] = sentenceId;
m_wordInSentence[ wordIndex ] = i-words.begin();
m_array[ wordIndex++ ] = *i;
}
m_index[ wordIndex ] = wordIndex;
m_array[ wordIndex++ ] = m_endOfSentence;
m_sentenceLength[ sentenceId++ ] = words.size();
}
textFile.close();
cerr << "done reading " << wordIndex << " words, " << sentenceId << " sentences." << endl;
// List(0,9);
// sort
m_buffer = (INDEX*) calloc( sizeof( INDEX ), m_size );
if (m_buffer == NULL) {
cerr << "Error: cannot allocate memory to m_buffer" << endl;
exit(1);
}
Sort( 0, m_size-1 );
free( m_buffer );
cerr << "done sorting" << endl;
}
// very specific code to deal with common crawl document ids;
// a document line is expected to look like "<32-hex-digit hash> <float score> <url>",
// and this returns true (recording the document on the second pass) iff the line matches
bool SuffixArray::ProcessDocumentLine( const char *line, const size_t sentenceId )
{
size_t i;
// first 32 characters are hex-hash
for(i=0; i<32; i++) {
if ((line[i] < '0' || line[i] > '9') && (line[i] < 'a' || line[i] > 'f')) {
return false;
}
}
if (line[i++] != ' ') return false;
// second token is float
for (; line[i] != ' ' && line[i] != 0; i++) {
if (line[i] != '.' && (line[i] < '0' || line[i] > '9')) {
return false;
}
}
i++;
// last token is url (=name)
size_t startName = i;
for (; line[i] != ' ' && line[i] != 0; i++) {}
if (line[i] == ' ') return false;
size_t endName = i+1; // include '\0'
// second pass: record name and sentence number
if (m_document != NULL) {
m_documentName[m_documentCount] = m_documentNameLength;
for(size_t i=startName; i<endName; i++) {
m_documentNameBuffer[m_documentNameLength + i-startName] = line[i];
}
m_document[m_documentCount] = sentenceId;
}
m_documentNameLength += endName-startName;
m_documentCount++;
return true;
}
// good ol' merge sort (recursive; m_buffer provides the scratch space for merging)
void SuffixArray::Sort(INDEX start, INDEX end)
{
if (start == end) return;
INDEX mid = (start+end+1)/2;
Sort( start, mid-1 );
Sort( mid, end );
// merge
INDEX i = start;
INDEX j = mid;
INDEX k = 0;
INDEX length = end-start+1;
while( k<length ) {
if (i == mid ) {
m_buffer[ k++ ] = m_index[ j++ ];
} else if (j > end ) {
m_buffer[ k++ ] = m_index[ i++ ];
} else {
if (CompareIndex( m_index[i], m_index[j] ) < 0) {
m_buffer[ k++ ] = m_index[ i++ ];
} else {
m_buffer[ k++ ] = m_index[ j++ ];
}
}
}
memcpy( ((char*)m_index) + sizeof( INDEX ) * start,
((char*)m_buffer), sizeof( INDEX ) * (end-start+1) );
}
int SuffixArray::CompareIndex( INDEX a, INDEX b ) const
{
// skip over identical words
INDEX offset = 0;
while( a+offset < m_size &&
b+offset < m_size &&
m_array[ a+offset ] == m_array[ b+offset ] ) {
offset++;
}
if( a+offset == m_size ) return -1;
if( b+offset == m_size ) return 1;
return CompareWord( m_array[ a+offset ], m_array[ b+offset ] );
}
inline int SuffixArray::CompareWord( WORD_ID a, WORD_ID b ) const
{
return m_vcb.GetWord(a).compare( m_vcb.GetWord(b) );
}
int SuffixArray::Count( const vector< WORD > &phrase )
{
INDEX dummy;
return LimitedCount( phrase, m_size, dummy, dummy, 0, m_size-1 );
}
bool SuffixArray::MinCount( const vector< WORD > &phrase, INDEX min )
{
INDEX dummy;
return (INDEX)LimitedCount( phrase, min, dummy, dummy, 0, m_size-1 ) >= min;
}
bool SuffixArray::Exists( const vector< WORD > &phrase )
{
INDEX dummy;
return LimitedCount( phrase, 1, dummy, dummy, 0, m_size-1 ) == 1;
}
int SuffixArray::FindMatches( const vector< WORD > &phrase, INDEX &firstMatch, INDEX &lastMatch, INDEX search_start, INDEX search_end )
{
return LimitedCount( phrase, m_size, firstMatch, lastMatch, search_start, search_end );
}
int SuffixArray::LimitedCount( const vector< WORD > &phrase, INDEX min, INDEX &firstMatch, INDEX &lastMatch, INDEX search_start, INDEX search_end )
{
// cerr << "FindFirst\n";
INDEX start = search_start;
INDEX end = (search_end == (INDEX)-1) ? (m_size-1) : search_end;
INDEX mid = FindFirst( phrase, start, end );
// cerr << "done\n";
if (mid == m_size) return 0; // no matches
if (min == 1) return 1; // only existence check
int matchCount = 1;
//cerr << "before...\n";
firstMatch = FindLast( phrase, mid, start, -1 );
matchCount += mid - firstMatch;
//cerr << "after...\n";
lastMatch = FindLast( phrase, mid, end, 1 );
matchCount += lastMatch - mid;
return matchCount;
}
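// Binary search outward from a known match: 'start' must match the phrase,
// 'direction' is +1 or -1, and the return value is the index of the last
// entry in that direction that still matches.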
SuffixArray::INDEX SuffixArray::FindLast( const vector< WORD > &phrase, INDEX start, INDEX end, int direction )
{
end += direction;
while(true) {
INDEX mid = ( start + end + (direction>0 ? 0 : 1) )/2;
int match = Match( phrase, mid );
int matchNext = Match( phrase, mid+direction );
//cerr << "\t" << start << ";" << mid << ";" << end << " -> " << match << "," << matchNext << endl;
if (match == 0 && matchNext != 0) return mid;
if (match == 0) // mid point is a match
start = mid;
else
end = mid;
}
}
SuffixArray::INDEX SuffixArray::FindFirst( const vector< WORD > &phrase, INDEX &start, INDEX &end )
{
while(true) {
INDEX mid = ( start + end + 1 )/2;
//cerr << "FindFirst(" << start << ";" << mid << ";" << end << ")\n";
int match = Match( phrase, mid );
if (match == 0) return mid;
if (start >= end && match != 0 ) return m_size;
if (match > 0)
start = mid+1;
else
end = mid-1;
}
}
int SuffixArray::Match( const vector< WORD > &phrase, INDEX index )
{
INDEX pos = m_index[ index ];
for(INDEX i=0; i<phrase.size() && i+pos<m_size; i++) {
int match = CompareWord( m_vcb.GetWordID( phrase[i] ), m_array[ pos+i ] );
// cerr << "{" << index << "+" << i << "," << pos+i << ":" << match << "}" << endl;
if (match != 0)
return match;
}
return 0;
}
void SuffixArray::List(INDEX start, INDEX end)
{
for(INDEX i=start; i<=end; i++) {
INDEX pos = m_index[ i ];
// cerr << i << ":" << pos << "\t";
for(int j=0; j<5 && j+pos<m_size; j++) {
cout << " " << m_vcb.GetWord( m_array[ pos+j ] );
}
// cerr << "\n";
}
}
void SuffixArray::PrintSentenceMatches( const std::vector< WORD > &phrase )
{
cout << "QUERY\t";
for(size_t i=0; i<phrase.size(); i++) {
if (i>0) cout << " ";
cout << phrase[i];
}
cout << '\t';
INDEX start = 0;
INDEX end = m_size-1;
INDEX mid = FindFirst( phrase, start, end );
if (mid == m_size) { // no matches
cout << "0 matches" << endl;
return;
}
INDEX firstMatch = FindLast( phrase, mid, start, -1 );
INDEX lastMatch = FindLast( phrase, mid, end, 1 );
// loop through all matches
cout << (lastMatch-firstMatch+1) << " matches" << endl;
for(INDEX i=firstMatch; i<=lastMatch; i++) {
// get sentence information
INDEX pos = GetPosition( i );
INDEX start = pos - GetWordInSentence( pos );
char length = GetSentenceLength( GetSentence( pos ) );
// print document name
if (m_useDocument) {
INDEX sentence = GetSentence( pos );
INDEX document = GetDocument( sentence );
PrintDocumentName( document );
cout << '\t';
}
// print sentence
for(char i=0; i<length; i++) {
if (i>0) cout << " ";
cout << GetWord( start + i );
}
cout << endl;
}
}
SuffixArray::INDEX SuffixArray::GetDocument( INDEX sentence ) const
{
// binary search
INDEX min = 0;
INDEX max = m_documentCount-1;
if (sentence >= m_document[max]) {
return max;
}
while(true) {
INDEX mid = (min + max) / 2;
if (sentence >= m_document[mid] && sentence < m_document[mid+1]) {
return mid;
}
if (sentence < m_document[mid]) {
max = mid-1;
} else {
min = mid+1;
}
}
}
void SuffixArray::Save(const string& fileName ) const
{
FILE *pFile = fopen ( fileName.c_str() , "w" );
if (pFile == NULL) Error("cannot open",fileName);
fwrite( &m_size, sizeof(INDEX), 1, pFile );
fwrite( m_array, sizeof(WORD_ID), m_size, pFile ); // corpus
fwrite( m_index, sizeof(INDEX), m_size, pFile ); // suffix array
fwrite( m_wordInSentence, sizeof(char), m_size, pFile); // word index
fwrite( m_sentence, sizeof(INDEX), m_size, pFile); // sentence index
fwrite( &m_sentenceCount, sizeof(INDEX), 1, pFile );
fwrite( m_sentenceLength, sizeof(char), m_sentenceCount, pFile); // sentence length
char useDocument = m_useDocument; // serialize the bool as a char for a stable on-disk size
fwrite( &useDocument, sizeof(char), 1, pFile );
if (m_useDocument) {
fwrite( &m_documentCount, sizeof(INDEX), 1, pFile );
fwrite( m_document, sizeof(INDEX), m_documentCount, pFile );
fwrite( m_documentName, sizeof(INDEX), m_documentCount, pFile );
fwrite( &m_documentNameLength, sizeof(INDEX), 1, pFile );
fwrite( m_documentNameBuffer, sizeof(char), m_documentNameLength, pFile );
}
fclose( pFile );
m_vcb.Save( fileName + ".src-vcb" );
}
void SuffixArray::Load(const string& fileName )
{
FILE *pFile = fopen ( fileName.c_str() , "r" );
if (pFile == NULL) Error("no such file or directory", fileName);
cerr << "loading from " << fileName << endl;
fread( &m_size, sizeof(INDEX), 1, pFile )
|| Error("could not read m_size from", fileName);
cerr << "words in corpus: " << m_size << endl;
m_array = (WORD_ID*) calloc( sizeof( WORD_ID ), m_size );
m_index = (INDEX*) calloc( sizeof( INDEX ), m_size );
m_wordInSentence = (char*) calloc( sizeof( char ), m_size );
m_sentence = (INDEX*) calloc( sizeof( INDEX ), m_size );
CheckAllocation(m_array != NULL, "m_array");
CheckAllocation(m_index != NULL, "m_index");
CheckAllocation(m_wordInSentence != NULL, "m_wordInSentence");
CheckAllocation(m_sentence != NULL, "m_sentence");
fread( m_array, sizeof(WORD_ID), m_size, pFile ) // corpus
|| Error("could not read m_array from", fileName);
fread( m_index, sizeof(INDEX), m_size, pFile ) // suffix array
|| Error("could not read m_index from", fileName);
fread( m_wordInSentence, sizeof(char), m_size, pFile) // word index
|| Error("could not read m_wordInSentence from", fileName);
fread( m_sentence, sizeof(INDEX), m_size, pFile ) // sentence index
|| Error("could not read m_sentence from", fileName);
fread( &m_sentenceCount, sizeof(INDEX), 1, pFile )
|| Error("could not read m_sentenceCount from", fileName);
cerr << "sentences in corpus: " << m_sentenceCount << endl;
m_sentenceLength = (char*) calloc( sizeof( char ), m_sentenceCount );
CheckAllocation(m_sentenceLength != NULL, "m_sentenceLength");
fread( m_sentenceLength, sizeof(char), m_sentenceCount, pFile) // sentence length
|| Error("could not read m_sentenceLength from", fileName);
if (m_useDocument) { // do not read it when you do not need it
char useDocument;
fread( &useDocument, sizeof(char), 1, pFile )
|| Error("could not read m_useDocument from", fileName);
if (!useDocument) {
cerr << "Error: stored suffix array does not have a document index\n";
exit(1);
}
fread( &m_documentCount, sizeof(INDEX), 1, pFile )
|| Error("could not read m_documentCount from", fileName);
m_document = (INDEX*) calloc( sizeof( INDEX ), m_documentCount );
m_documentName = (INDEX*) calloc( sizeof( INDEX ), m_documentCount );
CheckAllocation(m_document != NULL, "m_document");
CheckAllocation(m_documentName != NULL, "m_documentName");
fread( m_document, sizeof(INDEX), m_documentCount, pFile )
|| Error("could not read m_document from", fileName);
fread( m_documentName, sizeof(INDEX), m_documentCount, pFile )
|| Error("could not read m_documentName from", fileName);
fread( &m_documentNameLength, sizeof(INDEX), 1, pFile )
|| Error("could not read m_documentNameLength from", fileName);
m_documentNameBuffer = (char*) calloc( sizeof( char ), m_documentNameLength );
CheckAllocation(m_documentNameBuffer != NULL, "m_documentNameBuffer");
fread( m_documentNameBuffer, sizeof(char), m_documentNameLength, pFile )
|| Error("could not read m_documentNameBuffer from", fileName);
}
fclose( pFile );
m_vcb.Load( fileName + ".src-vcb" );
}
void SuffixArray::CheckAllocation( bool check, const char *dataStructure ) const
{
if (check) return;
cerr << "Error: could not allocate memory for " << dataStructure << endl;
exit(1);
}
bool SuffixArray::Error( const char *message, const string &fileName) const
{
cerr << "Error: " << message << " " << fileName << endl;
exit(1);
return true; // yeah, i know.
}

82
biconcor/SuffixArray.h Normal file
View File

@ -0,0 +1,82 @@
#pragma once
#include "Vocabulary.h"
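// Suffix array over the source corpus: m_array stores the corpus as word ids
// (with an end-of-sentence marker between sentences), and m_index stores the
// corpus positions sorted by the lexicographic order of their suffixes.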
class SuffixArray
{
public:
typedef unsigned int INDEX;
private:
WORD_ID *m_array;
INDEX *m_index;
INDEX *m_buffer;
char *m_wordInSentence;
INDEX *m_sentence;
char *m_sentenceLength;
WORD_ID m_endOfSentence;
INDEX *m_document;
INDEX *m_documentName;
char *m_documentNameBuffer;
size_t m_documentNameLength;
size_t m_documentCount;
bool m_useDocument;
Vocabulary m_vcb;
INDEX m_size;
INDEX m_sentenceCount;
// No copying allowed.
SuffixArray(const SuffixArray&);
void operator=(const SuffixArray&);
public:
SuffixArray();
~SuffixArray();
void Create(const std::string& fileName );
bool ProcessDocumentLine( const char* const, const size_t );
void Sort(INDEX start, INDEX end);
int CompareIndex( INDEX a, INDEX b ) const;
inline int CompareWord( WORD_ID a, WORD_ID b ) const;
int Count( const std::vector< WORD > &phrase );
bool MinCount( const std::vector< WORD > &phrase, INDEX min );
bool Exists( const std::vector< WORD > &phrase );
int FindMatches( const std::vector< WORD > &phrase, INDEX &firstMatch, INDEX &lastMatch, INDEX search_start = 0, INDEX search_end = -1 );
int LimitedCount( const std::vector< WORD > &phrase, INDEX min, INDEX &firstMatch, INDEX &lastMatch, INDEX search_start = 0, INDEX search_end = -1 );
INDEX FindFirst( const std::vector< WORD > &phrase, INDEX &start, INDEX &end );
INDEX FindLast( const std::vector< WORD > &phrase, INDEX start, INDEX end, int direction );
int Match( const std::vector< WORD > &phrase, INDEX index );
void List( INDEX start, INDEX end );
void PrintSentenceMatches( const std::vector< WORD > &phrase );
inline INDEX GetPosition( INDEX index ) const {
return m_index[ index ];
}
inline INDEX GetSentence( INDEX position ) const {
return m_sentence[position];
}
inline char GetWordInSentence( INDEX position ) const {
return m_wordInSentence[position];
}
inline char GetSentenceLength( INDEX sentenceId ) const {
return m_sentenceLength[sentenceId];
}
inline INDEX GetSize() const {
return m_size;
}
inline WORD GetWord( INDEX position ) const {
return m_vcb.GetWord( m_array[position] );
}
void UseDocument() {
m_useDocument = true;
}
INDEX GetDocument( INDEX sentence ) const;
void PrintDocumentName( INDEX document ) {
for(INDEX i=m_documentName[ document ]; m_documentNameBuffer[i] != 0; i++) {
std::cout << m_documentNameBuffer[ i ];
}
}
void Save(const std::string& fileName ) const;
void Load(const std::string& fileName );
void CheckAllocation(bool, const char *dataStructure) const;
bool Error( const char* message, const std::string& fileName) const;
};

173
biconcor/TargetCorpus.cpp Normal file
View File

@ -0,0 +1,173 @@
#include "TargetCorpus.h"
#include <fstream>
#include <string>
#include <cstdlib>
#include <cstring>
namespace
{
const int LINE_MAX_LENGTH = 10000;
} // namespace
using namespace std;
TargetCorpus::TargetCorpus()
: m_array(NULL),
m_sentenceEnd(NULL),
m_vcb(),
m_size(0),
m_sentenceCount(0) {}
TargetCorpus::~TargetCorpus()
{
free(m_array);
free(m_sentenceEnd);
}
void TargetCorpus::Create(const string& fileName )
{
ifstream textFile;
char line[LINE_MAX_LENGTH];
// count the number of words first;
textFile.open(fileName.c_str());
if (!textFile) {
cerr << "no such file or directory " << fileName << endl;
exit(1);
}
istream *fileP = &textFile;
m_size = 0;
m_sentenceCount = 0;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, LINE_MAX_LENGTH, '\n');
if (fileP->eof()) break;
vector< WORD_ID > words = m_vcb.Tokenize( line );
m_size += words.size();
m_sentenceCount++;
}
textFile.close();
cerr << m_size << " words" << endl;
// allocate memory
m_array = (WORD_ID*) calloc( sizeof( WORD_ID ), m_size );
m_sentenceEnd = (INDEX*) calloc( sizeof( INDEX ), m_sentenceCount );
if (m_array == NULL) {
cerr << "cannot allocate memory to m_array" << endl;
exit(1);
}
if (m_sentenceEnd == NULL) {
cerr << "cannot allocate memory to m_sentenceEnd" << endl;
exit(1);
}
// fill the array
int wordIndex = 0;
int sentenceId = 0;
textFile.open(fileName.c_str());
if (!textFile) {
cerr << "no such file or directory " << fileName << endl;
exit(1);
}
fileP = &textFile;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, LINE_MAX_LENGTH, '\n');
if (fileP->eof()) break;
vector< WORD_ID > words = m_vcb.Tokenize( line );
vector< WORD_ID >::const_iterator i;
for( i=words.begin(); i!=words.end(); i++) {
m_array[ wordIndex++ ] = *i;
}
m_sentenceEnd[ sentenceId++ ] = wordIndex-1;
}
textFile.close();
cerr << "done reading " << wordIndex << " words, " << sentenceId << " sentences." << endl;
}
WORD TargetCorpus::GetWordFromId( const WORD_ID id ) const
{
return m_vcb.GetWord( id );
}
WORD TargetCorpus::GetWord( INDEX sentence, int word ) const
{
return m_vcb.GetWord( GetWordId( sentence, word ) );
}
WORD_ID TargetCorpus::GetWordId( INDEX sentence, int word ) const
{
if (sentence == 0) {
return m_array[ word ];
}
return m_array[ m_sentenceEnd[ sentence-1 ] + 1 + word ] ;
}
char TargetCorpus::GetSentenceLength( INDEX sentence ) const
{
if (sentence == 0) {
return (char) m_sentenceEnd[ 0 ]+1;
}
return (char) ( m_sentenceEnd[ sentence ] - m_sentenceEnd[ sentence-1 ] );
}
void TargetCorpus::Save(const string& fileName ) const
{
FILE *pFile = fopen ( (fileName + ".tgt").c_str() , "w" );
if (pFile == NULL) {
cerr << "Cannot open " << fileName << endl;
exit(1);
}
fwrite( &m_size, sizeof(INDEX), 1, pFile );
fwrite( m_array, sizeof(WORD_ID), m_size, pFile ); // corpus
fwrite( &m_sentenceCount, sizeof(INDEX), 1, pFile );
fwrite( m_sentenceEnd, sizeof(INDEX), m_sentenceCount, pFile); // sentence index
fclose( pFile );
m_vcb.Save( fileName + ".tgt-vcb" );
}
void TargetCorpus::Load(const string& fileName )
{
FILE *pFile = fopen ( (fileName + ".tgt").c_str() , "r" );
if (pFile == NULL) {
cerr << "Cannot open " << fileName << endl;
exit(1);
}
cerr << "loading from " << fileName << ".tgt" << endl;
fread( &m_size, sizeof(INDEX), 1, pFile );
cerr << "words in corpus: " << m_size << endl;
m_array = (WORD_ID*) calloc( sizeof(WORD_ID), m_size );
if (m_array == NULL) {
cerr << "cannot allocate memory to m_array" << endl;
exit(1);
}
fread( m_array, sizeof(WORD_ID), m_size, pFile ); // corpus
fread( &m_sentenceCount, sizeof(INDEX), 1, pFile );
cerr << "sentences in corpus: " << m_sentenceCount << endl;
m_sentenceEnd = (INDEX*) calloc( sizeof(INDEX), m_sentenceCount );
if (m_sentenceEnd == NULL) {
cerr << "cannot allocate memory to m_sentenceEnd" << endl;
exit(1);
}
fread( m_sentenceEnd, sizeof(INDEX), m_sentenceCount, pFile); // sentence index
fclose( pFile );
m_vcb.Load( fileName + ".tgt-vcb" );
}

32
biconcor/TargetCorpus.h Normal file
View File

@ -0,0 +1,32 @@
#pragma once
#include "Vocabulary.h"
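// Target side of the parallel corpus: a flat array of word ids, where
// m_sentenceEnd[i] is the index of the last word of sentence i.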
class TargetCorpus
{
public:
typedef unsigned int INDEX;
private:
WORD_ID *m_array;
INDEX *m_sentenceEnd;
Vocabulary m_vcb;
INDEX m_size;
INDEX m_sentenceCount;
// No copying allowed.
TargetCorpus(const TargetCorpus&);
void operator=(const TargetCorpus&);
public:
TargetCorpus();
~TargetCorpus();
void Create(const std::string& fileName );
WORD GetWordFromId( const WORD_ID id ) const;
WORD GetWord( INDEX sentence, int word ) const;
WORD_ID GetWordId( INDEX sentence, int word ) const;
char GetSentenceLength( INDEX sentence ) const;
void Load(const std::string& fileName );
void Save(const std::string& fileName ) const;
};

101
biconcor/Vocabulary.cpp Normal file
View File

@ -0,0 +1,101 @@
// $Id: Vocabulary.cpp 1565 2008-02-22 14:42:01Z bojar $
#include "Vocabulary.h"
#include <fstream>
namespace
{
const int MAX_LENGTH = 10000;
} // namespace
using namespace std;
// as in beamdecoder/tables.cpp
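// Tokenize splits on spaces and tabs and maps each token to its word id,
// adding unseen tokens to the vocabulary: e.g. "the house" yields the
// (possibly freshly assigned) ids of "the" and "house".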
vector<WORD_ID> Vocabulary::Tokenize( const char input[] )
{
vector< WORD_ID > token;
bool betweenWords = true;
int start=0;
int i=0;
for(; input[i] != '\0'; i++) {
bool isSpace = (input[i] == ' ' || input[i] == '\t');
if (!isSpace && betweenWords) {
start = i;
betweenWords = false;
} else if (isSpace && !betweenWords) {
token.push_back( StoreIfNew ( string( input+start, i-start ) ) );
betweenWords = true;
}
}
if (!betweenWords)
token.push_back( StoreIfNew ( string( input+start, i-start ) ) );
return token;
}
WORD_ID Vocabulary::StoreIfNew( const WORD& word )
{
map<WORD, WORD_ID>::iterator i = lookup.find( word );
if( i != lookup.end() )
return i->second;
WORD_ID id = vocab.size();
vocab.push_back( word );
lookup[ word ] = id;
return id;
}
WORD_ID Vocabulary::GetWordID( const WORD &word ) const
{
map<WORD, WORD_ID>::const_iterator i = lookup.find( word );
if( i == lookup.end() )
return 0;
WORD_ID w= (WORD_ID) i->second;
return w;
}
void Vocabulary::Save(const string& fileName ) const
{
ofstream vcbFile;
vcbFile.open( fileName.c_str(), ios::out | ios::ate | ios::trunc);
if (!vcbFile) {
cerr << "Failed to open " << fileName << endl;
exit(1);
}
vector< WORD >::const_iterator i;
for(i = vocab.begin(); i != vocab.end(); i++) {
const string &word = *i;
vcbFile << word << endl;
}
vcbFile.close();
}
void Vocabulary::Load(const string& fileName )
{
ifstream vcbFile;
char line[MAX_LENGTH];
vcbFile.open(fileName.c_str());
if (!vcbFile) {
cerr << "no such file or directory: " << fileName << endl;
exit(1);
}
cerr << "loading from " << fileName << endl;
istream *fileP = &vcbFile;
int count = 0;
while(!fileP->eof()) {
SAFE_GETLINE((*fileP), line, MAX_LENGTH, '\n');
if (fileP->eof()) break;
int length = 0;
for(; line[length] != '\0'; length++);
StoreIfNew( string( line, length ) );
count++;
}
vcbFile.close();
cerr << count << " words read, vocabulary size " << vocab.size() << endl;
}

39
biconcor/Vocabulary.h Normal file
View File

@ -0,0 +1,39 @@
// $Id: tables-core.h 1470 2007-10-02 21:43:54Z redpony $
#pragma once
#include <iostream>
#include <cstdlib>
#include <string>
#include <map>
#include <vector>
#define SAFE_GETLINE(_IS, _LINE, _SIZE, _DELIM) { \
_IS.getline(_LINE, _SIZE, _DELIM); \
if(_IS.fail() && !_IS.bad() && !_IS.eof()) _IS.clear(); \
if (_IS.gcount() == _SIZE-1) { \
std::cerr << "Line too long! Buffer overflow. Delete lines >=" \
<< _SIZE << " chars or raise the buffer size passed to SAFE_GETLINE" \
<< std::endl; \
std::exit(1); \
} \
}
typedef std::string WORD;
typedef unsigned int WORD_ID;
class Vocabulary
{
public:
std::map<WORD, WORD_ID> lookup;
std::vector< WORD > vocab;
WORD_ID StoreIfNew( const WORD& );
WORD_ID GetWordID( const WORD& ) const;
std::vector<WORD_ID> Tokenize( const char[] );
inline WORD &GetWord( WORD_ID id ) const {
WORD &i = (WORD&) vocab[ id ];
return i;
}
void Save(const std::string& fileName ) const;
void Load(const std::string& fileName );
};

126
biconcor/base64.cpp Normal file
View File

@ -0,0 +1,126 @@
/*
base64.cpp and base64.h
Copyright (C) 2004-2008 René Nyffenegger
This source code is provided 'as-is', without any express or implied
warranty. In no event will the author be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this source code must not be misrepresented; you must not
claim that you wrote the original source code. If you use this source code
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original source code.
3. This notice may not be removed or altered from any source distribution.
René Nyffenegger rene.nyffenegger@adp-gmbh.ch
*/
#include "base64.h"
#include <iostream>
static const std::string base64_chars =
"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
"abcdefghijklmnopqrstuvwxyz"
"0123456789+/";
static inline bool is_base64(unsigned char c)
{
return (isalnum(c) || (c == '+') || (c == '/'));
}
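// Standard base64: each group of 3 input bytes is encoded as 4 output
// symbols, and a final partial group is padded with '='.
// For example, base64_encode((const unsigned char*)"Man", 3) == "TWFu".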
std::string base64_encode(unsigned char const* bytes_to_encode, unsigned int in_len)
{
std::string ret;
int i = 0;
int j = 0;
unsigned char char_array_3[3];
unsigned char char_array_4[4];
while (in_len--) {
char_array_3[i++] = *(bytes_to_encode++);
if (i == 3) {
char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
char_array_4[3] = char_array_3[2] & 0x3f;
for(i = 0; (i <4) ; i++)
ret += base64_chars[char_array_4[i]];
i = 0;
}
}
if (i) {
for(j = i; j < 3; j++)
char_array_3[j] = '\0';
char_array_4[0] = (char_array_3[0] & 0xfc) >> 2;
char_array_4[1] = ((char_array_3[0] & 0x03) << 4) + ((char_array_3[1] & 0xf0) >> 4);
char_array_4[2] = ((char_array_3[1] & 0x0f) << 2) + ((char_array_3[2] & 0xc0) >> 6);
char_array_4[3] = char_array_3[2] & 0x3f;
for (j = 0; (j < i + 1); j++)
ret += base64_chars[char_array_4[j]];
while((i++ < 3))
ret += '=';
}
return ret;
}
std::string base64_decode(std::string const& encoded_string)
{
int in_len = encoded_string.size();
int i = 0;
int j = 0;
int in_ = 0;
unsigned char char_array_4[4], char_array_3[3];
std::string ret;
while (in_len-- && ( encoded_string[in_] != '=') && is_base64(encoded_string[in_])) {
char_array_4[i++] = encoded_string[in_];
in_++;
if (i ==4) {
for (i = 0; i <4; i++)
char_array_4[i] = base64_chars.find(char_array_4[i]);
char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
for (i = 0; (i < 3); i++)
ret += char_array_3[i];
i = 0;
}
}
if (i) {
for (j = i; j <4; j++)
char_array_4[j] = 0;
for (j = 0; j <4; j++)
char_array_4[j] = base64_chars.find(char_array_4[j]);
char_array_3[0] = (char_array_4[0] << 2) + ((char_array_4[1] & 0x30) >> 4);
char_array_3[1] = ((char_array_4[1] & 0xf) << 4) + ((char_array_4[2] & 0x3c) >> 2);
char_array_3[2] = ((char_array_4[2] & 0x3) << 6) + char_array_4[3];
for (j = 0; (j < i - 1); j++) ret += char_array_3[j];
}
return ret;
}
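The alphabet above is the standard RFC 4648 one, so the encoder can be cross-checked against Python's base64 module; a quick sanity sketch (the C++ pair remains the authoritative implementation here):
import base64

data = b"any carnal pleasure."
encoded = base64.b64encode(data)          # same 64-char alphabet, '=' padding
assert base64.b64decode(encoded) == data  # round trip recovers the input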

6
biconcor/base64.h Normal file
View File

@ -0,0 +1,6 @@
#pragma once
#include <string>
std::string base64_encode(unsigned char const* , unsigned int len);
std::string base64_decode(std::string const& s);

171
biconcor/biconcor.cpp Normal file
View File

@ -0,0 +1,171 @@
#include "SuffixArray.h"
#include "TargetCorpus.h"
#include "Alignment.h"
#include "PhrasePairCollection.h"
#include <getopt.h>
#include "base64.h"
using namespace std;
int main(int argc, char* argv[])
{
// handle parameters
string query;
string fileNameSuffix;
string fileNameSource;
string fileNameTarget = "";
string fileNameAlignment = "";
int loadFlag = false;
int saveFlag = false;
int createFlag = false;
int queryFlag = false;
int htmlFlag = false; // output as HTML
int prettyFlag = false; // output readable on screen
int stdioFlag = false; // receive requests from STDIN, respond to STDOUT
int max_translation = 20;
int max_example = 50;
string info = "usage: biconcor\n\t[--load model-file]\n\t[--save model-file]\n\t[--create source-corpus]\n\t[--query string]\n\t[--target target-corpus]\n\t[--alignment file]\n\t[--translations count]\n\t[--examples count]\n\t[--html]\n\t[--stdio]\n";
while(1) {
static struct option long_options[] = {
{"load", required_argument, 0, 'l'},
{"save", required_argument, 0, 's'},
{"create", required_argument, 0, 'c'},
{"query", required_argument, 0, 'q'},
{"target", required_argument, 0, 't'},
{"alignment", required_argument, 0, 'a'},
{"html", no_argument, 0, 'h'},
{"pretty", no_argument, 0, 'p'},
{"stdio", no_argument, 0, 'i'},
{"translations", required_argument, 0, 'o'},
{"examples", required_argument, 0, 'e'},
{0, 0, 0, 0}
};
int option_index = 0;
int c = getopt_long (argc, argv, "l:s:c:q:Q:t:a:hpio:e:", long_options, &option_index);
if (c == -1) break;
switch (c) {
case 'l':
fileNameSuffix = string(optarg);
loadFlag = true;
break;
case 't':
fileNameTarget = string(optarg);
break;
case 'a':
fileNameAlignment = string(optarg);
break;
case 's':
fileNameSuffix = string(optarg);
saveFlag = true;
break;
case 'c':
fileNameSource = string(optarg);
createFlag = true;
break;
case 'Q':
query = base64_decode(string(optarg));
queryFlag = true;
break;
case 'q':
query = string(optarg);
queryFlag = true;
break;
case 'o':
max_translation = atoi(optarg);
break;
case 'e':
max_example = atoi(optarg);
break;
case 'p':
prettyFlag = true;
break;
case 'h':
htmlFlag = true;
break;
case 'i':
stdioFlag = true;
break;
default:
cerr << info;
exit(1);
}
}
if (stdioFlag) {
queryFlag = true;
}
// check if parameter settings are legal
if (saveFlag && !createFlag) {
cerr << "error: cannot save without creating\n" << info;
exit(1);
}
if (saveFlag && loadFlag) {
cerr << "error: cannot load and save at the same time\n" << info;
exit(1);
}
if (!loadFlag && !createFlag) {
cerr << "error: neither load or create - i have no info!\n" << info;
exit(1);
}
if (createFlag && (fileNameTarget == "" || fileNameAlignment == "")) {
cerr << "error: i have no target corpus or alignment\n" << info;
exit(1);
}
// do your thing
SuffixArray suffixArray;
TargetCorpus targetCorpus;
Alignment alignment;
if (createFlag) {
cerr << "will create\n";
cerr << "source corpus is in " << fileNameSource << endl;
suffixArray.Create( fileNameSource );
cerr << "target corpus is in " << fileNameTarget << endl;
targetCorpus.Create( fileNameTarget );
cerr << "alignment is in " << fileNameAlignment << endl;
alignment.Create( fileNameAlignment );
if (saveFlag) {
suffixArray.Save( fileNameSuffix );
targetCorpus.Save( fileNameSuffix );
alignment.Save( fileNameSuffix );
cerr << "will save in " << fileNameSuffix << endl;
}
}
if (loadFlag) {
cerr << "will load from " << fileNameSuffix << endl;
suffixArray.Load( fileNameSuffix );
targetCorpus.Load( fileNameSuffix );
alignment.Load( fileNameSuffix );
}
if (stdioFlag) {
cout << "-|||- BICONCOR START -|||-" << endl << flush;
while(true) {
string query;
if (getline(cin, query, '\n').eof()) {
return 0;
}
vector< string > queryString = alignment.Tokenize( query.c_str() );
PhrasePairCollection ppCollection( &suffixArray, &targetCorpus, &alignment, max_translation, max_example );
int total = ppCollection.GetCollection( queryString );
cout << "TOTAL: " << total << endl;
if (htmlFlag) {
ppCollection.PrintHTML();
} else {
ppCollection.Print(prettyFlag);
}
cout << "-|||- BICONCOR END -|||-" << endl << flush;
}
} else if (queryFlag) {
cerr << "query is " << query << endl;
vector< string > queryString = alignment.Tokenize( query.c_str() );
PhrasePairCollection ppCollection( &suffixArray, &targetCorpus, &alignment, max_translation, max_example );
ppCollection.GetCollection( queryString );
if (htmlFlag) {
ppCollection.PrintHTML();
} else {
ppCollection.Print(prettyFlag);
}
}
return 0;
}
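Since --stdio brackets every response between the START and END marker lines printed above, a client can stream queries over a pipe. A hypothetical Python 2-style driver (the binary and model paths are placeholders, not part of this commit):
import subprocess

proc = subprocess.Popen(["./biconcor", "--load", "model/biconcor", "--stdio"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
assert proc.stdout.readline().strip() == "-|||- BICONCOR START -|||-"
proc.stdin.write("the house\n")  # one query per line
proc.stdin.flush()
for line in iter(proc.stdout.readline, ""):
    if line.strip() == "-|||- BICONCOR END -|||-":
        break  # end of this query's response block
    print(line.rstrip())  # TOTAL line followed by the phrase pairs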

134
biconcor/phrase-lookup.cpp Normal file
View File

@ -0,0 +1,134 @@
#include "SuffixArray.h"
#include "../util/tokenize.hh"
#include <getopt.h>
using namespace std;
size_t lookup( string );
vector<string> tokenize( const char input[] );
SuffixArray suffixArray;
int main(int argc, char* argv[])
{
// handle parameters
string query;
string fileNameSuffix;
string fileNameSource;
bool loadFlag = false;
bool saveFlag = false;
bool createFlag = false;
bool queryFlag = false;
bool querySentenceFlag = false;
bool stdioFlag = false; // receive requests from STDIN, respond to STDOUT
string info = "usage: biconcor\n\t[--load model-file]\n\t[--save model-file]\n\t[--create corpus]\n\t[--query string]\n\t[--stdio]\n";
while(1) {
static struct option long_options[] = {
{"load", required_argument, 0, 'l'},
{"save", required_argument, 0, 's'},
{"create", required_argument, 0, 'c'},
{"query", required_argument, 0, 'q'},
{"query-sentence", required_argument, 0, 'Q'},
{"document", required_argument, 0, 'd'},
{"stdio", no_argument, 0, 'i'},
{"stdio-sentence", no_argument, 0, 'I'},
{0, 0, 0, 0}
};
int option_index = 0;
int c = getopt_long (argc, argv, "l:s:c:q:Q:iId", long_options, &option_index);
if (c == -1) break;
switch (c) {
case 'l':
fileNameSuffix = string(optarg);
loadFlag = true;
break;
case 's':
fileNameSuffix = string(optarg);
saveFlag = true;
break;
case 'c':
fileNameSource = string(optarg);
createFlag = true;
break;
case 'q':
query = string(optarg);
queryFlag = true;
break;
case 'Q':
query = string(optarg);
querySentenceFlag = true;
break;
case 'i':
stdioFlag = true;
break;
case 'I':
stdioFlag = true;
querySentenceFlag = true;
break;
case 'd':
suffixArray.UseDocument();
break;
default:
cerr << info;
exit(1);
}
}
// check if parameter settings are legal
if (saveFlag && !createFlag) {
cerr << "error: cannot save without creating\n" << info;
exit(1);
}
if (saveFlag && loadFlag) {
cerr << "error: cannot load and save at the same time\n" << info;
exit(1);
}
if (!loadFlag && !createFlag) {
cerr << "error: neither load or create - i have no info!\n" << info;
exit(1);
}
// get suffix array
if (createFlag) {
cerr << "will create\n";
cerr << "corpus is in " << fileNameSource << endl;
suffixArray.Create( fileNameSource );
if (saveFlag) {
suffixArray.Save( fileNameSuffix );
cerr << "will save in " << fileNameSuffix << endl;
}
}
if (loadFlag) {
cerr << "will load from " << fileNameSuffix << endl;
suffixArray.Load( fileNameSuffix );
}
// do something with it
if (stdioFlag) {
while(true) {
string query;
if (getline(cin, query, '\n').eof()) {
return 0;
}
if (querySentenceFlag) {
vector< string > queryString = util::tokenize( query.c_str() );
suffixArray.PrintSentenceMatches( queryString );
} else {
cout << lookup( query ) << endl;
}
}
} else if (queryFlag) {
cout << lookup( query ) << endl;
} else if (querySentenceFlag) {
vector< string > queryString = util::tokenize( query.c_str() );
suffixArray.PrintSentenceMatches( queryString );
}
return 0;
}
size_t lookup( string query )
{
cerr << "query is " << query << endl;
vector< string > queryString = util::tokenize( query.c_str() );
return suffixArray.Count( queryString );
}
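What lookup() computes is the number of corpus positions at which the query token sequence occurs; the suffix array answers this with binary search rather than a scan. A naive Python rendering of the same contract (illustrative only):
def count_occurrences(corpus_tokens, query_tokens):
    # linear scan; SuffixArray::Count does this via a sorted suffix index
    n, m = len(corpus_tokens), len(query_tokens)
    return sum(1 for i in range(n - m + 1)
               if corpus_tokens[i:i + m] == query_tokens)

corpus = "the house is small the house is big".split()
assert count_occurrences(corpus, "the house".split()) == 2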

23
bjam Executable file
View File

@ -0,0 +1,23 @@
#!/bin/bash
set -e
top="$(dirname "$0")"
if
bjam="$(which bjam 2>/dev/null)" && #exists
[ ${#bjam} != 0 ] && #paranoia about which printing nothing then returning true
! grep UFIHGUFIHBDJKNCFZXAEVA "${bjam}" </dev/null >/dev/null && #bjam in path isn't this script
"${bjam}" --sanity-test 2>/dev/null |grep Sane >/dev/null && #The test in jam-files/sanity.jam passes
(cd "${top}/jam-files/fail" && ! "${bjam}") >/dev/null #Returns non-zero on failure
then
#Delegate to system bjam
exec "${bjam}" "$@"
fi
if [ ! -x "$top"/jam-files/bjam ] || "$top"/jam-files/bjam -v |grep 2011.4 >/dev/null; then
pushd "$top/jam-files/engine"
./build.sh
cp -f bin.*/bjam ../bjam
popd
fi
export BOOST_BUILD_PATH="$top"/jam-files/boost-build
exec "$top"/jam-files/bjam "$@"

8
compile.sh Executable file
View File

@ -0,0 +1,8 @@
#!/bin/bash
# if not supplied otherwise, this script assumes that all 3rd-party dependencies are installed under ./opt
# you can install all 3rd-party dependencies by running make -f contrib/Makefiles/install-dependencies.gmake
set -e -o pipefail
OPT=${OPT:-$(pwd)/opt}
./bjam --with-irstlm=$OPT/irstlm-5.80.08 --with-boost=$OPT --with-cmph=$OPT --with-xmlrpc-c=$OPT --with-mm --with-probing-pt -j$(getconf _NPROCESSORS_ONLN) "$@"

290
contrib/DIMwid/DIMputs.py Normal file
View File

@ -0,0 +1,290 @@
# -*- coding: utf-8 -*-
import collections
import re
class DataInput():
def __init__(self, file_name):
self.file = open(file_name, "r")
self.sentences = None
def read_phrase(self):
self.sentences = []
sentence = None
span_reg = re.compile("\|[0-9]+-[0-9]+\|")
previous = ""
for line in self.file:
sentence = Single()
for word in line.split():
if span_reg.match(word):
sentence.spans[tuple([int(i) for i in word.strip("|").split("-")])] = previous.strip()
previous = " "
else:
previous += word + " "
sentence.set_length()
self.sentences.append(sentence)
sentence.number = len(self.sentences)
def read_syntax(self):
self.sentences = []
sentence = None
number = -1
for line in self.file:
if int(line.split()[2]) != number:
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
sentence = Single()
sentence.number = int(line.split()[2])
number = sentence.number
sentence.spans[tuple([int(i) for i in line.split()[3].strip(":[]").split("..")])] \
= line.strip()
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
# = tuple([line.split(":")[1], line.split(":")[2], line.split(":")[3]])
def read_syntax_cubes(self, cell_limit):
self.sentences = []
sentence = None
number = -1
new_item = False
for line in self.file:
if line.startswith("Chart Cell"):
pass # we don't care about these lines
elif line.startswith("---------"):
new_item = True
elif line.startswith("Trans Opt") and new_item is True:
new_item = False
if int(line.split()[2]) != number:
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
sentence = Multiple()
sentence.number = int(line.split()[2])
number = sentence.number
span = tuple([int(i) for i in line.split()[3].strip(":[]").split("..")])
if len(sentence.spans[span]) < cell_limit:
sentence.spans[span].append(line.strip())
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
def read_phrase_stack_flag(self, cell_limit):
self.sentences = []
sentence = None
number = -1
for line in self.file:
if len(line.split()) < 6:
pass
# elif re.match("recombined=[0-9]+", line.split()[6]):
# pass
else:
if int(line.split()[0]) != number:
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
sentence = Multiple()
sentence.number = int(line.split()[0])
number = sentence.number
# span = tuple([int(i) for i in line.split()[8].split("=")[1].split("-")])
span = re.search(r"covered=([0-9]+\-[0-9]+)", line).expand("\g<1>")
# print span.expand("\g<1>")
span = tuple([int(i) for i in span.split("-")])
if len(sentence.spans[span]) < cell_limit:
sentence.spans[span].append(line.strip())
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
def read_phrase_stack_verbose(self, cell_limit):
self.sentences = []
sentence = None
number = -1
span_input = False
for line in self.file:
if line.startswith("Translating: "):
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
number += 1
sentence = Multiple()
sentence.number = number
else:
if re.match("\[[A-Z,a-z,\ ]+;\ [0-9]+-[0-9]+\]", line):
span = tuple([int(i) for i in line.split(";")[1].strip().strip("]").split("-")])
sentence.spans[span].append(line.strip())
span_input = True
# print line,
elif span_input is True:
if line.strip() == "":
span_input = False
# print "X"
else:
if len(sentence.spans[span]) < cell_limit:
sentence.spans[span].append(line.strip())
# print line,
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
def read_syntax_cube_flag(self, cell_limit):
self.sentences = []
sentence = None
number = -1
for line in self.file:
if len(line.split()) < 6:
pass
else:
if int(line.split()[0]) != number:
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
sentence = Multiple()
sentence.number = int(line.split()[0])
number = sentence.number
span = re.search(r"\[([0-9]+)\.\.([0-9]+)\]", line).expand("\g<1> \g<2>")
span = tuple([int(i) for i in span.split()])
if len(sentence.spans[span]) < cell_limit:
sentence.spans[span].append(line.strip())
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
def read_mbot(self, cell_limit):
self.sentences = []
sentence = None
number = -1
hypo = False
rule = False
popping = False
target = ""
source = ""
source_parent = ""
target_parent = ""
alignment = ""
for line in self.file:
if line.startswith("Translating:"):
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
sentence = Multiple()
sentence.number = number + 1
number = sentence.number
elif line.startswith("POPPING"):
popping = True
elif popping is True:
popping = False
span = tuple([int(i) for i in line.split()[1].strip("[").split("]")[0].split("..")])
hypo = True
elif hypo is True:
if line.startswith("Target Phrases"):
target = line.split(":", 1)[1].strip()
elif line.startswith("Alignment Info"):
alignment = line.split(":", 1)[1].strip()
if alignment == "":
alignment = "(1)"
elif line.startswith("Source Phrase"):
source = line.split(":", 1)[1].strip()
elif line.startswith("Source Left-hand-side"):
source_parent = line.split(":", 1)[1].strip()
elif line.startswith("Target Left-hand-side"):
target_parent = line.split(":", 1)[1].strip()
# Input stored: now begin translation into rule-format
alignment = re.sub(r"\([0-9]+\)", "||", alignment)
align_blocks = alignment.split("||")[:-1]
target = re.sub(r"\([0-9]+\)", "||", target)
target = [x.split() for x in target.split("||")][:-1]
source = source.split()
for i in range(len(source)):
if source[i].isupper():
source[i] = "[" + source[i] + "]"
for k in range(len(align_blocks)):
align_pairs = [tuple([int(y) for y in x.split("-")]) for x in align_blocks[k].split()]
for j in filter(lambda x: x[0] == i, align_pairs):
source[i] = source[i] + "[" + target[k][j[1]] + "]"
for i in range(len(target)):
for j in range(len(target[i])):
align_pairs = [tuple([int(y) for y in x.split("-")]) for x in align_blocks[i].split()]
for k in filter(lambda x: x[1] == j, align_pairs):
target[i][j] = source[k[0]].split("]")[0] + "][" + target[i][j] + "]"
target = " || ".join([" ".join(x) for x in target]) + " ||"
source = " ".join(source)
source = source + " [" + source_parent + "]"
tp = re.sub(r"\([0-9]+\)", "", target_parent).split()
for i in tp:
target = target.replace("||", " [" + i + "] !!", 1)
target = target.replace("!!", "||")
rule = False
search_pattern = "||| " + source + " ||| " + target + "| --- ||| " + alignment + "|"
# print search_pattern, span
if len(sentence.spans[span]) < cell_limit:
sentence.spans[span].append(search_pattern)
else:
pass
if sentence is not None:
sentence.set_length()
self.sentences.append(sentence)
class Single():
def __init__(self):
self.number = None
self.spans = {}
self.length = None
def set_length(self):
self.length = max([x[1] for x in self.spans.keys()])
def __str__(self):
number = str(self.number)
length = str(self.length)
spans = "\n"
for i in self.spans.keys():
spans += str(i) + " - " + str(self.spans[i]) + "\n"
return str((number, length, spans))
class Multiple():
def __init__(self):
self.number = None
self.spans = collections.defaultdict(list)
self.length = None
def set_length(self):
self.length = max([x[1] for x in self.spans.keys()])
def __str__(self):
number = str(self.number)
length = str(self.length)
spans = "\n"
for i in self.spans.keys():
spans += str(i) + " - " + str(self.spans[i]) + "\n"
return str((number, length, spans))
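read_phrase above depends on the shape of Moses' -t trace, where each output phrase is followed by its source span, e.g. "the house |0-1|". A stripped-down sketch of that parse, using the same pattern (standalone, for illustration):
import re

span_reg = re.compile(r"\|[0-9]+-[0-9]+\|")
line = "the house |0-1| is small |2-3|"
spans, previous = {}, ""
for word in line.split():
    if span_reg.match(word):
        start, end = [int(i) for i in word.strip("|").split("-")]
        spans[(start, end)] = previous.strip()
        previous = ""
    else:
        previous += word + " "
print(spans)  # {(0, 1): 'the house', (2, 3): 'is small'}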

381
contrib/DIMwid/DIMterface.py Normal file
View File

@ -0,0 +1,381 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from PyQt4 import QtCore, QtGui
import DIMputs as my_DI
class MainWindow(QtGui.QWidget):
updateSignal = QtCore.pyqtSignal()
def __init__(self, parent=None):
self.path = ""
self.cur_rein_num = 0
self.data = None
self.format = ""
self.cell_limit = float("inf")
super(MainWindow, self).__init__(parent)
# upper buttons
pathLabel = QtGui.QLabel("Path:")
self.pathLabel = QtGui.QLabel(self.path)
self.pathLabel.setFrameStyle(QtGui.QFrame.StyledPanel |
QtGui.QFrame.Sunken)
self.pathLabel.setToolTip("Current File")
self.pathButton = QtGui.QPushButton("P&ath...")
self.pathButton.setToolTip("Set the item you want to inspect")
self.connect(self.pathButton, QtCore.SIGNAL("clicked()"), self.setPath)
# cell limit label and text field
cell_limit_label = QtGui.QLabel("Cell Limit:")
self.cell_limit_chooser = QtGui.QSpinBox()
self.cell_limit_chooser.setMaximum(99999)
cell_limit_label.setToolTip("Limits the number of elements per cell")
self.cell_limit_chooser.setToolTip("Set to zero to show all elements")
# format drop down menu
self.format_drop = QtGui.QToolButton(self)
self.format_drop.setPopupMode(QtGui.QToolButton.MenuButtonPopup)
self.format_drop.setMenu(QtGui.QMenu(self.format_drop))
self.format_drop.setText("Format")
self.format_syntax = QtGui.QPushButton("Syntax")
self.format_phrase = QtGui.QPushButton("Phrase")
self.format_syntaxCube = QtGui.QPushButton("Syntax Cube (-Tall flag)")
self.format_phraseStackFlag = QtGui.QPushButton("Phrase Stack (search-graph)")
self.format_phraseStackVerbose = QtGui.QPushButton("Phrase Stack (verbose)")
self.format_syntaxCubeFlag = QtGui.QPushButton("Syntax Cube (search-graph)")
self.format_mbot = QtGui.QPushButton("MBOT")
format_action_syntax = QtGui.QWidgetAction(self.format_drop)
format_action_syntax.setDefaultWidget(self.format_syntax)
format_action_phrase = QtGui.QWidgetAction(self.format_drop)
format_action_phrase.setDefaultWidget(self.format_phrase)
format_action_syntaxCube = QtGui.QWidgetAction(self.format_drop)
format_action_syntaxCube.setDefaultWidget(self.format_syntaxCube)
format_action_phraseStackFlag = QtGui.QWidgetAction(self.format_drop)
format_action_phraseStackFlag.setDefaultWidget(self.format_phraseStackFlag)
format_action_phraseStackVerbose = QtGui.QWidgetAction(self.format_drop)
format_action_phraseStackVerbose.setDefaultWidget(self.format_phraseStackVerbose)
format_action_syntaxCubeFlag = QtGui.QWidgetAction(self.format_drop)
format_action_syntaxCubeFlag.setDefaultWidget(self.format_syntaxCubeFlag)
format_action_mbot = QtGui.QWidgetAction(self.format_drop)
format_action_mbot.setDefaultWidget(self.format_mbot)
self.format_drop.menu().addAction(format_action_syntax)
self.format_drop.menu().addAction(format_action_phrase)
self.format_drop.menu().addAction(format_action_syntaxCube)
self.format_drop.menu().addAction(format_action_phraseStackFlag)
self.format_drop.menu().addAction(format_action_phraseStackVerbose)
self.format_drop.menu().addAction(format_action_syntaxCubeFlag)
self.format_drop.menu().addAction(format_action_mbot)
self.format_syntax.clicked.connect(self.set_format_syntax)
self.format_phrase.clicked.connect(self.set_format_phrase)
self.format_syntaxCube.clicked.connect(self.set_format_syntaxCube)
self.format_phraseStackFlag.clicked.connect(self.set_format_phraseStackFlag)
self.format_phraseStackVerbose.clicked.connect(self.set_format_phraseStackVerbose)
self.format_syntaxCubeFlag.clicked.connect(self.set_format_syntaxCubeFlag)
self.format_mbot.clicked.connect(self.set_format_mbot)
# table
self.table_widget = HoverTable(self)
self.w = [] # future popup window
# self.table_widget = QtGui.QTableWidget(self)
# lower buttons
self.buttonBox = QtGui.QDialogButtonBox()
self.sentence_spinbox = QtGui.QSpinBox(parent=self.buttonBox)
self.sentence_spinbox.setMaximum(999999)
self.goto_button = self.buttonBox.addButton(
"&GoTo", QtGui.QDialogButtonBox.ActionRole)
self.next_button = self.buttonBox.addButton(
"&Next", QtGui.QDialogButtonBox.ActionRole)
self.prev_button = self.buttonBox.addButton(
"&Prev", QtGui.QDialogButtonBox.ActionRole)
self.next_button.clicked.connect(self.next_parse)
self.prev_button.clicked.connect(self.prev_parse)
self.goto_button.clicked.connect(self.cur_parse)
self.quit_button = self.buttonBox.addButton(
"&Quit", QtGui.QDialogButtonBox.ActionRole)
self.quit_button.clicked.connect(
QtCore.QCoreApplication.instance().quit)
# Disable navigation buttons until data is loaded: see setPath for reactivation
self.goto_button.setDisabled(True)
self.next_button.setDisabled(True)
self.prev_button.setDisabled(True)
# Layouting
layout = QtGui.QVBoxLayout()
topLayout = QtGui.QHBoxLayout()
topLayout.addWidget(self.format_drop)
topLayout.addWidget(cell_limit_label)
topLayout.addWidget(self.cell_limit_chooser)
self.cell_limit_chooser.valueChanged.connect(self.setCellLimit)
topLayout.addWidget(pathLabel)
topLayout.addWidget(self.pathLabel, 1)
topLayout.addWidget(self.pathButton)
bottomLayout = QtGui.QHBoxLayout()
bottomLayout.addWidget(self.buttonBox)
layout.addLayout(topLayout)
layout.addWidget(self.table_widget)
layout.addLayout(bottomLayout)
self.sentence_spinbox.valueChanged.connect(self.set_cur_rein_num)
self.setLayout(layout)
self.updateSignal.connect(self.update_table)
QtCore.QObject.connect(
self.table_widget,
QtCore.SIGNAL("cellDoubleClicked(int, int)"),
self.popup)
def closeEvent(self, *args, **kwargs):
# reimplementation of the close-event for closing down everything
# when the main window is closed
QtCore.QCoreApplication.quit()
return QtGui.QWidget.closeEvent(self, *args, **kwargs)
def setCellLimit(self, value):
if value == 0:
value = float("inf")
self.cell_limit = value
def setPath(self):
path = QtGui.QFileDialog.getOpenFileName(self,
"Select File", self.pathLabel.text())
if path:
self.goto_button.setDisabled(False)
self.prev_button.setDisabled(False)
self.next_button.setDisabled(False)
self.pathLabel.setText(QtCore.QDir.toNativeSeparators(path))
self.path = unicode(path)
self.data = my_DI.DataInput(self.path)
try:
if self.format == "syntax":
self.data.read_syntax()
elif self.format == "phrase":
self.data.read_phrase()
elif self.format == "syntaxCube":
self.data.read_syntax_cubes(self.cell_limit)
elif self.format == "phraseStackFlag":
self.data.read_phrase_stack_flag(self.cell_limit)
elif self.format == "phraseStackVerbose":
self.data.read_phrase_stack_verbose(self.cell_limit)
elif self.format == "syntaxCubeFlag":
self.data.read_syntax_cube_flag(self.cell_limit)
elif self.format == "mbot":
self.data.read_mbot(self.cell_limit)
self.populate(0)
self.sentence_spinbox.setValue(0)
except (ValueError, IndexError) as exc:
self.error_dialog = QtGui.QDialog()
self.error_dialog.setModal(True)
layout = QtGui.QVBoxLayout()
text = QtGui.QLabel(
"""Something went wrong when choosing your input format/file
\n""")
button = QtGui.QPushButton("Ok")
button.clicked.connect(self.error_dialog.close)
layout.addWidget(text)
layout.addWidget(button)
self.error_dialog.setLayout(layout)
self.error_dialog.show()
def next_parse(self):
self.cur_rein_num += 1
if self.cur_rein_num < 0:
self.cur_rein_num = len(self.data.sentences) + self.cur_rein_num
if self.cur_rein_num >= len(self.data.sentences):
self.cur_rein_num = 0
self.sentence_spinbox.setValue(self.cur_rein_num)
self.populate(self.cur_rein_num)
def prev_parse(self):
self.cur_rein_num -= 1
if self.cur_rein_num < 0:
self.cur_rein_num = len(self.data.sentences) + self.cur_rein_num
if self.cur_rein_num >= len(self.data.sentences):
self.cur_rein_num = 0
self.sentence_spinbox.setValue(self.cur_rein_num)
self.populate(self.cur_rein_num)
def cur_parse(self):
if self.cur_rein_num >= len(self.data.sentences):
self.cur_rein_num = 0
self.sentence_spinbox.setValue(self.cur_rein_num)
self.populate(self.cur_rein_num)
def set_cur_rein_num(self, value):
self.cur_rein_num = value # self.sentence_spinbox.value()
def populate(self, cur_rein_num):
cur_sent = self.data.sentences[cur_rein_num]
nrows, ncols = cur_sent.length + 1, cur_sent.length + 1
nrows, ncols = ncols, nrows # switcher
self.table_widget.setSortingEnabled(False)
self.table_widget.setRowCount(nrows)
self.table_widget.setColumnCount(ncols)
# for starting the numbering of the table at zero as the spans
self.table_widget.setHorizontalHeaderLabels([str(x) for x in range(ncols)])
self.table_widget.setVerticalHeaderLabels([str(x) for x in range(nrows)])
for i in range(nrows):
for j in range(ncols):
try:
# item = TableItem("%s:%s \n %s"
# % (i+1, j+1, cur_sent.spans[(i,j)]))
item = str(i) + ".." + str(j) + " \n"
if isinstance(cur_sent.spans[(i, j)], basestring):
item += cur_sent.spans[(i, j)] + "\n"
else:
for rule in cur_sent.spans[(i, j)]:
item += str(rule) + "\n"
if cur_sent.spans[(i, j)] == []:
if j - i < 0:
item = ""
else:
item = "-"
item = TableItem(item.decode("utf-8"))
except KeyError:
if j - i < 0:
item = QtGui.QTableWidgetItem("")
else:
item = QtGui.QTableWidgetItem("-")
self.table_widget.setItem(i, j, item)
self.table_widget.setColumnWidth(j, 40)
# self.connect(
# self.table_widget, QtCore.SIGNAL("itemDoubleClicked(QTableWidgetItem)"),
# self.popup)
self.updateSignal.emit()
self.table_widget.setSortingEnabled(True)
def update_table(self):
self.table_widget.sortItems(0, QtCore.Qt.DescendingOrder)
def set_format_syntax(self):
self.format = "syntax"
self.format_drop.setText("Syntax")
self.format_drop.menu().hide()
def set_format_phrase(self):
self.format = "phrase"
self.format_drop.setText("Phrase")
self.format_drop.menu().hide()
def set_format_syntaxCube(self):
self.format = "syntaxCube"
self.format_drop.setText("Syntax Cube (-Tall flag)")
self.format_drop.menu().hide()
def set_format_phraseStackFlag(self):
self.format = "phraseStackFlag"
self.format_drop.setText("Phrase Stack (search-graph)")
self.format_drop.menu().hide()
def set_format_phraseStackVerbose(self):
self.format = "phraseStackVerbose"
self.format_drop.setText("Phrase Stack (verbose)")
self.format_drop.menu().hide()
def set_format_syntaxCubeFlag(self):
self.format = "syntaxCubeFlag"
self.format_drop.setText("Syntax Cube (search-graph)")
self.format_drop.menu().hide()
def set_format_mbot(self):
self.format = "mbot"
self.format_drop.setText("MBOT")
self.format_drop.menu().hide()
# @QtCore.pyqtSlot(QtGui.QTableWidgetItem, result=QtCore.QObject)
# def popup(self, item):
# @pyqtSlot(int, int, result=QtCore.QObject)
# @pyqtSignature("popup(int int)")
def popup(self, r, c):
# """ C++: QObject popup(int, int) """
# self.w = PopUpCell(item.text)
self.w.append(PopUpCell(self.table_widget.item(r, c).text()))
# self.w.setGeometry(QRect(100, 100, 400, 200))
self.w[-1].show()
class HoverTable(QtGui.QTableWidget):
def __init__(self, parent=None):
super(HoverTable, self).__init__(parent)
self.setMouseTracking(True)
self.horizontalHeader().setClickable(False)
# self.verticalHeader().setDefaultSectionSize(self.verticalHeader.fontMetrics().height()+2);
class PopUpCell(QtGui.QWidget):
def __init__(self, cell_text):
QtGui.QWidget.__init__(self)
layout = QtGui.QHBoxLayout()
text_list = cell_text.split("\n")
wind_cont = QtGui.QTextEdit() # "<br/>".join(text_list[1:]))
wind_cont.setReadOnly(True)
wind_cont.setWindowTitle(text_list[0])
wind_cont.setPlainText(cell_text) # "\n".join(text_list))
layout.addWidget(wind_cont)
self.setWindowTitle(text_list[0])
self.setLayout(layout)
self.resize(960, 320)
class TableItem(QtGui.QTableWidgetItem):
def __init__(self, cell_text, type=1000):
super(TableItem, self).__init__(cell_text)
if len(cell_text.split("\n")) > 20:
self.setToolTip("\n".join(cell_text.split("\n")[:19]))
else:
self.setToolTip(cell_text)
self.cell_text = cell_text

16
contrib/DIMwid/DIMwid.py Normal file
View File

@ -0,0 +1,16 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
from PyQt4 import QtCore, QtGui
import DIMterface as my_gui
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
wnd = my_gui.MainWindow()
wnd.resize(640, 480)
wnd.setWindowTitle("DIMwid")
wnd.show()
sys.exit(app.exec_())

20
contrib/DIMwid/LICENSE Normal file
View File

@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2013 RobinQrtz
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

67
contrib/DIMwid/README.md Normal file
View File

@ -0,0 +1,67 @@
DIMwid
======
DIMwid (Decoder Inspection for Moses using widgets) is a tool
presenting Moses' different chart/stack outputs in a readable tabular
view.
Installation
============
In order to run DIMwid you need to install PyQt, Qt 4.8 and Python
2.7. Other versions have not yet been tested. Linux/Unix users can
simply install these packages using their package manager or build
them from source. Windows users can skip the installation of Qt,
since PyQt itself covers everything except Python.
Usage
=====
Users are advised to read the accompanying paper "DIMwid --
Decoder Inspection for Moses (using Widgets)" appearing in PBML XY.
DIMwid is able to read multiple decoder outputs of the Moses
translation system. These include the standard trace outputs for both
phrase- and syntax-based decoding, the search-graphs for both, the
"level 3 verbose" output for phrase-based and a special trace output
(available as a Moses fork at
https://github.com/RobinQrtz/mosesdecoder) for all possible
translations for syntax-based decoding.
After producing the outputs from Moses, start DIMwid by running
DIMwid.py; first select your format and then your file. If you have
chosen the wrong file or format, an error message will appear.
Otherwise you will see the first sentence. Cells can be inspected
either by double-clicking, which opens a new window with the full
content, or by hovering over the cell, which shows a tooltip with the
first 20 lines of the cell's content.
If needed, the user can restrict the number of rules per cell, using
the "Cell Limit" spinbox.
Navigating through the sentences of the input file can be done by
either using the "Next" and "Prev" buttons, or choosing a certain
sentence number using the lower left spinbox and clicking the "GoTo"
button.
Moses
=====
Information about Moses can be found here: http://statmt.org/moses/
The flags used to produce these outputs are (a usage sketch follows the list):
* -t for phrase-based trace
* -T for syntax-based trace
* -v 3 for phrase-based verbose level 3
* -output-search-graph for both search graphs
* -Tall for the Moses fork's new feature
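For example, a phrase-based trace suitable for the "Phrase" format can be produced roughly like this (a hedged sketch; the moses binary, moses.ini and file names are placeholders):
import subprocess

with open("input.tok.en") as fin, open("trace.out", "w") as fout:
    subprocess.call(["moses", "-f", "moses.ini", "-t"],
                    stdin=fin, stdout=fout)
# trace.out can then be opened in DIMwid with the "Phrase" format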
Trouble
=======
If you are running into trouble using DIMwid or have suggestions for
improvements or new features email me at
robin DOT qrtz AT gmail DOT com

101
contrib/Makefiles/install-dependencies.gmake Normal file
View File

@ -0,0 +1,101 @@
# -*- mode: makefile; tab-width: 4; -*-
# Makefile for installing 3rd-party software required to build Moses.
# author: Ulrich Germann
#
# run as
# make -f /path/to/this/file
#
# By default, everything will be installed in ./opt.
# If you want an alternative destination specify PREFIX=... with the make call
#
# make -f /path/to/this/file PREFIX=/where/to/install/things
#
# The name of the current directory must not contain spaces! The build scripts for
# at least some of the external software can't handle them.
space :=
space +=
# $(CWD) may contain spaces; safepath escapes them
# Update: doesn't work, because the build scripts for some of the external packages
# can't handle spaces in path names.
safepath=$(subst $(space),\$(space),$1)
# current working directory: bit of a hack to get the nfs-accessible
# path instead of the local real path
CWD := $(shell cd . && pwd)
# by default, we install in ./opt and build in ./build
PREFIX ?= $(CWD)/opt
BUILD_DIR = $(CWD)/opt/build/${URL}
# you can also specify specific prefixes for different packages:
XMLRPC_PREFIX ?= ${PREFIX}
CMPH_PREFIX ?= ${PREFIX}
IRSTLM_PREFIX ?= ${PREFIX}/irstlm-5.80.08
BOOST_PREFIX ?= ${PREFIX}
# currently, the full enchilada means xmlrpc-c, cmph, irstlm, boost
all: xmlrpc cmph irstlm boost
# we use bash and fail when pipelines fail
SHELL = /bin/bash -e -o pipefail
# evaluate prefixes now to avoid recursive evaluation problems later ...
XMLRPC_PREFIX := ${XMLRPC_PREFIX}
CMPH_PREFIX := ${CMPH_PREFIX}
IRSTLM_PREFIX := ${IRSTLM_PREFIX}
BOOST_PREFIX := ${BOOST_PREFIX}
# Code repositories:
github = https://github.com/
sourceforge = http://downloads.sourceforge.net/project
# functions for building software from sourceforge
nproc := $(shell getconf _NPROCESSORS_ONLN)
sfget = mkdir -p '${TMP}' && cd '${TMP}' && wget -qO- ${URL} | tar xz
configure-make-install = cd '$1' && ./configure --prefix='${PREFIX}'
configure-make-install += && make -j${nproc} && make install
# XMLRPC-C for moses server
xmlrpc: URL=$(sourceforge)/xmlrpc-c/Xmlrpc-c%20Super%20Stable/1.33.17/xmlrpc-c-1.33.17.tgz
xmlrpc: TMP=$(CWD)/build/xmlrpc
xmlrpc: override PREFIX=${XMLRPC_PREFIX}
xmlrpc: | $(call safepath,${XMLRPC_PREFIX}/bin/xmlrpc-c-config)
$(call safepath,${XMLRPC_PREFIX}/bin/xmlrpc-c-config):
$(sfget)
$(call configure-make-install,${TMP}/xmlrpc-c-1.33.17)
rm -rf ${TMP}
# CMPH for CompactPT
cmph: URL=$(sourceforge)/cmph/cmph/cmph-2.0.tar.gz
cmph: TMP=$(CWD)/build/cmph
cmph: override PREFIX=${CMPH_PREFIX}
cmph: | $(call safepath,${CMPH_PREFIX}/bin/cmph)
$(call safepath,${CMPH_PREFIX}/bin/cmph):
$(sfget)
$(call configure-make-install,${TMP}/cmph-2.0)
rm -rf ${TMP}
# IRSTLM language-modeling toolkit
irstlm: URL=$(sourceforge)/irstlm/irstlm/irstlm-5.80/irstlm-5.80.08.tgz
irstlm: TMP=$(CWD)/build/irstlm
irstlm: VERSION=$(basename $(notdir $(irstlm_url)))
irstlm: override PREFIX=${IRSTLM_PREFIX}
irstlm: | $(call safepath,$(IRSTLM_PREFIX)/bin/build-lm.sh)
$(call safepath,$(IRSTLM_PREFIX)/bin/build-lm.sh):
$(sfget)
cd $$(find '${TMP}' -name trunk) && ./regenerate-makefiles.sh \
&& ./configure --prefix='${PREFIX}' && make -j${nproc} && make install -j${nproc}
rm -rf ${TMP}
# boost
boost: URL=http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz/download
boost: TMP=$(CWD)/build/boost
boost: override PREFIX=${BOOST_PREFIX}
boost: | $(call safepath,${BOOST_PREFIX}/include/boost)
$(call safepath,${BOOST_PREFIX}/include/boost):
$(sfget)
cd '${TMP}/boost_1_59_0' && ./bootstrap.sh && ./b2 --prefix=${PREFIX} -j${nproc} install
rm -rf ${TMP}

View File

@ -0,0 +1,58 @@
Arrow Based Moses Training Pipeline
===================================
This demonstration implements a training pipeline that is shown in the Dia diagram in documentation/training-pipeline/moses-pypeline.dia.
The demo has been tested with:
- Moses v1.0
- Giza++ v1.0.7
- IRSTLM v5.70.04
Setup
-----
To use the demonstration you must first initialise the git submodules for this clone. Return to the top level directory and issue the following command:
$ git submodule update --init --recursive
This will clone the PCL submodule, available at GitHub (git://github.com/ianj-als/pcl.git), and the Pypeline submodule, available at GitHub (git://github.com/ianj-als/pypeline.git).
Return to the arrow-pipelines contrib directory:
$ cd contrib/arrow-pipelines
To use the PCL compiler and run-time, set the following environment variables (assuming a Bash shell):
$ export PATH=$PATH:`pwd`/python/pcl/src/pclc:`pwd`/python/pcl/src/pcl-run
$ export PYTHONPATH=$PYTHONPATH:`pwd`/python/pcl/libs/pypeline/src
$ export PCL_IMPORT_PATH=`pwd`/python/pcl/src/runtime:`pwd`/pcl
Three environment variables need to be set before the pipeline can be run; they are:
- MOSES_HOME : The directory where Moses has been cloned, or installed,
- IRSTLM : The installation directory of your IRSTLM, and
- GIZA_HOME : The installation directory of GIZA++.
Building the example training pipeline
--------------------------------------
$ cd pcl
$ make
Running the example training pipeline
-------------------------------------
To execute the training pipeline run the following command:
$ pcl-run.py training_pipeline
Once complete, the output of the pipeline can be found in the directories:
- training/tokenisation
- training/model
- training/lm
- training/mert

View File

@ -0,0 +1,226 @@
#!/bin/bash
MOSES_HOME=/opt/moses
GIZA_HOME=${MOSES_HOME}/giza++-v1.0.7
IRSTLM=${MOSES_HOME}/irstlm-5.70.04
function tokenise() {
local LANG="$1"
local FILENAME="$2"
local WORKING_DIR="$3"
local BASENAME="`basename ${FILENAME}`"
if [ ! -d ${WORKING_DIR} ]; then
mkdir -p ${WORKING_DIR}
fi
NEW_BASENAME=`echo ${BASENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "tok."; } } }'`
TOKENISED_FILENAME="${WORKING_DIR}/${NEW_BASENAME}"
${MOSES_HOME}/scripts/tokenizer/tokenizer.perl -q -l ${LANG} < ${FILENAME} > ${TOKENISED_FILENAME}
}
function cleanup() {
local SRC_FILENAME="$1"
local TGT_FILENAME="$2"
local SEGMENT_LENGTH="$3"
SRC_CLEANUP_FILENAME=`echo ${SRC_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "clean."; } } }'`
TGT_CLEANUP_FILENAME=`echo ${TGT_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "clean."; } } }'`
truncate -s 0 ${SRC_CLEANUP_FILENAME}
truncate -s 0 ${TGT_CLEANUP_FILENAME}
paste -d'\n' ${SRC_FILENAME} ${TGT_FILENAME} | while read SRC_LINE && read TGT_LINE;
do
declare -i SRC_NO_WORDS=`echo "${SRC_LINE}" | wc -w`
declare -i TGT_NO_WORDS=`echo "${TGT_LINE}" | wc -w`
if [ ${SRC_NO_WORDS} -lt 20 -a ${TGT_NO_WORDS} -lt 20 ]; then
echo "${SRC_LINE}" >> ${SRC_CLEANUP_FILENAME}
echo "${TGT_LINE}" >> ${TGT_CLEANUP_FILENAME}
fi
done
}
function data_split() {
local SRC_FILENAME="$1"
local TGT_FILENAME="$2"
declare -i DEV_SIZE="$3"
declare -i EVAL_SIZE="$4"
SRC_TRAIN_FILENAME=`echo ${SRC_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "train."; } } }'`
TGT_TRAIN_FILENAME=`echo ${TGT_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "train."; } } }'`
SRC_DEVEL_FILENAME=`echo ${SRC_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "devel."; } } }'`
TGT_DEVEL_FILENAME=`echo ${TGT_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "devel."; } } }'`
SRC_EVAL_FILENAME=`echo ${SRC_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "eval."; } } }'`
TGT_EVAL_FILENAME=`echo ${TGT_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) { printf a[i]; if (i<length(a)) { printf "."; } if (i==length(a)-1) { printf "eval."; } } }'`
local ALL_FILES=(${SRC_TRAIN_FILENAME} ${TGT_TRAIN_FILENAME} ${SRC_DEVEL_FILENAME} ${TGT_DEVEL_FILENAME} ${SRC_EVAL_FILENAME} ${TGT_EVAL_FILENAME})
for FN in ${ALL_FILES}
do
truncate -s 0 ${FN}
done
declare -i DEV_EVAL_SIZE=$(($DEV_SIZE + $EVAL_SIZE))
declare -i LINE_CNT=1
paste -d'\n' ${SRC_FILENAME} ${TGT_FILENAME} | while read SRC_LINE && read TGT_LINE;
do
if [ ${LINE_CNT} -le ${DEV_EVAL_SIZE} ]; then
if [ ${LINE_CNT} -le ${DEV_SIZE} ]; then
echo "${SRC_LINE}" >> ${SRC_DEVEL_FILENAME}
echo "${TGT_LINE}" >> ${TGT_DEVEL_FILENAME}
else
echo "${SRC_LINE}" >> ${SRC_EVAL_FILENAME}
echo "${TGT_LINE}" >> ${TGT_EVAL_FILENAME}
fi
else
echo "${SRC_LINE}" >> ${SRC_TRAIN_FILENAME}
echo "${TGT_LINE}" >> ${TGT_TRAIN_FILENAME}
fi
LINE_CNT=$(($LINE_CNT + 1))
done
}
function translation_model_train() {
declare -l TT_SRC_LANG="$1"
declare -l TT_TGT_LANG="$2"
local SRC_FILENAME="`realpath $3`"
local TGT_FILENAME="`realpath $4`"
local ALIGNMENT_METHOD="$5"
local REORDERING_METHOD="$6"
local WORKING_DIR="$7"
declare -r SRC_CORPORA_NAME=`echo ${SRC_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i < length(a); i++) { printf a[i]; if (i < length(a) - 1) { printf "."; } } }'`
declare -r TGT_CORPORA_NAME=`echo ${TGT_FILENAME} | gawk '{split($0, a, "."); for(i = 1; i < length(a); i++) { printf a[i]; if (i < length(a) - 1) { printf "."; } } }'`
if [ "${SRC_CORPORA_NAME}" != "${TGT_CORPORA_NAME}" ]; then
echo "Arrrgh"
exit 1
fi
if [ -d ${WORKING_DIR} ]; then
rm -Rf ${WORKING_DIR} >& /dev/null
fi
mkdir -p ${WORKING_DIR}
WORKING_DIR=`realpath ${WORKING_DIR}`
declare -r DUMMY_FILE="${WORKING_DIR}/dummy.lm"
echo "dummy lm file" > ${DUMMY_FILE}
declare -r LOG_FILE="${WORKING_DIR}/log"
${MOSES_HOME}/scripts/training/train-model.perl -root-dir ${WORKING_DIR} -corpus ${SRC_CORPORA_NAME} -f ${TT_SRC_LANG} -e ${TT_TGT_LANG} -alignment ${ALIGNMENT_METHOD} -reordering ${REORDERING_METHOD} -lm 0:5:${DUMMY_FILE}:0 -external-bin-dir ${GIZA_HOME} 2> ${LOG_FILE}
MOSES_INI_FILE="${WORKING_DIR}/model/moses.ini"
}
function language_model_train() {
local FILENAME="$1"
local SMOOTHING_METHOD="$2"
local WORKING_DIR="$3"
if [ ! -d ${WORKING_DIR} ]; then
mkdir -p ${WORKING_DIR}
fi
declare -r BASENAME=`basename ${FILENAME}`
declare -r START_END_OUTPUT_FILENAME=${WORKING_DIR}/`echo ${BASENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) {if(i == 3) { printf "sb."; } else { printf a[i]; if (i < length(a) - 1) { printf "."; } } } }'`
declare -r LM_FILENAME=${WORKING_DIR}/`echo ${BASENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) {if(i == 3) { printf "lm."; } else { printf a[i]; if (i < length(a) - 1) { printf "."; } } } }'`
COMPILED_LM_FILENAME=${WORKING_DIR}/`echo ${BASENAME} | gawk '{split($0, a, "."); for(i = 1; i <= length(a); i++) {if(i == 3) { printf "arpa."; } else { printf a[i]; if (i < length(a) - 1) { printf "."; } } } }'`
export IRSTLM
${IRSTLM}/bin/add-start-end.sh < ${FILENAME} > ${START_END_OUTPUT_FILENAME}
declare -r TMP_DIR=`mktemp -dp /tmp`
${IRSTLM}/bin/build-lm.sh -i ${START_END_OUTPUT_FILENAME} -t ${TMP_DIR} -p -s ${SMOOTHING_METHOD} -o ${LM_FILENAME}
if [ -d ${TMP_DIR} ]; then
rm -Rf ${TMP_DIR} >& /dev/null
fi
${IRSTLM}/bin/compile-lm --text yes ${LM_FILENAME}.gz ${COMPILED_LM_FILENAME}
}
function mert() {
local MOSES_INI_FILENAME="`realpath $1`"
local COMPILED_LM_FILENAME="`realpath $2`"
local EVAL_FILENAME="$3"
declare -lr _SRC_LANG="$4"
declare -lr _TGT_LANG="$5"
declare -ri MODEL_ORDER="$6"
declare -ri MODEL_TYPE="$7"
local WORKING_DIR="$8"
declare -ri MAX_NO_ITERS="$9"
local INFILENAME=`realpath ${EVAL_FILENAME}`
INFILENAME=`echo ${INFILENAME} | gawk '{split($0, a, "."); for(i = 1; i < length(a); i++) { printf a[i]; if (i < length(a) - 1) { printf "."; } } }'`
if [ ! -f ${MOSES_INI_FILENAME} ]; then
echo "${MOSES_INI_FILENAME} does not exist."
exit 1
fi
if [ -d ${WORKING_DIR} ]; then
rm -Rf ${WORKING_DIR} >& /dev/null
fi
mkdir -p ${WORKING_DIR}
WORKING_DIR=`realpath ${WORKING_DIR}`
MERT_INI_FILENAME="${WORKING_DIR}/trained-moses.ini"
local SED_PROG="/\[lmodel-file\]/,/^[[:space:]]*\$/c\[lmodel-file\]\n${MODEL_TYPE} 0 ${MODEL_ORDER} ${COMPILED_LM_FILENAME}\n"
sed "${SED_PROG}" ${MOSES_INI_FILENAME} > ${MERT_INI_FILENAME}
${MOSES_HOME}/scripts/training/mert-moses.pl --maximum-iterations ${MAX_NO_ITERS} --mertdir ${MOSES_HOME}/bin --working-dir ${WORKING_DIR} ${INFILENAME}.${_SRC_LANG} ${INFILENAME}.${_TGT_LANG} ${MOSES_HOME}/bin/moses ${MERT_INI_FILENAME} 2> ${WORKING_DIR}/log
}
if [ $# -lt 4 ]; then
echo "`basename $0` usage:"
echo " `basename $0` src_file tgt_file src_lang tgt_lang"
echo
exit 1
fi
declare -r SRC_LANG="$3"
declare -r TGT_LANG="$4"
# Tokenise
tokenise "${SRC_LANG}" "$1" "training/tokeniser"
declare -r SRC_TOKENISED_FILENAME="${TOKENISED_FILENAME}"
tokenise "${TGT_LANG}" "$2" "training/tokeniser"
declare -r TGT_TOKENISED_FILENAME="${TOKENISED_FILENAME}"
echo ${SRC_TOKENISED_FILENAME}
echo ${TGT_TOKENISED_FILENAME}
# Cleanup
cleanup "${SRC_TOKENISED_FILENAME}" "${TGT_TOKENISED_FILENAME}" 20
echo ${SRC_CLEANUP_FILENAME}
echo ${TGT_CLEANUP_FILENAME}
# Data split: src, tgt, dev size, eval size
data_split "${SRC_CLEANUP_FILENAME}" "${TGT_CLEANUP_FILENAME}" 1000 500
echo ${SRC_TRAIN_FILENAME}
echo ${TGT_TRAIN_FILENAME}
echo ${SRC_DEVEL_FILENAME}
echo ${TGT_DEVEL_FILENAME}
echo ${SRC_EVAL_FILENAME}
echo ${TGT_EVAL_FILENAME}
# Train the translation model
translation_model_train "${SRC_LANG}" "${TGT_LANG}" "${SRC_DEVEL_FILENAME}" "${TGT_DEVEL_FILENAME}" "grow-diag-final-and" "msd-bidirectional-fe" "training/model"
declare -r MOSES_TT_INI_FILENAME="${MOSES_INI_FILE}"
echo ${MOSES_TT_INI_FILENAME}
# Language model training
language_model_train "${TGT_TOKENISED_FILENAME}" "improved-kneser-ney" "training/lm"
echo ${COMPILED_LM_FILENAME}
# MERT
mert "${MOSES_TT_INI_FILENAME}" "${COMPILED_LM_FILENAME}" "${SRC_EVAL_FILENAME}" "${SRC_LANG}" "${TGT_LANG}" 3 9 "training/mert" 1
echo ${MERT_INI_FILENAME}
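All of the gawk one-liners above implement the same renaming rule: splice a marker such as "tok." or "clean." in front of the file extension. A compact Python equivalent, matching the _make_cleaned_filename helper in cleanup.py further down (illustrative only):
def insert_suffix(filename, marker):
    # corpus.en -> corpus.tok.en, corpus.tok.en -> corpus.tok.clean.en
    bits = filename.split(".")
    bits.insert(-1, marker)
    return ".".join(bits)

assert insert_suffix("cleantrain.en", "tok") == "cleantrain.tok.en"
assert insert_suffix("cleantrain.tok.en", "clean") == "cleantrain.tok.clean.en"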

View File

@ -0,0 +1,23 @@
CC = pclc.py
CFLAGS=-i
SOURCES = training_pipeline.pcl
OBJS = $(SOURCES:.pcl=.py)
SUBDIRS = components
all: subdirs build
build: $(OBJS)
%.py: %.pcl
$(CC) $(CFLAGS) $<
clean:
for dir in $(SUBDIRS); do \
$(MAKE) -C $$dir clean; \
done
rm -f *.py *.pyc *.log *~
subdirs:
for dir in $(SUBDIRS); do \
$(MAKE) -C $$dir ; \
done

View File

@ -0,0 +1,24 @@
CC = pclc.py
CFLAGS = -i
SOURCES = src_trg_tokeniser.pcl translation_model_training.pcl
OBJS = $(SOURCES:.pcl=.py)
SUBDIRS = wrappers
all: subdirs build
build: $(OBJS)
%.py: %.pcl
$(CC) $(CFLAGS) $<
clean:
for dir in $(SUBDIRS); do \
$(MAKE) -C $$dir clean; \
done
rm -f *.py *.pyc *.log *~
subdirs:
for dir in $(SUBDIRS); do \
$(MAKE) -C $$dir ; \
done

View File

@ -0,0 +1,10 @@
[Configuration]
tokeniser.src.language = en
tokeniser.src.tokenisation_dir = test_data/src_trg_tokenizer/tokenised
tokeniser.trg.language = lt
tokeniser.trg.tokenisation_dir = test_data/src_trg_tokenizer/tokenised
tokeniser.moses.installation = /opt/moses
[Inputs]
src_filename = test_data/src_trg_tokenizer/cleantrain.en
trg_filename = test_data/src_trg_tokenizer/cleantrain.lt

View File

@ -0,0 +1,40 @@
#
# Import all of the components to be composed
#
import wrappers.tokenizer.tokenizer as tokeniser
#
# Component definition
#
# +---------+ +---------+ +---------+ +---------+
# src_filename -->+ +--> filename -->+-- src --+--> tokenised_filename -->+---------+--> tokenised_filename -->+ +--> tokenised_src_filename
# | | | | | | | |
# trg_filename -->+ +--> filename -->+---------+-------> filename ------->+-- trg --+--> tokenised_filename -->+ +--> tokenised_trg_filename
# +---------+ +---------+ +---------+ +---------+
# Config: {language::String, Config: {language::String,
# tokenisation_dir::String, tokenisation_dir::String,
# moses_installation_dir::String} moses_installation_dir::String}
#
component src_trg_tokeniser
inputs (src_filename), (trg_filename)
outputs (tokenised_src_filename), (tokenised_trg_filename)
configuration tokeniser.src.language,
tokeniser.src.tokenisation_dir,
tokeniser.trg.language,
tokeniser.trg.tokenisation_dir,
tokeniser.moses.installation
declare
src_tokeniser := new tokeniser with
tokeniser.src.language -> corpus.language,
tokeniser.src.tokenisation_dir -> working.directory.root,
tokeniser.moses.installation -> moses.installation
trg_tokeniser := new tokeniser with
tokeniser.trg.language -> corpus.language,
tokeniser.trg.tokenisation_dir -> working.directory.root,
tokeniser.moses.installation -> moses.installation
as
wire (src_filename -> corpus.filename),
(trg_filename -> corpus.filename) >>>
(src_tokeniser *** trg_tokeniser) >>>
wire (corpus.tokenised.filename -> tokenised_src_filename),
(corpus.tokenised.filename -> tokenised_trg_filename)
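The wire/*** notation above is arrow-style composition: *** runs two components side by side on a pair of messages, >>> chains stages, and wire renames record fields between them. A toy Python model of those combinators (not the pypeline implementation, just the idea):
def wire(**renames):
    # rename keys of a dict-shaped message: wire(a="b") maps msg["a"] to key "b"
    return lambda msg: {new: msg[old] for old, new in renames.items()}

def parallel(f, g):  # the *** combinator over a pair of messages
    return lambda pair: (f(pair[0]), g(pair[1]))

def compose(f, g):   # the >>> combinator
    return lambda x: g(f(x))

tok = lambda msg: {"corpus.tokenised.filename": msg["corpus.filename"] + ".tok"}
pipeline = compose(parallel(wire(src_filename="corpus.filename"),
                            wire(trg_filename="corpus.filename")),
                   parallel(tok, tok))
print(pipeline(({"src_filename": "a.en"}, {"trg_filename": "a.lt"})))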

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,15 @@
[Configuration]
model_training.max_segment_length = 20
model_training.corpus.development_size = 4500
model_training.corpus.evaluation_size = 5000
model_training.src.language = en
model_training.trg.language = lt
model_training.method.alignment = grow-diag-final-and
model_training.method.reordering = msd-bidirectional-fe
model_training.moses.installation = /opt/moses
model_training.giza.installation = /opt/moses/giza++-v1.0.7
model_training.translation_model.dir = test_data/translation_model_training/translation_model
[Inputs]
src_filename = test_data/translation_model_training/cleantrain.en
trg_filename = test_data/translation_model_training/cleantrain.lt

View File

@ -0,0 +1,70 @@
#
# Import all of the components to be composed
#
import wrappers.cleanup.cleanup as cleanup
import wrappers.data_split.data_split as data_split
import wrappers.model_training.model_training as model_training
#
# Component definition
#
# {cleaned_src_filename, {src_filename, {[devel|eval|train]_src_filename, {src_filename, {moses_ini_file,
# cleaned_trg_filename} trg_filename} [devel|eval|train]_trg_filename} trg_filename} evaluation_data_filename}
# | | | | +-------+ |
# +-------+ | | +-------+ | +-------+ V | Model | {moses_ini_file} +-------+ V
# | Clean | V V | Data | V | +---------------->+ Train +----------------->+ Merge +----->
# {src_filename, -->+ +----->+ +------------->+ Split | +-------+ +---+---+
# trg_filename} | Up | | Split | | +---\ Config: {[src|trg]_language::String, ^
# +-------+ +-------+ +-------+ | alignment_method::String, |
# Config: {segment_length::Int} Config: {development_size::Int, | reordering_method::String, |
# evaluation_size::Int} | giza_installation_dir::String, |
# | model_directory::String} |
# \--------------------------------------------/
#
component translation_model_training
inputs src_filename, trg_filename
outputs evaluation_data_filename, moses_ini_filename
configuration model_training.max_segment_length,
model_training.corpus.development_size,
model_training.corpus.evaluation_size,
model_training.src.language,
model_training.trg.language,
model_training.method.alignment,
model_training.method.reordering,
model_training.moses.installation,
model_training.giza.installation,
model_training.translation_model.dir
declare
cleanup := new cleanup with
model_training.max_segment_length -> segment_length_limit
data_split := new data_split with
model_training.corpus.development_size -> development_data_size,
model_training.corpus.evaluation_size -> evaluation_data_size
model_training := new model_training with
model_training.src.language -> source_language,
model_training.trg.language -> target_language,
model_training.method.alignment -> alignment_method,
model_training.method.reordering -> reordering_method,
model_training.moses.installation -> moses_installation_dir,
model_training.giza.installation -> giza_installation_dir,
model_training.translation_model.dir -> translation_model_directory
as
cleanup >>>
wire cleaned_src_filename -> src_filename,
cleaned_trg_filename -> trg_filename >>>
data_split >>>
wire devel_src_filename -> devel_src_filename,
eval_src_filename -> evaluation_data_filename,
train_trg_filename -> _,
train_src_filename -> _,
eval_trg_filename -> _,
devel_trg_filename -> devel_trg_filename >>>
((wire devel_src_filename -> src_filename,
devel_trg_filename -> trg_filename,
evaluation_data_filename -> _ >>>
model_training) &&&
wire evaluation_data_filename -> evaluation_data_filename,
devel_src_filename -> _,
devel_trg_filename -> _) >>>
merge top[moses_ini_filename] -> moses_ini_filename,
bottom[evaluation_data_filename] -> evaluation_data_filename

View File

@ -0,0 +1,14 @@
SUBDIRS = tokenizer
all: subdirs
clean:
for dir in $(SUBDIRS); do \
$(MAKE) -C $$dir clean; \
done
subdirs:
for dir in $(SUBDIRS); do \
$(MAKE) -C $$dir ; \
done

View File

@ -0,0 +1,129 @@
def get_name():
return 'cleanup'
def get_inputs():
return ['src_filename', 'trg_filename']
def get_outputs():
return ['cleaned_src_filename', 'cleaned_trg_filename']
def get_configuration():
return ['segment_length_limit']
def configure(args):
return {'segment_length' : args['segment_length_limit']}
def initialise(config):
def _filter(limit, ifh1, ofh1, ifh2, ofh2):
def _short(line):
n = 0
for c in line:
if c == " ":
n += 1
return n < limit
for (l1, l2) in zip(ifh1, ifh2):
if _short(l1) and _short(l2):
print >>ofh1, l1,
print >>ofh2, l2,
def _make_cleaned_filename(filename):
bits = filename.split(".")
bits.insert(-1, "clean")
return ".".join(bits)
def _filter_main(a, s):
limit = config['segment_length']
(ifh1, ifh2, ofh1, ofh2) = (None, None, None, None)
try:
input_src_filename = a['src_filename']
input_trg_filename = a['trg_filename']
print "Cleanup: Cleaning [%s] and [%s]..." % (input_src_filename, input_trg_filename)
ifh1 = open(input_src_filename, "r")
ifh2 = open(input_trg_filename, "r")
cleaned_src_filename = _make_cleaned_filename(input_src_filename)
cleaned_trg_filename = _make_cleaned_filename(input_trg_filename)
ofh1 = open(cleaned_src_filename, "w")
ofh2 = open(cleaned_trg_filename, "w")
_filter(limit, ifh1, ofh1, ifh2, ofh2)
return {'cleaned_src_filename': cleaned_src_filename,
'cleaned_trg_filename': cleaned_trg_filename}
finally:
def _safe_close(fh):
if fh is not None:
fh.close()
_safe_close(ifh1)
_safe_close(ifh2)
_safe_close(ofh1)
_safe_close(ofh2)
return _filter_main
if __name__ == '__main__':
import os
import tempfile
import test.test as thelp
from pypeline.helpers.helpers import eval_pipeline
def _test_main():
configuration = {'segment_length_limit': 20}
src_filename = tempfile.mkstemp(suffix = ".src", dir = "/tmp")
trg_filename = tempfile.mkstemp(suffix = ".trg", dir = "/tmp")
box_eval = {
'src_filename': src_filename[1],
'trg_filename': trg_filename[1],
'cleaned_src_file_expected': src_filename[1] + ".expected",
'cleaned_trg_file_expected': trg_filename[1] + ".expected"}
try:
_prep_files(box_eval)
_run_test(configuration, box_eval)
finally:
_cleanup_files(box_eval)
def _run_test(configuration, box_eval):
box_config = configure(configuration)
box = initialise(box_config)
output = eval_pipeline(box, box_eval, box_config)
try:
thelp.diff(box_eval['cleaned_src_file_expected'], output['cleaned_src_filename'])
thelp.diff(box_eval['cleaned_trg_file_expected'], output['cleaned_trg_filename'])
finally:
os.unlink(output['cleaned_src_filename'])
os.unlink(output['cleaned_trg_filename'])
def _line(line_lengths):
def _gen_line(tokens):
return " ".join(map(lambda n: "tok" + str(n), range(tokens)))
return map(_gen_line, line_lengths)
def _prep_files(box_eval):
thelp.cat(box_eval['src_filename'], _line([10, 20, 30, 40, 17, 21]))
thelp.cat(box_eval['trg_filename'], _line([40, 30, 20, 10, 20, 21]))
thelp.cat(box_eval['cleaned_src_file_expected'], _line([17]))
thelp.cat(box_eval['cleaned_trg_file_expected'], _line([20]))
def _cleanup_files(box_eval):
try:
for key, filename in box_eval.items():
os.unlink(filename)
        except OSError:
            # best-effort removal of temporary test files
            pass
_test_main()
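
Each box in this pipeline follows the same component protocol: get_name/get_inputs/get_outputs/get_configuration describe the box, configure() maps raw arguments to its private config, and initialise() returns a closure over that config. A minimal sketch of driving the cleanup box directly, outside the pypeline harness (the corpus paths are hypothetical):

    config = configure({'segment_length_limit': 20})
    box = initialise(config)
    # the closure takes (inputs, state); state is unused by this box
    outputs = box({'src_filename': 'corpus.en', 'trg_filename': 'corpus.de'}, None)
    # outputs['cleaned_src_filename'] == 'corpus.clean.en'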

View File

@ -0,0 +1,7 @@
[Configuration]
evaluation_data_size = 7
development_data_size = 13
[Inputs]
src_filename = test_data/data.en
trg_filename = test_data/data.de

View File

@ -0,0 +1,144 @@
def get_name():
return 'data_split'
def get_inputs():
return ['src_filename', 'trg_filename']
def get_outputs():
return ['devel_src_filename', 'devel_trg_filename',
'eval_src_filename', 'eval_trg_filename',
'train_src_filename', 'train_trg_filename']
def get_configuration():
return ['evaluation_data_size', 'development_data_size']
def configure(args):
result = {}
result['evaluate_size'] = args['evaluation_data_size']
result['development_size'] = args['development_data_size']
return result
def initialise(config):
def _copy(size, inp, ofh1, ofh2):
try:
while size != 0:
(l1, l2) = inp.next()
print >>ofh1, l1,
print >>ofh2, l2,
size -= 1
except StopIteration:
pass
def _make_split_filename(filename, data_set):
bits = filename.split(".")
bits.insert(-1, data_set)
new_filename = ".".join(bits)
return new_filename
def _splitter_main(a, s):
(ifh1, ifh2, ofh1, ofh2) = (None, None, None, None)
try:
input_src_filename = a['src_filename']
input_trg_filename = a['trg_filename']
ifh1 = open(input_src_filename, "r")
ifh2 = open(input_trg_filename, "r")
inp = iter(zip(ifh1, ifh2))
result = {}
for (data_set, size) in [('devel', config['development_size']),
('eval', config['evaluate_size']),
('train', -1)]:
output_src_filename = _make_split_filename(input_src_filename, data_set)
output_trg_filename = _make_split_filename(input_trg_filename, data_set)
ofh1 = open(output_src_filename, "w")
ofh2 = open(output_trg_filename, "w")
_copy(size, inp, ofh1, ofh2)
result[data_set + '_src_filename'] = output_src_filename
result[data_set + '_trg_filename'] = output_trg_filename
return result
finally:
def _safe_close(fh):
if fh is not None:
fh.close()
_safe_close(ifh1)
_safe_close(ifh2)
_safe_close(ofh1)
_safe_close(ofh2)
return _splitter_main
if __name__ == '__main__':
import os
import tempfile
import test.test as thelp
from pypeline.helpers.helpers import eval_pipeline
def _test_main():
configuration = {'evaluation_data_size': 7,
'development_data_size': 13}
src_filename = tempfile.mkstemp(suffix = ".src", dir = "/tmp")
trg_filename = tempfile.mkstemp(suffix = ".trg", dir = "/tmp")
box_eval = {'src_filename': src_filename[1],
'trg_filename': trg_filename[1],
'devel_src_expected': src_filename[1] + ".devel.expected",
'devel_trg_expected': trg_filename[1] + ".devel.expected",
'eval_src_expected': src_filename[1] + ".eval.expected",
'eval_trg_expected': trg_filename[1] + ".eval.expected",
'train_src_expected': src_filename[1] + ".train.expected",
'train_trg_expected': trg_filename[1] + ".train.expected"}
try:
_prep_files(box_eval)
_run_test(configuration, box_eval)
finally:
_cleanup_files(box_eval)
def _run_test(configuration, box_eval):
box_config = configure(configuration)
box = initialise(box_config)
output = eval_pipeline(box, box_eval, box_config)
for data_set in ['devel', 'eval', 'train']:
for lang in ['src', 'trg']:
filename = output[data_set + '_' + lang + '_filename']
filename_expected = box_eval[data_set + '_' + lang + '_expected']
thelp.diff(filename_expected, filename)
def _line(line_lengths):
def _gen_line(tokens):
return " ".join(map(lambda n: "tok" + str(n), range(tokens)))
return map(_gen_line, line_lengths)
def _prep_files(box_eval):
thelp.cat(box_eval['src_filename'], _line(range(50)))
thelp.cat(box_eval['trg_filename'], _line(range(50)))
#expected output:
thelp.cat(box_eval['devel_src_expected'], _line(range(0,13)))
thelp.cat(box_eval['devel_trg_expected'], _line(range(0,13)))
thelp.cat(box_eval['eval_src_expected'], _line(range(13,20)))
thelp.cat(box_eval['eval_trg_expected'], _line(range(13,20)))
thelp.cat(box_eval['train_src_expected'], _line(range(20,50)))
thelp.cat(box_eval['train_trg_expected'], _line(range(20,50)))
def _cleanup_files(box_eval):
try:
for key, filename in box_eval.items():
os.unlink(filename)
        except OSError:
            # best-effort removal of temporary test files
            pass
_test_main()
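
The same protocol applies here; a hedged usage sketch (hypothetical paths), showing how the splitter derives names by inserting the set name before the extension and deals lines out in order, development first, then evaluation, then the remainder as training:

    config = configure({'evaluation_data_size': 7, 'development_data_size': 13})
    box = initialise(config)
    out = box({'src_filename': 'data.en', 'trg_filename': 'data.de'}, None)
    # out['devel_src_filename'] == 'data.devel.en'  (lines 1-13)
    # out['eval_src_filename']  == 'data.eval.en'   (lines 14-20)
    # out['train_src_filename'] == 'data.train.en'  (everything else)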

View File

@ -0,0 +1,50 @@
tok0
tok0 tok1
tok0 tok1 tok2
tok0 tok1 tok2 tok3
tok0 tok1 tok2 tok3 tok4
tok0 tok1 tok2 tok3 tok4 tok5
tok0 tok1 tok2 tok3 tok4 tok5 tok6
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45 tok46
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45 tok46 tok47
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45 tok46 tok47 tok48

View File

@ -0,0 +1,50 @@
tok0
tok0 tok1
tok0 tok1 tok2
tok0 tok1 tok2 tok3
tok0 tok1 tok2 tok3 tok4
tok0 tok1 tok2 tok3 tok4 tok5
tok0 tok1 tok2 tok3 tok4 tok5 tok6
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45 tok46
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45 tok46 tok47
tok0 tok1 tok2 tok3 tok4 tok5 tok6 tok7 tok8 tok9 tok10 tok11 tok12 tok13 tok14 tok15 tok16 tok17 tok18 tok19 tok20 tok21 tok22 tok23 tok24 tok25 tok26 tok27 tok28 tok29 tok30 tok31 tok32 tok33 tok34 tok35 tok36 tok37 tok38 tok39 tok40 tok41 tok42 tok43 tok44 tok45 tok46 tok47 tok48

View File

@ -0,0 +1,117 @@
import os
import shutil
import subprocess
import tempfile
def get_name():
return 'irstlm_build'
def get_inputs():
return ['input_filename']
def get_outputs():
return ['add_start_end_filename', 'lm_filename', 'compiled_lm_filename']
def get_configuration():
return ['irstlm_installation_dir', 'irstlm_smoothing_method', 'language_model_directory']
def configure(args):
config = dict()
config['irstlm_install_directory'] = args['irstlm_installation_dir']
config['smoothing_method'] = args['irstlm_smoothing_method']
config['lm_directory'] = args['language_model_directory']
return config
def initialise(config):
def process(a, s):
# Create the LM directory if we need to
if os.path.exists(config['lm_directory']) is False:
os.makedirs(config['lm_directory'])
# The filename of the file to chew through
start_end_input_filename = a['input_filename']
if os.path.exists(start_end_input_filename) is False:
raise Exception("IRSTLM Build: Input file could not be found at [%s]" % start_end_input_filename)
# Derive the output file name for the add start-end marker processor
filename_bits = os.path.basename(start_end_input_filename).split(".")
filename_bits[2] = "sb";
start_end_output_filename = os.path.join(config['lm_directory'], ".".join(filename_bits))
# Derive the output file name of the LM build
filename_bits[2] = "lm"
lm_filename = os.path.join(config['lm_directory'], ".".join(filename_bits))
# Derive the compiled LM file name
filename_bits[2] = "arpa"
compiled_lm_filename = os.path.join(config['lm_directory'], ".".join(filename_bits))
# First thing to do is add start and end markers
start_end_cmdline = [os.path.join(config['irstlm_install_directory'], "bin", "add-start-end.sh")]
infile = open(start_end_input_filename, 'r')
outfile = open(start_end_output_filename, 'w')
print "IRSTLM Build: Invoking [%s]..." % " ".join(start_end_cmdline)
return_code = subprocess.check_call(start_end_cmdline, stdin = infile, stdout = outfile)
if return_code:
raise Exception("IRSTLM add start and end markers failed: input file = [%s], output file = [%s], return code = [%d]" % \
start_end_input_filename, start_end_output_filename, return_code)
# Next build the language model
tmp_dir = tempfile.mkdtemp(dir = "/tmp")
try:
build_lm_cmdline = [os.path.join(config['irstlm_install_directory'], "bin", "build-lm.sh"),
"-i", start_end_output_filename,
"-t", tmp_dir,
"-p",
"-s", config['smoothing_method'],
"-o", lm_filename]
print "IRSTLM Build: Invoking [%s]..." % " ".join(build_lm_cmdline)
return_code = subprocess.check_call(build_lm_cmdline)
if return_code:
raise Exception("IRST language model failed to build: return code = [%d]" % return_code)
finally:
if os.path.exists(tmp_dir):
shutil.rmtree(tmp_dir)
# Compile the LM
lm_filename = lm_filename + ".gz"
compile_lm_cmdline = [os.path.join(config['irstlm_install_directory'], "bin", "compile-lm"),
"--text", "yes",
lm_filename,
compiled_lm_filename]
print "IRSTLM Build: Invoking [%s]..." % " ".join(compile_lm_cmdline)
return_code = subprocess.check_call(compile_lm_cmdline)
if return_code:
raise Exception("IRST language model compilation failed: return code = [%d]" % return_code)
output = {'add_start_end_filename': start_end_output_filename,
'lm_filename': lm_filename,
'compiled_lm_filename': compiled_lm_filename}
print "IRSTLM Build: Output = %s" % output
return output
return process
if __name__ == '__main__':
from pypeline.helpers.helpers import eval_pipeline, cons_function_component
lm_dir = os.environ["PWD"]
    configuration = {'irstlm_installation_dir': os.environ["IRSTLM"],
'irstlm_smoothing_method': 'improved-kneser-ney',
'language_model_directory': lm_dir}
component_config = configure(configuration)
component = initialise(component_config)
value = eval_pipeline(cons_function_component(component),
{'input_filename': '/Users/ianjohnson/Dropbox/Documents/MTM2012/tokenised_files/news-commentary-v7.fr-en.tok.en'},
component_config)
target = {'add_start_end_filename': os.path.join(lm_dir, 'news-commentary-v7.fr-en.sb.en'),
'lm_filename': os.path.join(lm_dir, 'news-commentary-v7.fr-en.lm.en.gz'),
'compiled_lm_filename': os.path.join(lm_dir, 'news-commentary-v7.fr-en.arpa.en')}
print "Target: %s" % target
    if value != target:
        raise Exception("IRSTLM build test failed: output %s does not match target %s" % (value, target))

View File

@ -0,0 +1,98 @@
import os
import shutil
import subprocess
def get_name():
return 'mert'
def get_inputs():
return ['evaluation_data_filename', 'trg_language_model_filename',
'trg_language_model_order', 'trg_language_model_type',
'moses_ini_filename']
def get_outputs():
return ['moses_ini_filename']
def get_configuration():
return ['source_language', 'target_language',
'moses_installation_dir', 'mert_working_directory',
'mert_max_no_iterations']
def configure(args):
result = {}
result['src_lang'] = args['source_language']
result['trg_lang'] = args['target_language']
result['moses_installation_dir'] = args['moses_installation_dir']
result['mert_working_dir'] = args['mert_working_directory']
result['max_no_iterations'] = args['mert_max_no_iterations']
return result
def initialise(config):
def process(a, s):
infilename = os.path.abspath(a['evaluation_data_filename'])
infilename = ".".join(infilename.split(".")[:-1])
lm_file = os.path.abspath(a['trg_language_model_filename'])
lm_order = int(a['trg_language_model_order'])
lm_type = int(a['trg_language_model_type'])
max_no_iters = int(config['max_no_iterations'])
orig_moses_ini = os.path.abspath(a['moses_ini_filename'])
if not os.path.exists(orig_moses_ini):
raise Exception, "Error: Input moses.ini does not exist"
workdir = os.path.abspath(config['mert_working_dir'])
#simply call the training perl script
#remove the workdir if it is already there
if os.path.exists(workdir):
shutil.rmtree(workdir)
os.makedirs(workdir)
#local vars
moses_install_dir = os.path.abspath(config['moses_installation_dir'])
mert_perl = os.path.join(moses_install_dir, 'scripts', 'training', 'mert-moses.pl')
bin_dir = os.path.join(moses_install_dir, 'bin')
moses_bin = os.path.join(moses_install_dir, 'bin', 'moses')
src_file = infilename + '.' + config['src_lang']
ref_file = infilename + '.' + config['trg_lang']
logfile = os.path.join(workdir, 'log')
#change lm configuration in moses ini
moses_ini = os.path.join(workdir, 'trained-moses.ini')
cmd = r"cat %(orig_moses_ini)s | sed '/\[lmodel-file\]/,/^[[:space:]]*$/c\[lmodel-file\]\n%(lm_type)s 0 %(lm_order)s %(lm_file)s\n' > %(moses_ini)s"
cmd = cmd % locals()
os.system(cmd)
#the command
cmd = '%(mert_perl)s --maximum-iterations %(max_no_iters)d --mertdir %(bin_dir)s --working-dir %(workdir)s %(src_file)s %(ref_file)s %(moses_bin)s %(moses_ini)s 2> %(logfile)s'
cmd = cmd % locals()
        #avoid an unread stdout PIPE, which can deadlock on chatty commands
        subprocess.call(cmd, shell=True)
#check the moses ini
new_mosesini = os.path.join(workdir, 'moses.ini')
if not os.path.exists(new_mosesini):
raise Exception, 'Failed MERT'
return {'moses_ini_filename' : new_mosesini}
return process
if __name__ == '__main__':
def __test():
        configuration = {'source_language': 'en',
                         'target_language': 'lt',
                         'moses_installation_dir': os.path.abspath('../../../../'),
                         'mert_working_directory': '../../../../../tuning',
                         'mert_max_no_iterations': 10}
        values = {'evaluation_data_filename': '../../../../../corpus/tune',
                  'moses_ini_filename': '../../../../../model/model/moses.ini',
                  'trg_language_model_filename': '../../../../../corpus/train.lt.lm',
                  'trg_language_model_type': 9,
                  'trg_language_model_order': 4}
        from pypeline.helpers.helpers import run_pipeline
        box_config = configure(configuration)
        box = initialise(box_config)
        print run_pipeline(box, values, None)
#do some test
__test()
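
A note on the sed command above: it replaces the [lmodel-file] stanza of the input moses.ini so that tuning runs against the supplied target language model. The stanza's fields are, in order, LM type, factor, order, and path; with this test's values it would read:

    [lmodel-file]
    9 0 4 ../../../../../corpus/train.lt.lm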

View File

@ -0,0 +1,103 @@
import os
import shutil
import subprocess
def get_name():
return 'model_training'
def get_inputs():
return ['src_filename', 'trg_filename']
def get_outputs():
return ['moses_ini_filename']
def get_configuration():
return ['source_language', 'target_language',
'moses_installation_dir', 'giza_installation_dir',
'translation_model_directory', 'alignment_method',
'reordering_method']
# Alignment = grow-diag-final-and
# Reordering = msd-bidirectional-fe
def configure(args):
result = {}
result['src_lang'] = args['source_language']
result['trg_lang'] = args['target_language']
result['moses_installation_dir'] = args['moses_installation_dir']
result['external_bin_dir'] = args['giza_installation_dir']
result['model_directory'] = args['translation_model_directory']
result['alignment'] = args['alignment_method']
result['reordering'] = args['reordering_method']
return result
def initialise(config):
def process(a, s):
get_corpora_name_fn = lambda fn: ".".join(os.path.basename(fn).split('.')[:-1])
src_filename = os.path.abspath(a['src_filename'])
trg_filename = os.path.abspath(a['trg_filename'])
src_corpora_name = get_corpora_name_fn(src_filename)
trg_corpora_name = get_corpora_name_fn(trg_filename)
if src_corpora_name != trg_corpora_name:
raise Exception, "Mismatch of source [%s] and target [%s] filename" % (src_filename, trg_filename)
infilename = os.path.abspath(os.path.join(os.path.dirname(src_filename), src_corpora_name))
workdir = os.path.abspath(config['model_directory'])
#simply call the training perl script
#remove the workdir if it is already there
if os.path.exists(workdir):
shutil.rmtree(workdir)
os.makedirs(workdir)
#local vars
train_model_perl = os.path.abspath(os.path.join(config['moses_installation_dir'],
'scripts',
'training',
'train-model.perl'))
src_lang = config['src_lang'].lower()
trg_lang = config['trg_lang'].lower()
external_bin = os.path.abspath(config['external_bin_dir'])
#create a dummy lm file
dummy_lmfile = os.path.join(workdir, 'dummy.lm')
f = open(dummy_lmfile, 'w')
print >> f, "dummy lm file"
f.close()
logfile = os.path.join(workdir, 'log')
#the command
alignment_method = config['alignment']
reordering_method = config['reordering']
cmd = '%(train_model_perl)s -root-dir %(workdir)s -corpus %(infilename)s ' \
'-f %(src_lang)s -e %(trg_lang)s -alignment %(alignment_method)s ' \
'-reordering %(reordering_method)s -lm 0:5:%(dummy_lmfile)s:0 ' \
'-external-bin-dir %(external_bin)s 2> %(logfile)s'
cmd = cmd % locals()
        #avoid an unread stdout PIPE, which can deadlock on chatty commands
        subprocess.call(cmd, shell=True)
# check the moses ini
mosesini = os.path.join(workdir, 'model', 'moses.ini')
if not os.path.exists(mosesini):
raise Exception, 'Failed training model'
return {'moses_ini_filename' : mosesini}
return process
if __name__ == '__main__':
def __test():
        configuration = {'source_language': 'en',
                         'target_language': 'lt',
                         'moses_installation_dir': os.environ['MOSES_HOME'],
                         'giza_installation_dir': os.environ['GIZA_HOME'],
                         'translation_model_directory': 'model-dir',
                         'alignment_method': 'grow-diag-final-and',
                         'reordering_method': 'msd-bidirectional-fe'}
        # hypothetical test corpus paths; both sides must share a basename
        values = {'src_filename': '/Users/ianjohnson/work/MTM-2012/corpus/training/cleantrain.en',
                  'trg_filename': '/Users/ianjohnson/work/MTM-2012/corpus/training/cleantrain.lt'}
        from pypeline.helpers.helpers import run_pipeline
        box_config = configure(configuration)
        box = initialise(box_config)
        print run_pipeline(box, values, None)
#do some test
__test()

View File

@ -0,0 +1,15 @@
CC = pclc.py
CFLAGS = -i
SOURCES = tokenizer.pcl
OBJS = $(SOURCES:.pcl=.py)
all: build
build: $(OBJS)
%.py: %.pcl
$(CC) $(CFLAGS) $<
clean:
rm -f *.py *.pyc *.log *~
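
A note on this Makefile: pclc.py is the PCL-to-Python compiler, invoked with the -i flag from CFLAGS, so each component compiles as, e.g.:

    pclc.py -i tokenizer.pcl    # emits tokenizer.py next to the source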

File diff suppressed because it is too large

View File

@ -0,0 +1,7 @@
[Configuration]
corpus.language = en
working.directory.root = tokenised
moses.installation = /opt/moses
[Inputs]
corpus.filename = test_data/test.en

View File

@ -0,0 +1,38 @@
import pcl.io.file as file
import pcl.os.path as path
import pcl.system.process as process
import pcl.util.list as list
import pcl.util.string as string
component tokenizer
input corpus.filename
output corpus.tokenised.filename
configuration corpus.language, working.directory.root, moses.installation
do
language <- string.lower(@corpus.language)
corpus.file.basename <- path.basename(corpus.filename)
corpus.file.basename.bits <- string.split(corpus.file.basename, ".")
list.insert(corpus.file.basename.bits, -1, "tok")
result.basename <- string.join(corpus.file.basename.bits, ".")
result.pathname <- path.join(@working.directory.root, result.basename)
working.exists <- path.exists(@working.directory.root)
if working.exists == False then
path.makedirs(@working.directory.root)
return ()
else
return ()
endif
tokeniser.cmd <- path.join(@moses.installation, "scripts",
"tokenizer", "tokenizer.perl")
tokeniser.cmd.line <- list.cons(tokeniser.cmd, "-l", language, "-q")
corpus.file <- file.openFile(corpus.filename, "r")
result.file <- file.openFile(result.pathname, "w")
process.callAndCheck(tokeniser.cmd.line, corpus.file, result.file)
file.closeFile(result.file)
file.closeFile(corpus.file)
return corpus.tokenised.filename <- result.pathname

View File

@ -0,0 +1,21 @@
[Configuration]
source_language = en
target_language = lt
max_segment_length = 20
corpus_development_size = 1000
corpus_evaluation_size = 500
alignment_method = grow-diag-final-and
reordering_method = msd-bidirectional-fe
smoothing_method = improved-kneser-ney
tokenisation_directory = training/tokenisation
translation_model_directory = training/model
language_model_directory = training/lm
mert_directory = training/mert
mert_max_no_iterations = 10
moses_installation_directory = $(MOSES_HOME)
giza_installation_directory = $(GIZA_HOME)
irstlm_installation_directory = $(IRSTLM)
[Inputs]
src_filename = ../test_data/cleantrain.en
trg_filename = ../test_data/cleantrain.lt

View File

@ -0,0 +1,117 @@
#
# Import all of the components to be composed
#
import components.src_trg_tokeniser as tokeniser
import components.translation_model_training as model_training
import components.wrappers.irstlm_build.irstlm_build as lang_model
import components.wrappers.mert.mert as mert
#
# Component definition
#
# Pipeline topology (ASCII diagram condensed):
#
#   {src_filename, trg_filename}
#     -> split -> src/trg tokeniser -> merge
#     -> split -> (model training &&& IRSTLM build) -> merge
#     -> MERT -> {moses_ini_filename}
#
# Configuration consumed along the way:
#   tokeniser:      tokeniser.[src|trg].language, tokeniser.[src|trg].tokeniser_dir,
#                   tokeniser.moses.installation
#   model training: model_training.max_segment_length,
#                   model_training.corpus.[development_size|evaluation_size],
#                   model_training.[src|trg].language,
#                   model_training.method.[alignment|reordering],
#                   model_training.giza.installation,
#                   model_training.translation_model.dir
#   IRSTLM build:   irstlm_installation_dir::String, irstlm_smoothing_method::String,
#                   language_model_directory
#
# Intermediate value sets: {tokenised_src_filename, tokenised_trg_filename},
# {moses_ini_filename, evaluation_data_filename}, and
# {lm_filename, compiled_lm_filename, add_start_end_filename}; MERT finally
# receives {moses_ini_filename, evaluation_data_filename,
# trg_language_model_filename, trg_language_model_order,
# trg_language_model_type}.
#
#
component training_pipeline
inputs src_filename, trg_filename
output moses_ini_filename
configuration source_language,
target_language,
max_segment_length,
corpus_development_size,
corpus_evaluation_size,
alignment_method,
reordering_method,
smoothing_method,
tokenisation_directory,
translation_model_directory,
language_model_directory,
mert_directory,
mert_max_no_iterations,
moses_installation_directory,
giza_installation_directory,
irstlm_installation_directory
declare
tokeniser := new tokeniser with
source_language -> tokeniser.src.language,
target_language -> tokeniser.trg.language,
tokenisation_directory -> tokeniser.src.tokenisation_dir,
tokenisation_directory -> tokeniser.trg.tokenisation_dir,
moses_installation_directory -> tokeniser.moses.installation
model_training := new model_training with
max_segment_length -> model_training.max_segment_length,
corpus_development_size -> model_training.corpus.development_size,
corpus_evaluation_size -> model_training.corpus.evaluation_size,
translation_model_directory -> model_training.translation_model.dir,
alignment_method -> model_training.method.alignment,
reordering_method -> model_training.method.reordering,
source_language -> model_training.src.language,
moses_installation_directory -> model_training.moses.installation,
giza_installation_directory -> model_training.giza.installation,
target_language -> model_training.trg.language
irstlm := new lang_model with
irstlm_installation_directory -> irstlm_installation_dir,
smoothing_method -> irstlm_smoothing_method,
language_model_directory -> language_model_directory
mert := new mert with
source_language -> source_language,
target_language -> target_language,
moses_installation_directory -> moses_installation_dir,
mert_directory -> mert_working_directory,
mert_max_no_iterations -> mert_max_no_iterations
as
# Split and transform the input to the tokeniser component
# Inputs: src_filename, trg_filename
# Outputs: (tokenised_src_filename), (tokenised_trg_filename)
(wire src_filename -> src_filename,
trg_filename -> _ &&&
wire trg_filename -> trg_filename,
src_filename -> _) >>>
tokeniser >>>
# Merge output from tokeniser
# Inputs: (tokenised_src_filename), (tokenised_trg_filename)
# Outputs: tokenised_src_filename, tokenised_trg_filename
merge top[tokenised_src_filename] -> tokenised_src_filename,
bottom[tokenised_trg_filename] -> tokenised_trg_filename >>>
# Train the translation table and target language model
# Inputs: tokenised_src_filename, tokenised_trg_filename
# Outputs: (moses_ini_filename), ('add_start_end_filename', 'lm_filename', 'compiled_lm_filename')
((wire tokenised_src_filename -> src_filename,
tokenised_trg_filename -> trg_filename >>> model_training) &&&
(wire tokenised_trg_filename -> input_filename,
tokenised_src_filename -> _ >>> irstlm)) >>>
# Merge the output from the TT and LM training component
# Inputs: (moses_ini_filename, evaluation_data_filename),
# (compiled_lm_filename, add_start_end_filename, lm_filename)
# Outputs: moses_ini_filename, evaluation_data_filename, evaluation_data_filename,
# trg_language_model_filename, trg_language_model_order, trg_language_model_type
merge top[moses_ini_filename] -> moses_ini_filename,
top[evaluation_data_filename] -> evaluation_data_filename,
bottom[compiled_lm_filename] -> trg_language_model_filename,
bottom[add_start_end_filename] -> _,
bottom[lm_filename] -> _,
3 -> trg_language_model_order,
9 -> trg_language_model_type >>>
mert
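
The combinators above read as in arrow-style pipeline libraries: wire renames (or, via _, drops) keys of the value dictionary, &&& runs two components on the same input to produce a (top, bottom) pair, >>> is left-to-right composition, and merge rebuilds a single dictionary from that pair, injecting literals such as 3 and 9 directly. A rough Python analogy (hypothetical helpers, not the PCL runtime):

    def wire(mapping):           # mapping: {'src_key': 'dst_key'}; '_' drops a key
        return lambda d: dict((dst, d[src]) for src, dst in mapping.items() if dst != '_')

    def fanout(f, g):            # &&& : both components see the same input
        return lambda d: (f(d), g(d))

    def compose(f, g):           # >>> : feed f's output into g
        return lambda d: g(f(d))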

@ -0,0 +1 @@
Subproject commit e33ae59b40a6e17fe60e436b3795f0bc559fa8b8

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,13 @@
with-re2 = [ option.get "with-re2" ] ;
if $(with-re2) {
lib re2 : : <search>$(with-re2)/lib ;
external-lib glib-2.0 ;
glib-cflags = [ _shell "pkg-config --cflags glib-2.0" ] ;
includes += <include>$(with-re2)/include ;
exe tokenizer : tokenizer.cpp tokenizer_main.cpp Parameters.cpp re2 glib-2.0 : <cflags>-std=c++0x <cflags>$(glib-cflags) $(includes) ;
}
else {
alias tokenizer ;
}

View File

@ -0,0 +1,39 @@
#include "Parameters.h"
#ifdef TOKENIZER_NAMESPACE
namespace TOKENIZER_NAMESPACE {
#endif
Parameters::Parameters()
: nthreads(0)
, chunksize(2000)
, cfg_path(0)
, verbose_p(false)
, detag_p(false)
, alltag_p(false)
, entities_p(false)
, escape_p(false)
, aggro_p(false)
, supersub_p(false)
, url_p(true)
, downcase_p(false)
, normalize_p(false)
, penn_p(false)
, words_p(false)
, denumber_p(false)
, narrow_latin_p(false)
, narrow_kana_p(false)
, refined_p(false)
, unescape_p(false)
, drop_bad_p(false)
, split_p(false)
, notokenization_p(false)
, para_marks_p(false)
, split_breaks_p(false)
{
}
#ifdef TOKENIZER_NAMESPACE
}
#endif

View File

@ -0,0 +1,51 @@
#pragma once
#include <string>
#include <vector>
#ifdef TOKENIZER_NAMESPACE
namespace TOKENIZER_NAMESPACE {
#endif
struct Parameters
{
std::string lang_iso;
std::vector<std::string> args;
std::string out_path;
int nthreads;
int chunksize;
const char *cfg_path;
bool verbose_p;
bool detag_p;
bool alltag_p;
bool entities_p;
bool escape_p;
bool aggro_p;
bool supersub_p;
bool url_p;
bool downcase_p;
bool normalize_p;
bool penn_p;
bool words_p;
bool denumber_p;
bool narrow_latin_p;
bool narrow_kana_p;
bool refined_p;
bool unescape_p;
bool drop_bad_p;
bool split_p;
bool notokenization_p;
bool para_marks_p;
bool split_breaks_p;
Parameters();
Parameters(const Parameters& _);
};
#ifdef TOKENIZER_NAMESPACE
}
#endif

File diff suppressed because it is too large

View File

@ -0,0 +1,205 @@
#include <string>
#include <iostream>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <unordered_map>
#include <set>
#include <vector>
#include <iterator>
#include <stdexcept>
#include <re2/re2.h>
#include <unistd.h>
#include "Parameters.h"
#ifdef TOKENIZER_NAMESPACE
namespace TOKENIZER_NAMESPACE {
#endif
//
// @about
// Tokenizer implements the process of Koehn's tokenizer.perl via RE2
//
class Tokenizer {
private:
typedef enum {
empty = 0,
blank,
upper, // upper case
letta, // extended word class (includes number, hyphen)
numba,
hyphn,
stops, // blank to stops are "extended word class" variants
quote, // init & fini = {',"}
pinit, // init (includes INVERT_*)
pfini, // fini
pfpct, // fini + pct
marks,
limit
} charclass_t;
std::size_t nthreads;
std::size_t chunksize;
std::string cfg_dir;
// non-breaking prefixes (numeric) utf8
std::set<std::string> nbpre_num_set;
// non-breaking prefixes (other) utf8
std::set<std::string> nbpre_gen_set;
// non-breaking prefixes (numeric) ucs4
std::set<std::wstring> nbpre_num_ucs4;
// non-breaking prefixes (other) ucs4
std::set<std::wstring> nbpre_gen_ucs4;
// compiled protected patterns
std::vector<re2::RE2 *> prot_pat_vec;
protected:
// language
std::string lang_iso;
bool english_p; // is lang_iso "en"
bool latin_p; // is lang_iso "fr" or "it"
bool skip_xml_p;
bool skip_alltags_p;
bool entities_p;
bool escape_p;
bool unescape_p;
bool aggressive_hyphen_p;
bool supersub_p;
bool url_p;
bool downcase_p;
bool normalize_p;
bool penn_p;
bool narrow_latin_p;
bool narrow_kana_p;
bool refined_p;
bool drop_bad_p;
bool splits_p;
bool verbose_p;
bool para_marks_p;
bool split_breaks_p;
// return counts of general and numeric prefixes loaded
std::pair<int,int> load_prefixes(std::ifstream& ifs); // used by init(), parameterized by lang_iso
// in-place 1 line tokenizer, replaces input string, depends on wrapper to set-up invariants
void protected_tokenize(std::string& inplace);
// used for boost::thread
struct VectorTokenizerCallable {
Tokenizer *tokenizer;
std::vector<std::string>& in;
std::vector<std::string>& out;
VectorTokenizerCallable(Tokenizer *_tokenizer,
std::vector<std::string>& _in,
std::vector<std::string>& _out)
: tokenizer(_tokenizer)
, in(_in)
, out(_out) {
};
void operator()() {
out.resize(in.size());
for (std::size_t ii = 0; ii < in.size(); ++ii)
if (in[ii].empty())
out[ii] = in[ii];
else if (tokenizer->penn_p)
out[ii] = tokenizer->penn_tokenize(in[ii]);
else
out[ii] = tokenizer->quik_tokenize(in[ii]);
};
};
public:
Tokenizer(); // UNIMPL
// no throw
Tokenizer(const Parameters& _params);
// frees dynamically compiled expressions
~Tokenizer();
// required before other methods, may throw
void init(const char *cfg_dir_path = 0);
void set_config_dir(const std::string& _cfg_dir);
// required after processing a contiguous sequence of lines when sentence splitting is on
void reset();
// simultaneous sentence splitting not yet implemented
bool splitting() const { return splits_p; }
// escapes chars the set &|"'<> after tokenization (moses special characters)
bool escape(std::string& inplace);
// used in detokenizer, converts entities into characters
// if escape_p is set, does not unescape moses special tokens, thus
// escape_p and unescape_p can be used together usefully
bool unescape(std::string& inplace);
// streaming select-tokenizer reads from is, writes to os, preserving line breaks (unless splitting)
std::size_t tokenize(std::istream& is, std::ostream& os);
// quik-tokenize padded line buffer to return string
std::string quik_tokenize(const std::string& buf);
// penn-tokenize padded line buffer to return string // untested
std::string penn_tokenize(const std::string& buf);
// select-tokenize padded line buffer to return string
std::string tokenize(const std::string& buf) {
return penn_p ? penn_tokenize(buf) : quik_tokenize(buf);
}
// tokenize with output argument
void tokenize(const std::string& buf, std::string& outs) {
outs = tokenize(buf);
}
// tokenize to a vector
std::vector<std::string> tokens(const std::string& in) {
std::istringstream tokss(penn_p ? penn_tokenize(in) : tokenize(in));
std::vector<std::string> outv;
std::copy(std::istream_iterator<std::string>(tokss),
std::istream_iterator<std::string>(),
std::back_inserter(outv));
return outv;
}
// streaming detokenizer reads from is, writes to os, preserving breaks
std::size_t detokenize(std::istream& is, std::ostream &os);
// detokenize padded line buffer to return string
std::string detokenize(const std::string& buf);
void detokenize(const std::string& buf, std::string& outs) {
outs = detokenize(buf);
}
// detokenize from a vector
std::string detokenize(const std::vector<std::string>& inv) {
std::ostringstream oss;
std::copy(inv.begin(), inv.end(), std::ostream_iterator<std::string>(oss," "));
return detokenize(oss.str());
}
// split a string on sentence boundaries (approximately)
std::vector<std::string> splitter(const std::string &istr,bool *continuation_p = 0);
// split sentences from input stream and write one per line on output stream
std::pair<std::size_t,std::size_t> splitter(std::istream& is, std::ostream& os);
}; // end class Tokenizer
#ifdef TOKENIZER_NAMESPACE
};
#endif

View File

@ -0,0 +1,352 @@
#include "tokenizer.h"
#include "Parameters.h"
#include <memory>
#include <vector>
#include <cctype>
#include <cstring>
#ifdef TOKENIZER_NAMESPACE
using namespace TOKENIZER_NAMESPACE ;
#endif
void
usage(const char *path)
{
std::cerr << "Usage: " << path << "[-{v|x|p|a|e|s|u|n|N]* [LL] [-{c|o} PATH]* INFILE*" << std::endl;
std::cerr << " -a -- aggressive hyphenization" << std::endl;
std::cerr << " -b -- drop bad bytes" << std::endl;
std::cerr << " -B -- splitter will split on linebreak" << std::endl;
std::cerr << " -c DIR -- config (pattern) file directory" << std::endl;
std::cerr << " -d -- downcase" << std::endl;
std::cerr << " -D -- detokenize" << std::endl;
std::cerr << " -e -- do not escape entities during tokenization" << std::endl;
std::cerr << " -E -- preserve entities during tokenization" << std::endl;
std::cerr << " -k -- narrow kana" << std::endl;
std::cerr << " -n -- narrow latin" << std::endl;
std::cerr << " -N -- normalize" << std::endl;
std::cerr << " -o OUT -- output file path" << std::endl;
std::cerr << " -p -- penn treebank style" << std::endl;
std::cerr << " -r -- refined contraction and quantity conjoining" << std::endl;
std::cerr << " -s -- super- and sub-script conjoining" << std::endl;
std::cerr << " -S -- buffer and sentence-split lines" << std::endl;
std::cerr << " -T -- do not tokenize, just split, no <P> marks" << std::endl;
std::cerr << " -t N[,C] -- use N threads (1), chunksize C lines" << std::endl;
std::cerr << " -u -- disable url handling" << std::endl;
std::cerr << " -U -- unescape entities before tokenization, after detokenization" << std::endl;
std::cerr << " -v -- verbose" << std::endl;
std::cerr << " -w -- word filter" << std::endl;
std::cerr << " -x -- skip xml tag lines" << std::endl;
std::cerr << " -y -- skip all xml tags" << std::endl;
std::cerr << " -X -- split only, with <P> marks" << std::endl;
std::cerr << "Default is -c ., stdin, stdout." << std::endl;
std::cerr << "LL in en,fr,it affect contraction. LL selects nonbreaking prefix file" << std::endl;
std::cerr << "nonbreaking_prefix.LL is sought in getenv('TOKENIZER_SHARED_DIR')." << std::endl;
return;
}
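// token_word: helper for the -w (word filter) mode; lower-cases a token and
// keeps its leading run of letters/digits (internal '-' and '\'' allowed),
// returning an empty string for tokens that do not look like words.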
std::string token_word(const std::string& in) {
int pos = -1;
int digits_prefixed = 0;
int nalpha = 0;
int len = in.size();
std::vector<char> cv;
int last_quirk = -1;
while (++pos < len) {
char ch = in.at(pos);
if (std::isdigit(ch)) {
if (digits_prefixed > 0) {
last_quirk = pos;
break;
}
digits_prefixed--;
cv.push_back(std::tolower(ch));
} else if (std::isalpha(ch)) {
if (digits_prefixed < 0)
digits_prefixed = -digits_prefixed;
cv.push_back(std::tolower(ch));
nalpha++;
} else {
if (digits_prefixed < 0)
digits_prefixed = -digits_prefixed;
last_quirk = pos;
if ((ch == '-' || ch == '\'') && pos != 0) {
cv.push_back(ch);
} else {
break;
}
}
}
if (last_quirk == pos || (digits_prefixed > 0 && nalpha == 0))
cv.clear(); // invalid word
return std::string(cv.begin(),cv.end());
}
int
copy_words(Tokenizer& tize, std::istream& ifs, std::ostream& ofs) {
int nlines = 0;
std::string line;
while (ifs.good() && std::getline(ifs,line)) {
if (line.empty())
continue;
std::vector<std::string> tokens(tize.tokens(line));
int count = 0;
bool was_break = false;
for (auto& token: tokens) {
if (token.empty()) {
if (count || was_break) {
ofs << std::endl;
count = 0;
nlines++;
was_break = true;
continue;
}
}
was_break = false;
std::string word(token_word(token));
if (word.empty()) {
continue;
}
if (count++) {
ofs << ' ';
}
ofs << word;
}
if (count) {
ofs << std::endl;
nlines++;
}
}
return nlines;
}
int main(int ac, char **av)
{
int rc = 0;
Parameters params;
const char *prog = av[0];
bool next_cfg_p = false;
bool next_output_p = false;
bool next_threads_p = false;
bool detokenize_p = std::strstr(av[0],"detokenize") != 0;
if (!detokenize_p)
params.split_p = std::strstr(av[0],"splitter") != 0;
while (++av,--ac) {
if (**av == '-') {
switch (av[0][1]) {
case 'a':
params.aggro_p = true;
break;
case 'b':
params.drop_bad_p = true;
break;
case 'B':
params.split_breaks_p = true;
break;
case 'c':
next_cfg_p = true;
break;
case 'd':
params.downcase_p = true;
break;
case 'D':
detokenize_p = !detokenize_p;
break;
case 'e':
params.escape_p = !params.escape_p;
break;
case 'E':
params.entities_p = true;
break;
case 'h':
usage(prog);
exit(0);
case 'k':
params.narrow_kana_p = true;
break;
case 'n':
params.narrow_latin_p = true;
break;
case 'N':
params.normalize_p = true;
break;
case 'o':
next_output_p = true;
break;
case 'p':
params.penn_p = true;
break;
case 'r':
params.refined_p = true;
break;
case 's':
params.supersub_p = true;
break;
case 'S':
params.split_p = !params.split_p;
break;
case 'T':
params.notokenization_p = true;
params.para_marks_p = false;
break;
case 't':
next_threads_p = true;
break;
case 'U':
params.unescape_p = true;
break;
case 'u':
params.url_p = false;
break;
case 'v':
params.verbose_p = true;
break;
case 'w':
params.words_p = true;
break;
case 'x':
params.detag_p = true;
break;
case 'X':
params.notokenization_p = true;
params.para_marks_p = true;
break;
case 'y':
params.alltag_p = true;
break;
case 'l':
// ignored
break;
default:
std::cerr << "Unknown option: " << *av << std::endl;
::exit(1);
}
} else if (params.lang_iso.empty() && strlen(*av) == 2 && !isdigit(**av)) {
params.lang_iso = *av;
} else if (next_output_p) {
next_output_p = false;
params.out_path = *av;
} else if (next_cfg_p) {
next_cfg_p = false;
params.cfg_path = *av;
} else if (next_threads_p) {
next_threads_p = false;
char *comma = strchr(*av,',');
if (comma) {
*comma++ = 0;
params.chunksize = std::strtoul(comma,0,0);
}
params.nthreads = std::strtoul(*av,0,0);
} else {
params.args.push_back(std::string(*av));
}
}
if (!params.cfg_path) {
params.cfg_path = getenv("TOKENIZER_SHARED_DIR");
}
if (!params.cfg_path) {
if (!::access("../share/.",X_OK)) {
if (!::access("../share/moses/.",X_OK)) {
params.cfg_path = "../share/moses";
} else {
params.cfg_path = "../share";
}
} else if (!::access("./scripts/share/.",X_OK)) {
params.cfg_path = "./scripts/share";
} else if (!::access("./nonbreaking_prefix.en",R_OK)) {
params.cfg_path = ".";
} else {
const char *slash = std::strrchr(prog,'/');
if (slash) {
std::string cfg_dir_str(prog,slash-prog);
std::string cfg_shr_str(cfg_dir_str);
cfg_shr_str.append("/shared");
std::string cfg_mos_str(cfg_shr_str);
cfg_mos_str.append("/moses");
if (!::access(cfg_mos_str.c_str(),X_OK)) {
params.cfg_path = strdup(cfg_mos_str.c_str());
} else if (!::access(cfg_shr_str.c_str(),X_OK)) {
params.cfg_path = strdup(cfg_shr_str.c_str());
} else if (!::access(cfg_dir_str.c_str(),X_OK)) {
params.cfg_path = strdup(cfg_dir_str.c_str());
}
}
}
}
if (params.cfg_path) {
if (params.verbose_p) {
std::cerr << "config path: " << params.cfg_path << std::endl;
}
}
std::unique_ptr<std::ofstream> pofs;
if (!params.out_path.empty()) {
pofs.reset(new std::ofstream(params.out_path.c_str()));
}
std::ostream& ofs(pofs ? *pofs : std::cout);
if (params.lang_iso.empty())
params.lang_iso = "en";
Tokenizer tize(params);
tize.init();
std::pair<std::size_t,std::size_t> plines = { 0, 0 };
if (params.words_p) {
if (params.args.empty()) {
plines.first += copy_words(tize,std::cin,ofs);
} else {
for (std::string& arg : params.args) {
try {
std::ifstream ifs(arg.c_str());
plines.first += copy_words(tize,ifs,ofs);
} catch (...) {
std::cerr << "Exception extracting words from path " << arg << std::endl;
}
}
}
} else if (params.args.empty()) {
if (detokenize_p) {
plines.first = tize.detokenize(std::cin,ofs);
} else if (params.notokenization_p) {
plines = tize.splitter(std::cin,ofs);
} else {
plines.first = tize.tokenize(std::cin,ofs);
}
} else {
for (std::string& arg : params.args) {
try {
std::ifstream ifs(arg.c_str());
if (detokenize_p) {
plines.first = tize.detokenize(ifs,ofs);
} else if (params.notokenization_p) {
plines = tize.splitter(ifs,ofs);
} else {
plines.first = tize.tokenize(ifs,ofs);
}
} catch (...) {
std::cerr << "Exception tokenizing from path " << arg << std::endl;
}
}
}
if (params.verbose_p) {
std::cerr << "%%% " << plines.first << " lines." << std::endl;
if (plines.second) {
std::cerr << "%%% " << plines.second << " sentences." << std::endl;
}
}
return rc;
}

Some files were not shown because too many files have changed in this diff