Merge pull request #1 from moses-smt/master

update my local fork
Tomáš Fulajtár 2015-10-12 18:37:26 +02:00
commit 83e25a3f5e
138 changed files with 4489 additions and 948 deletions

.gitignore

@@ -84,3 +84,4 @@ mingw/MosesGUI/_eric4project/
contrib/m4m/merge-sorted
mert/hgdecode
.bash_history*

.gitmodules

@@ -4,3 +4,6 @@
[submodule "contrib/omtc/omtc"]
path = contrib/omtc/omtc
url = https://github.com/ianj-als/omtc.git
[submodule "mmt"]
path = mmt
url = https://github.com/modernmt/moses-submodule

COPYING

@@ -0,0 +1,460 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations
below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that
there is no warranty for the free library. Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it
becomes a de-facto standard. To achieve this, non-free programs must
be allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control
compilation and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.
c) Accompany the work with a written offer, valid for at least
three years, to give the same user the materials specified in
Subsection 6a, above, for a charge no more than the cost of
performing this distribution.
d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply, and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License
may add an explicit geographical distribution limitation excluding those
countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.

Jamroot

@@ -17,6 +17,9 @@
#Note that, like language models, this is the --prefix where the library was
#installed, not some executable within the library.
#
#--no-xmlrpc-c
# Don't use xmlrpc-c library, even if it exists. Don't build moses server
#
#Compact phrase table and compact lexical reordering table
#--with-cmph=/path/to/cmph
#
@@ -51,7 +54,7 @@
# --static forces static linking (the default will fall
# back to shared)
#
# debug-symbols=on|off include (default) or exclude debugging
# debug-symbols=on|off include or exclude (default) debugging
# information also known as -g
# --notrace compiles without TRACE macros
#
@@ -143,6 +146,13 @@ if [ option.get "debug-build" : : "yes" ] {
echo "Building with -Og to enable easier profiling and debugging. Only available on gcc 4.8+." ;
}
if [ option.get "with-address-sanitizer" : : "yes" ] {
requirements += <cxxflags>-fsanitize=address ;
requirements += <cxxflags>-fno-omit-frame-pointer ;
requirements += <linkflags>-fsanitize=address ;
echo "Building with AddressSanitizer to enable debugging of memory errors. Only available on gcc 4.8+." ;
}
if [ option.get "enable-mpi" : : "yes" ] {
import mpi ;
using mpi ;
@@ -154,6 +164,12 @@ if [ option.get "enable-mpi" : : "yes" ] {
requirements += <library>boost_serialization ;
}
mmt = [ option.get "mmt" ] ;
if $(mmt) {
requirements += <define>MMT ;
requirements += <include>$(mmt) ;
}
requirements += [ option.get "notrace" : <define>TRACE_ENABLE=1 ] ;
requirements += [ option.get "enable-boost-pool" : : <define>USE_BOOST_POOL ] ;
requirements += [ option.get "with-mm" : : <define>PT_UG ] ;
@@ -198,7 +214,7 @@ if [ option.get "with-vw" ] {
project : default-build
<threading>multi
<warnings>on
<debug-symbols>on
<debug-symbols>off
<variant>release
<link>static
;
@@ -282,6 +298,8 @@ contrib/server//mosesserver
mm
rephraser
contrib/c++tokenizer//tokenizer
contrib/expected-bleu-training//train-expected-bleu
contrib/expected-bleu-training//prepare-expected-bleu-training
;


@@ -0,0 +1,223 @@
/*
Moses - statistical machine translation system
Copyright (C) 2005-2015 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <cassert>   // assert
#include <cmath>     // std::fpclassify, fabs
#include <cstdlib>   // exit
#include <cstring>   // memcpy
#include "ExpectedBleuOptimizer.h"
namespace ExpectedBleuTraining
{
void ExpectedBleuOptimizer::AddTrainingInstance(const size_t nBestSizeCount,
const std::vector<float>& sBleu,
const std::vector<double>& overallScoreUntransformed,
const std::vector< boost::unordered_map<size_t, float> > &sparseScore,
bool maintainUpdateSet)
{
// compute xBLEU
double sumUntransformedScores = 0.0;
for (std::vector<double>::const_iterator overallScoreUntransformedIt=overallScoreUntransformed.begin();
overallScoreUntransformedIt!=overallScoreUntransformed.end(); ++overallScoreUntransformedIt)
{
sumUntransformedScores += *overallScoreUntransformedIt;
}
double xBleu = 0.0;
assert(nBestSizeCount == overallScoreUntransformed.size());
std::vector<double> p;
for (size_t i=0; i<nBestSizeCount; ++i)
{
if (sumUntransformedScores != 0) {
p.push_back( overallScoreUntransformed[i] / sumUntransformedScores );
} else {
p.push_back( 0 );
}
xBleu += p.back() * sBleu[ i ];
}
for (size_t i=0; i<nBestSizeCount; ++i)
{
double D = sBleu[ i ] - xBleu;
for (boost::unordered_map<size_t, float>::const_iterator sparseScoreIt=sparseScore[i].begin();
sparseScoreIt!=sparseScore[i].end(); ++sparseScoreIt)
{
const size_t name = sparseScoreIt->first;
float N = sparseScoreIt->second;
if ( std::fpclassify( p[i] * N * D ) == FP_SUBNORMAL )
{
m_err << "Error: encountered subnormal value: p[i] * N * D= " << p[i] * N * D
<< " with p[i]= " << p[i] << " N= " << N << " D= " << D << '\n';
m_err.flush();
exit(1);
} else {
m_gradient[name] += p[i] * N * D;
if ( maintainUpdateSet )
{
m_updateSet.insert(name);
}
}
}
}
m_xBleu += xBleu;
}
void ExpectedBleuOptimizer::InitSGD(const std::vector<float>& sparseScalingFactor)
{
const size_t nFeatures = sparseScalingFactor.size();
m_previousSparseScalingFactor.resize(nFeatures); // must be sized before .at(0)
memcpy(&m_previousSparseScalingFactor.at(0), &sparseScalingFactor.at(0),
       nFeatures*sizeof(float)); // memcpy counts bytes, not floats
m_gradient.resize(nFeatures);
}
float ExpectedBleuOptimizer::UpdateSGD(std::vector<float>& sparseScalingFactor,
size_t batchSize,
bool useUpdateSet)
{
float xBleu = m_xBleu / batchSize;
// update sparse scaling factors
if (useUpdateSet) {
for (std::set<size_t>::const_iterator it = m_updateSet.begin(); it != m_updateSet.end(); ++it)
{
size_t name = *it;
UpdateSingleScalingFactorSGD(name, sparseScalingFactor, batchSize);
}
m_updateSet.clear();
} else {
for (size_t name=0; name<sparseScalingFactor.size(); ++name)
{
UpdateSingleScalingFactorSGD(name, sparseScalingFactor, batchSize);
}
}
m_xBleu = 0;
m_gradient.clear();
return xBleu;
}
void ExpectedBleuOptimizer::UpdateSingleScalingFactorSGD(size_t name,
std::vector<float>& sparseScalingFactor,
size_t batchSize)
{
// regularization
if ( m_regularizationParameter != 0 )
{
m_gradient[name] = m_gradient[name] / m_xBleu - m_regularizationParameter * 2 * sparseScalingFactor[name];
} else {
// need to normalize by dividing by batchSize
m_gradient[name] /= batchSize;
}
// the actual update
sparseScalingFactor[name] += m_learningRate * m_gradient[name];
// discard scaling factors below a threshold
if ( fabs(sparseScalingFactor[name]) < m_floorAbsScalingFactor )
{
sparseScalingFactor[name] = 0;
}
}
void ExpectedBleuOptimizer::InitRPROP(const std::vector<float>& sparseScalingFactor)
{
const size_t nFeatures = sparseScalingFactor.size();
m_previousSparseScalingFactor.resize(nFeatures);
memcpy(&m_previousSparseScalingFactor.at(0), &sparseScalingFactor.at(0),
       nFeatures*sizeof(float)); // memcpy counts bytes, not floats
m_previousGradient.resize(nFeatures);
m_gradient.resize(nFeatures);
m_stepSize.resize(nFeatures, m_initialStepSize);
}
float ExpectedBleuOptimizer::UpdateRPROP(std::vector<float>& sparseScalingFactor,
const size_t batchSize)
{
float xBleu = m_xBleu / batchSize;
// update sparse scaling factors
for (size_t name=0; name<sparseScalingFactor.size(); ++name)
{
// Sum of gradients. All we need is the sign. Don't need to normalize by dividing by batchSize.
// regularization
if ( m_regularizationParameter != 0 )
{
m_gradient[name] = m_gradient[name] / m_xBleu - m_regularizationParameter * 2 * sparseScalingFactor[name];
}
// step size
int sign = Sign(m_gradient[name]) * Sign(m_previousGradient[name]);
if (sign > 0) {
m_stepSize[name] *= m_increaseRate;
} else if (sign < 0) {
m_stepSize[name] *= m_decreaseRate;
}
if (m_stepSize[name] < m_minStepSize) {
m_stepSize[name] = m_minStepSize;
}
if (m_stepSize[name] > m_maxStepSize) {
m_stepSize[name] = m_maxStepSize;
}
// the actual update
m_previousGradient[name] = m_gradient[name];
if (sign >= 0) {
if (m_gradient[name] > 0) {
m_previousSparseScalingFactor[name] = sparseScalingFactor[name];
sparseScalingFactor[name] += m_stepSize[name];
} else if (m_gradient[name] < 0) {
m_previousSparseScalingFactor[name] = sparseScalingFactor[name];
sparseScalingFactor[name] -= m_stepSize[name];
}
} else {
sparseScalingFactor[name] = m_previousSparseScalingFactor[name];
// m_previousGradient[name] = 0;
}
// discard scaling factors below a threshold
if ( fabs(sparseScalingFactor[name]) < m_floorAbsScalingFactor )
{
sparseScalingFactor[name] = 0;
}
}
m_xBleu = 0;
m_gradient.clear();
return xBleu;
}
}
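The sign-based step-size adaptation in UpdateRPROP above (grow the step while successive gradients agree in sign, shrink it and backtrack the weight on a sign flip, clamp to the min/max step) can be illustrated with a self-contained sketch; all names here are the sketch's own:

```cpp
#include <algorithm>
#include <cassert>

// Sign of a value: +1, -1, or 0.
int sign(double x) { return (x > 0) - (x < 0); }

struct RpropState {
  float weight = 0.f, prevWeight = 0.f;
  float step = 0.001f;      // initial step size
  float prevGradient = 0.f;
};

// One RPROP update for a single weight, mirroring the rule above:
// accelerate on consistent gradient direction, decelerate and undo
// the last step on a direction change.
void rpropUpdate(RpropState& s, float gradient,
                 float increase = 1.2f, float decrease = 0.5f,
                 float minStep = 1e-7f, float maxStep = 1.f) {
  int agreement = sign(gradient) * sign(s.prevGradient);
  if (agreement > 0)      s.step *= increase;  // consistent direction
  else if (agreement < 0) s.step *= decrease;  // overshoot
  s.step = std::min(std::max(s.step, minStep), maxStep);
  s.prevGradient = gradient;
  if (agreement >= 0) {                        // step along the gradient
    s.prevWeight = s.weight;
    if (gradient > 0)      s.weight += s.step;
    else if (gradient < 0) s.weight -= s.step;
  } else {                                     // sign flip: backtrack
    s.weight = s.prevWeight;
  }
}
```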

View File

@ -0,0 +1,117 @@
/*
Moses - statistical machine translation system
Copyright (C) 2005-2015 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#pragma once
#include <vector>
#include <set>
#include <boost/unordered_map.hpp>
#include "util/file_stream.hh"
namespace ExpectedBleuTraining
{
class ExpectedBleuOptimizer
{
public:
ExpectedBleuOptimizer(util::FileStream& err,
float learningRate=1,
float initialStepSize=0.001,
float decreaseRate=0.5,
float increaseRate=1.2,
float minStepSize=1e-7,
float maxStepSize=1,
float floorAbsScalingFactor=0,
float regularizationParameter=0)
: m_err(err)
, m_learningRate(learningRate)
, m_initialStepSize(initialStepSize)
, m_decreaseRate(decreaseRate)
, m_increaseRate(increaseRate)
, m_minStepSize(minStepSize)
, m_maxStepSize(maxStepSize)
, m_floorAbsScalingFactor(floorAbsScalingFactor)
, m_regularizationParameter(regularizationParameter)
, m_xBleu(0)
{ }
void AddTrainingInstance(const size_t nBestSizeCount,
const std::vector<float>& sBleu,
const std::vector<double>& overallScoreUntransformed,
const std::vector< boost::unordered_map<size_t, float> > &sparseScore,
bool maintainUpdateSet = false);
void InitSGD(const std::vector<float>& sparseScalingFactor);
float UpdateSGD(std::vector<float>& sparseScalingFactor,
size_t batchSize,
bool useUpdateSet = false);
void InitRPROP(const std::vector<float>& sparseScalingFactor);
float UpdateRPROP(std::vector<float>& sparseScalingFactor,
const size_t batchSize);
protected:
util::FileStream& m_err;
// for SGD
const float m_learningRate;
// for RPROP
const float m_initialStepSize;
const float m_decreaseRate;
const float m_increaseRate;
const float m_minStepSize;
const float m_maxStepSize;
std::vector<float> m_previousSparseScalingFactor;
std::vector<float> m_previousGradient;
std::vector<float> m_gradient;
std::vector<float> m_stepSize;
// other
const float m_floorAbsScalingFactor;
const float m_regularizationParameter;
double m_xBleu;
std::set<size_t> m_updateSet;
void UpdateSingleScalingFactorSGD(size_t name,
std::vector<float>& sparseScalingFactor,
size_t batchSize);
inline int Sign(double x)
{
if (x > 0) return 1;
if (x < 0) return -1;
return 0;
}
};
}

View File

@ -0,0 +1,2 @@
exe prepare-expected-bleu-training : PrepareExpectedBleuTraining.cpp ../../util//kenutil ;
exe train-expected-bleu : TrainExpectedBleu.cpp ExpectedBleuOptimizer.cpp ../../util//kenutil ;

View File

@ -0,0 +1,222 @@
/*
Moses - statistical machine translation system
Copyright (C) 2005-2015 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <vector>
#include <string>
#include <sstream>
#include <iostream> // for std::cout used when printing the help message
#include <boost/algorithm/string/predicate.hpp>
#include <boost/unordered_map.hpp>
#include <boost/unordered_set.hpp>
#include <boost/program_options.hpp>
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/file_piece.hh"
#include "util/string_piece.hh"
#include "util/tokenize_piece.hh"
namespace po = boost::program_options;
int main(int argc, char **argv)
{
util::FileStream err(2);
std::string filenameNBestListIn, filenameFeatureNamesOut, filenameIgnoreFeatureNames;
size_t maxNBestSize;
try {
po::options_description descr("Usage");
descr.add_options()
("help,h", "produce help message")
("n-best-list,n", po::value<std::string>(&filenameNBestListIn)->required(),
"input n-best list file")
("write-feature-names-file,f", po::value<std::string>(&filenameFeatureNamesOut)->required(),
"output file for mapping between feature names and indices")
("ignore-features-file,i", po::value<std::string>(&filenameIgnoreFeatureNames)->required(),
"input file containing list of feature names to be ignored")
("n-best-size-limit,l", po::value<size_t>(&maxNBestSize)->default_value(100),
"limit of n-best list entries to be considered")
;
po::variables_map vm;
po::store(po::parse_command_line(argc, argv, descr), vm);
if (vm.count("help")) {
std::ostringstream os;
os << descr;
std::cout << os.str() << '\n';
exit(0);
}
po::notify(vm);
} catch(std::exception& e) {
err << "Error: " << e.what() << '\n';
err.flush();
exit(1);
}
util::FilePiece ifsNBest(filenameNBestListIn.c_str());
util::FilePiece ifsIgnoreFeatureNames(filenameIgnoreFeatureNames.c_str());
util::scoped_fd fdFeatureNames(util::CreateOrThrow(filenameFeatureNamesOut.c_str()));
util::FileStream ofsFeatureNames(fdFeatureNames.get());
util::FileStream ofsNBest(1);
boost::unordered_set<std::string> ignoreFeatureNames;
StringPiece line;
while ( ifsIgnoreFeatureNames.ReadLineOrEOF(line) )
{
if ( !line.empty() ) {
util::TokenIter<util::AnyCharacter> item(line, " \t=");
if ( item != item.end() )
{
ignoreFeatureNames.insert(item->as_string());
err << "ignoring " << *item << '\n'; // only dereference the iterator inside the end() guard
}
}
}
size_t maxFeatureNamesIdx = 0;
boost::unordered_map<std::string, size_t> featureNames;
size_t sentenceIndex = 0;
size_t nBestSizeCount = 0;
size_t globalIndex = 0;
while ( ifsNBest.ReadLineOrEOF(line) )
{
util::TokenIter<util::MultiCharacter> item(line, " ||| ");
if ( item == item.end() )
{
err << "Error: flawed content in " << filenameNBestListIn << '\n';
exit(1);
}
size_t sentenceIndexCurrent = atol( item->as_string().c_str() );
if ( sentenceIndex != sentenceIndexCurrent )
{
nBestSizeCount = 0;
sentenceIndex = sentenceIndexCurrent;
}
if ( nBestSizeCount < maxNBestSize )
{
// process n-best list entry
StringPiece scores;
StringPiece decoderScore;
for (size_t nItem=1; nItem<=3; ++nItem)
{
if ( ++item == item.end() ) {
err << "Error: flawed content in " << filenameNBestListIn << '\n';
exit(1);
}
if (nItem == 2) {
scores = *item;
}
if (nItem == 3) {
decoderScore = *item;
}
}
ofsNBest << sentenceIndex << ' '
<< decoderScore;
util::TokenIter<util::SingleCharacter> token(scores, ' ');
std::string featureNameCurrent("ERROR");
std::string featureNameCurrentBase("ERROR");
bool ignore = false;
int scoreComponentIndex = 0;
while ( token != token.end() )
{
if ( token->ends_with("=") )
{
scoreComponentIndex = 0;
featureNameCurrent = token->substr(0,token->size()-1).as_string();
size_t idx = featureNameCurrent.find_first_of('_');
if ( idx == StringPiece::npos ) {
featureNameCurrentBase = featureNameCurrent;
} else {
featureNameCurrentBase = featureNameCurrent.substr(0,idx+1);
}
ignore = false;
if ( ignoreFeatureNames.find(featureNameCurrentBase) != ignoreFeatureNames.end() )
{
ignore = true;
} else {
if ( (featureNameCurrent != featureNameCurrentBase) &&
(ignoreFeatureNames.find(featureNameCurrent) != ignoreFeatureNames.end()) )
{
ignore = true;
}
}
}
else
{
if ( !ignore )
{
float featureValueCurrent = atof( token->as_string().c_str() );
if ( scoreComponentIndex > 0 )
{
std::ostringstream oss;
oss << scoreComponentIndex;
featureNameCurrent.append("+");
featureNameCurrent.append(oss.str()); // append the component index; oss was otherwise unused
}
if ( featureValueCurrent != 0 )
{
boost::unordered_map<std::string, size_t>::iterator featureName = featureNames.find(featureNameCurrent);
if ( featureName == featureNames.end() )
{
std::pair< boost::unordered_map<std::string, size_t>::iterator, bool> inserted =
featureNames.insert( std::make_pair(featureNameCurrent, maxFeatureNamesIdx) );
++maxFeatureNamesIdx;
featureName = inserted.first;
}
ofsNBest << ' ' << featureName->second // feature name index
<< ' ' << *token; // feature value
}
++scoreComponentIndex;
}
}
++token;
}
ofsNBest << '\n';
++nBestSizeCount;
}
++globalIndex;
}
ofsFeatureNames << maxFeatureNamesIdx << '\n';
for (boost::unordered_map<std::string, size_t>::const_iterator featureNamesIt=featureNames.begin();
featureNamesIt!=featureNames.end(); ++featureNamesIt)
{
ofsFeatureNames << featureNamesIt->second << ' ' << featureNamesIt->first << '\n';
}
}
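The tool above walks each Moses n-best line of the form `<sentence id> ||| <hypothesis> ||| <feature scores> ||| <decoder score>` via `util::TokenIter` over the `" ||| "` separator. The field split can be sketched without the util library as follows; `splitFields` is a hypothetical helper, not part of the patch:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal sketch of the " ||| " field split performed on each
// Moses n-best list line:
//   <sentence id> ||| <hypothesis> ||| <feature scores> ||| <decoder score>
std::vector<std::string> splitFields(const std::string& line,
                                     const std::string& sep = " ||| ") {
  std::vector<std::string> fields;
  std::string::size_type start = 0, pos;
  while ((pos = line.find(sep, start)) != std::string::npos) {
    fields.push_back(line.substr(start, pos - start));
    start = pos + sep.size();
  }
  fields.push_back(line.substr(start)); // trailing field after the last separator
  return fields;
}
```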

View File

@ -0,0 +1,379 @@
/*
Moses - statistical machine translation system
Copyright (C) 2005-2015 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "ExpectedBleuOptimizer.h"
#include "util/file_stream.hh"
#include "util/file_piece.hh"
#include "util/string_piece.hh"
#include "util/tokenize_piece.hh"
#include <sstream>
#include <boost/program_options.hpp>
using namespace ExpectedBleuTraining;
namespace po = boost::program_options;
int main(int argc, char **argv) {
util::FileStream out(1);
util::FileStream err(2);
size_t maxNBestSize;
size_t iterationLimit;
std::string filenameSBleu, filenameNBestList, filenameFeatureNames, filenameInitialWeights;
bool ignoreDecoderScore;
float learningRate;
float initialStepSize;
float decreaseRate;
float increaseRate;
float minStepSize;
float maxStepSize;
float floorAbsScalingFactor;
float regularizationParameter;
bool printZeroWeights;
bool miniBatches;
std::string optimizerTypeStr;
size_t optimizerType = 0;
#define EXPECTED_BLEU_OPTIMIZER_TYPE_RPROP 1
#define EXPECTED_BLEU_OPTIMIZER_TYPE_SGD 2
try {
po::options_description descr("Usage");
descr.add_options()
("help,h", "produce help message")
("n-best-size-limit,l", po::value<size_t>(&maxNBestSize)->default_value(100),
"limit of n-best list entries to be considered for training")
("iterations,i", po::value<size_t>(&iterationLimit)->default_value(50),
"number of training iterations")
("sbleu-file,b", po::value<std::string>(&filenameSBleu)->required(),
"file containing sentence-level BLEU scores for all n-best list entries")
("prepared-n-best-list,n", po::value<std::string>(&filenameNBestList)->required(),
"input n-best list file, in prepared format for expected BLEU training")
("feature-name-file,f", po::value<std::string>(&filenameFeatureNames)->required(),
"file containing mapping between feature names and indices")
("initial-weights-file,w", po::value<std::string>(&filenameInitialWeights)->default_value(""),
"file containing start values for scaling factors (optional)")
("ignore-decoder-score", boost::program_options::value<bool>(&ignoreDecoderScore)->default_value(0),
"exclude decoder score from computation of posterior probability")
("regularization", boost::program_options::value<float>(&regularizationParameter)->default_value(0), // e.g. 1e-5
"regularization parameter; suggested value range: [1e-8,1e-5]")
("learning-rate", boost::program_options::value<float>(&learningRate)->default_value(1),
"learning rate for the SGD optimizer")
("floor", boost::program_options::value<float>(&floorAbsScalingFactor)->default_value(0), // e.g. 1e-7
"set scaling factor to 0 if below this absolute value after update")
("initial-step-size", boost::program_options::value<float>(&initialStepSize)->default_value(0.001), // TODO: try 0.01 and 0.1
"initial step size for the RPROP optimizer")
("decrease-rate", boost::program_options::value<float>(&decreaseRate)->default_value(0.5),
"decrease rate for the RPROP optimizer")
("increase-rate", boost::program_options::value<float>(&increaseRate)->default_value(1.2),
"increase rate for the RPROP optimizer")
("min-step-size", boost::program_options::value<float>(&minStepSize)->default_value(1e-7),
"minimum step size for the RPROP optimizer")
("max-step-size", boost::program_options::value<float>(&maxStepSize)->default_value(1),
"maximum step size for the RPROP optimizer")
("print-zero-weights", boost::program_options::value<bool>(&printZeroWeights)->default_value(0),
"output scaling factors even if they are trained to 0")
("optimizer", po::value<std::string>(&optimizerTypeStr)->default_value("RPROP"),
"optimizer type used for training (known algorithms: RPROP, SGD)")
("mini-batches", boost::program_options::value<bool>(&miniBatches)->default_value(0),
"update after every single sentence (SGD only)")
;
po::variables_map vm;
po::store(po::parse_command_line(argc, argv, descr), vm);
if (vm.count("help")) {
std::ostringstream os;
os << descr;
out << os.str() << '\n';
out.flush();
exit(0);
}
po::notify(vm);
} catch(std::exception& e) {
err << "Error: " << e.what() << '\n';
err.flush();
exit(1);
}
if ( optimizerTypeStr == "rprop" || optimizerTypeStr == "RPROP" ) {
optimizerType = EXPECTED_BLEU_OPTIMIZER_TYPE_RPROP;
} else if ( optimizerTypeStr == "sgd" || optimizerTypeStr == "SGD" ) {
optimizerType = EXPECTED_BLEU_OPTIMIZER_TYPE_SGD;
} else {
err << "Error: unknown optimizer type: \"" << optimizerTypeStr << "\" (known optimizers: rprop, sgd) " << '\n';
err.flush();
exit(1);
}
util::FilePiece ifsFeatureNames(filenameFeatureNames.c_str());
StringPiece lineFeatureName;
if ( !ifsFeatureNames.ReadLineOrEOF(lineFeatureName) )
{
err << "Error: flawed content in " << filenameFeatureNames << '\n';
err.flush();
exit(1);
}
size_t maxFeatureNamesIdx = atol( lineFeatureName.as_string().c_str() );
std::vector<std::string> featureNames(maxFeatureNamesIdx);
boost::unordered_map<std::string, size_t> featureIndexes;
for (size_t i=0; i<maxFeatureNamesIdx; ++i)
{
if ( !ifsFeatureNames.ReadLineOrEOF(lineFeatureName) ) {
err << "Error: flawed content in " << filenameFeatureNames << '\n';
err.flush();
exit(1);
}
util::TokenIter<util::SingleCharacter> token(lineFeatureName, ' ');
size_t featureIndexCurrent = atol( token->as_string().c_str() );
token++;
featureNames[featureIndexCurrent] = token->as_string();
featureIndexes[token->as_string()] = featureIndexCurrent;
}
std::vector<float> sparseScalingFactor(maxFeatureNamesIdx);
std::vector< boost::unordered_map<size_t, float> > sparseScore(maxNBestSize);
// read initial weights, if any given
if ( filenameInitialWeights.length() != 0 )
{
util::FilePiece ifsInitialWeights(filenameInitialWeights.c_str());
StringPiece lineInitialWeight;
if ( !ifsInitialWeights.ReadLineOrEOF(lineInitialWeight) ) {
err << "Error: flawed content in " << filenameInitialWeights << '\n';
err.flush();
exit(1);
}
do {
util::TokenIter<util::SingleCharacter> token(lineInitialWeight, ' ');
boost::unordered_map<std::string, size_t>::const_iterator found = featureIndexes.find(token->as_string());
if ( found == featureIndexes.end() ) {
err << "Error: flawed content in " << filenameInitialWeights << " (unknown feature name \"" << token->as_string() << "\")" << '\n';
err.flush();
exit(1);
}
token++;
sparseScalingFactor[found->second] = atof( token->as_string().c_str() );
} while ( ifsInitialWeights.ReadLineOrEOF(lineInitialWeight) );
}
// train
ExpectedBleuOptimizer optimizer(err,
learningRate,
initialStepSize,
decreaseRate,
increaseRate,
minStepSize,
maxStepSize,
floorAbsScalingFactor,
regularizationParameter);
if ( optimizerType == EXPECTED_BLEU_OPTIMIZER_TYPE_RPROP )
{
optimizer.InitRPROP(sparseScalingFactor);
} else if ( optimizerType == EXPECTED_BLEU_OPTIMIZER_TYPE_SGD ) {
optimizer.InitSGD(sparseScalingFactor); // SGD branch must use the SGD initializer, not InitRPROP
} else {
err << "Error: unknown optimizer type" << '\n';
err.flush();
exit(1);
}
for (size_t nIteration=1; nIteration<=iterationLimit; ++nIteration)
{
util::FilePiece ifsSBleu(filenameSBleu.c_str());
util::FilePiece ifsNBest(filenameNBestList.c_str());
out << "### ITERATION " << nIteration << '\n' << '\n';
size_t sentenceIndex = 0;
size_t batchSize = 0;
size_t nBestSizeCount = 0;
size_t globalIndex = 0;
StringPiece lineNBest;
std::vector<double> overallScoreUntransformed;
std::vector<float> sBleu;
float xBleu = 0;
// double expPrecisionCorrection = 0.0;
while ( ifsNBest.ReadLineOrEOF(lineNBest) )
{
util::TokenIter<util::SingleCharacter> token(lineNBest, ' ');
if ( token == token.end() )
{
err << "Error: flawed content in " << filenameNBestList << '\n';
err.flush();
exit(1);
}
size_t sentenceIndexCurrent = atol( token->as_string().c_str() );
token++;
if ( sentenceIndex != sentenceIndexCurrent )
{
if ( optimizerType == EXPECTED_BLEU_OPTIMIZER_TYPE_RPROP )
{
optimizer.AddTrainingInstance( nBestSizeCount, sBleu, overallScoreUntransformed, sparseScore );
} else if ( optimizerType == EXPECTED_BLEU_OPTIMIZER_TYPE_SGD ) {
optimizer.AddTrainingInstance( nBestSizeCount, sBleu, overallScoreUntransformed, sparseScore, miniBatches );
if ( miniBatches ) {
xBleu += optimizer.UpdateSGD( sparseScalingFactor, 1 );
// out << "ITERATION " << nIteration << " SENTENCE " << sentenceIndex << " XBLEUSUM= " << xBleu << '\n';
// for (size_t i=0; i<sparseScalingFactor.size(); ++i)
// {
// if ( (sparseScalingFactor[i] != 0) || printZeroWeights )
// {
// out << "ITERATION " << nIteration << " WEIGHT " << featureNames[i] << " " << sparseScalingFactor[i] << '\n';
// }
// }
// out << '\n';
// out.flush();
}
} else {
err << "Error: unknown optimizer type" << '\n';
err.flush();
exit(1);
}
for (size_t i=0; i<nBestSizeCount; ++i) {
sparseScore[i].clear();
}
nBestSizeCount = 0;
overallScoreUntransformed.clear();
sBleu.clear();
sentenceIndex = sentenceIndexCurrent;
++batchSize;
}
StringPiece lineSBleu;
if ( !ifsSBleu.ReadLineOrEOF(lineSBleu) )
{
err << "Error: insufficient number of lines in " << filenameSBleu << '\n';
err.flush();
exit(1);
}
if ( nBestSizeCount < maxNBestSize )
{
// retrieve sBLEU
float sBleuCurrent = atof( lineSBleu.as_string().c_str() );
sBleu.push_back(sBleuCurrent);
// process n-best list entry
if ( token == token.end() )
{
err << "Error: flawed content in " << filenameNBestList << '\n';
err.flush();
exit(1);
}
double scoreCurrent = 0;
if ( !ignoreDecoderScore )
{
scoreCurrent = atof( token->as_string().c_str() ); // decoder score
}
token++;
// if ( nBestSizeCount == 0 ) // best translation (first n-best list entry for the current sentence / a new mini-batch)
// {
// expPrecisionCorrection = std::floor ( scoreCurrent ); // decoder score of first-best
// }
while (token != token.end())
{
size_t featureNameCurrent = atol( token->as_string().c_str() );
token++;
float featureValueCurrent = atof( token->as_string().c_str() );
sparseScore[nBestSizeCount].insert(std::make_pair(featureNameCurrent, featureValueCurrent));
scoreCurrent += sparseScalingFactor[featureNameCurrent] * featureValueCurrent;
token++;
}
// overallScoreUntransformed.push_back( std::exp(scoreCurrent - expPrecisionCorrection) );
overallScoreUntransformed.push_back( std::exp(scoreCurrent) );
++nBestSizeCount;
}
++globalIndex;
}
if ( optimizerType == EXPECTED_BLEU_OPTIMIZER_TYPE_RPROP )
{
optimizer.AddTrainingInstance( nBestSizeCount, sBleu, overallScoreUntransformed, sparseScore ); // last sentence in the corpus
xBleu = optimizer.UpdateRPROP( sparseScalingFactor, batchSize );
out << "xBLEU= " << xBleu << '\n';
} else if ( optimizerType == EXPECTED_BLEU_OPTIMIZER_TYPE_SGD ) {
optimizer.AddTrainingInstance( nBestSizeCount, sBleu, overallScoreUntransformed, sparseScore, miniBatches ); // last sentence in the corpus
if ( miniBatches ) {
xBleu += optimizer.UpdateSGD( sparseScalingFactor, 1 );
xBleu /= batchSize;
} else {
xBleu = optimizer.UpdateSGD( sparseScalingFactor, batchSize );
}
out << "xBLEU= " << xBleu << '\n';
} else {
err << "Error: unknown optimizer type" << '\n';
err.flush();
exit(1);
}
for (size_t i=0; i<nBestSizeCount; ++i) {
sparseScore[i].clear();
}
nBestSizeCount = 0;
overallScoreUntransformed.clear();
sBleu.clear();
out << '\n';
for (size_t i=0; i<sparseScalingFactor.size(); ++i)
{
if ( (sparseScalingFactor[i] != 0) || printZeroWeights )
{
out << "ITERATION " << nIteration << " WEIGHT " << featureNames[i] << " " << sparseScalingFactor[i] << '\n';
}
}
out << '\n';
out.flush();
}
}
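The loop above accumulates `exp(score)` and sentence BLEU per n-best entry; the objective these feed is expected BLEU under the n-best posterior, p_i = exp(score_i) / Σ_j exp(score_j). A minimal sketch of that quantity follows; the function name and the max-shift used to stabilize the exponentials are this sketch's own (the patch keeps a commented-out `expPrecisionCorrection` for the same purpose):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Expected sentence BLEU over an n-best list: each hypothesis is weighted
// by its posterior probability under the current model scores.
double expectedBleu(const std::vector<double>& scores,   // model scores
                    const std::vector<float>& sBleu) {   // per-hypothesis sBLEU
  double maxScore = scores[0];
  for (double s : scores) maxScore = std::max(maxScore, s); // stabilize exp
  double norm = 0.0, acc = 0.0;
  for (size_t i = 0; i < scores.size(); ++i) {
    double p = std::exp(scores[i] - maxScore);
    norm += p;
    acc += p * sBleu[i];
  }
  return acc / norm;
}
```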

View File

@ -221,7 +221,7 @@ Copyright © 2010-2014 Precision Translation Tools Co., Ltd.
This module is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the License, or
the Free Software Foundation, either version 2.1 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,

View File

@ -42,6 +42,7 @@
<option id="gnu.cpp.link.option.libs.1325292383" name="Libraries (-l)" superClass="gnu.cpp.link.option.libs" valueType="libs">
<listOptionValue builtIn="false" value="OnDiskPt"/>
<listOptionValue builtIn="false" value="moses"/>
<listOptionValue builtIn="false" value="cmph"/>
<listOptionValue builtIn="false" value="search"/>
<listOptionValue builtIn="false" value="lm"/>
<listOptionValue builtIn="false" value="util"/>
@ -59,6 +60,7 @@
</option>
<option id="gnu.cpp.link.option.paths.815001500" name="Library search path (-L)" superClass="gnu.cpp.link.option.paths" valueType="libPaths">
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/../../boost/lib64&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../cmph/lib&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/search/Debug&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/OnDiskPt/Debug&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/util/Debug&quot;"/>

View File

@ -28,7 +28,7 @@
<listOptionValue builtIn="false" value="/opt/local/include/"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../boost/include&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../..&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../boost/include&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../cmph/include&quot;"/>
</option>
<option id="gnu.cpp.compiler.option.preprocessor.def.849384962" name="Defined symbols (-D)" superClass="gnu.cpp.compiler.option.preprocessor.def" valueType="definedSymbols">
<listOptionValue builtIn="false" value="WITH_THREADS"/>
@ -47,6 +47,7 @@
<tool id="cdt.managedbuild.tool.gnu.cpp.linker.exe.debug.1546774818" name="GCC C++ Linker" superClass="cdt.managedbuild.tool.gnu.cpp.linker.exe.debug">
<option id="gnu.cpp.link.option.paths.523170942" name="Library search path (-L)" superClass="gnu.cpp.link.option.paths" valueType="libPaths">
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/../../boost/lib64&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../cmph/lib&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/moses/Debug&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/lm/Debug&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/OnDiskPt/Debug&quot;"/>
@ -56,6 +57,7 @@
</option>
<option id="gnu.cpp.link.option.libs.998577284" name="Libraries (-l)" superClass="gnu.cpp.link.option.libs" valueType="libs">
<listOptionValue builtIn="false" value="moses"/>
<listOptionValue builtIn="false" value="cmph"/>
<listOptionValue builtIn="false" value="search"/>
<listOptionValue builtIn="false" value="OnDiskPt"/>
<listOptionValue builtIn="false" value="lm"/>

View File

@ -19,7 +19,7 @@
</extensions>
</storageModule>
<storageModule moduleId="cdtBuildSystem" version="4.0.0">
<configuration artifactExtension="a" artifactName="${ProjName}" buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.staticLib" buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug,org.eclipse.cdt.build.core.buildArtefactType=org.eclipse.cdt.build.core.buildArtefactType.staticLib" cleanCommand="rm -rf" description="" id="cdt.managedbuild.config.gnu.exe.debug.1846963597" name="Debug" parent="cdt.managedbuild.config.gnu.exe.debug">
<configuration artifactExtension="a" artifactName="${ProjName}" buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.staticLib" buildProperties="org.eclipse.cdt.build.core.buildArtefactType=org.eclipse.cdt.build.core.buildArtefactType.staticLib,org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.debug" cleanCommand="rm -rf" description="" id="cdt.managedbuild.config.gnu.exe.debug.1846963597" name="Debug" parent="cdt.managedbuild.config.gnu.exe.debug">
<folderInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597." name="/" resourcePath="">
<toolChain id="cdt.managedbuild.toolchain.gnu.exe.debug.1167373278" name="Linux GCC" superClass="cdt.managedbuild.toolchain.gnu.exe.debug">
<targetPlatform id="cdt.managedbuild.target.gnu.platform.exe.debug.397694981" name="Debug Platform" superClass="cdt.managedbuild.target.gnu.platform.exe.debug"/>
@ -31,8 +31,11 @@
<option id="gnu.cpp.compiler.option.include.paths.876218169" name="Include paths (-I)" superClass="gnu.cpp.compiler.option.include.paths" valueType="includePath">
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../..&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../boost/include&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../cmph/include&quot;"/>
</option>
<option id="gnu.cpp.compiler.option.preprocessor.def.53427549" name="Defined symbols (-D)" superClass="gnu.cpp.compiler.option.preprocessor.def" valueType="definedSymbols">
<listOptionValue builtIn="false" value="PT_UG"/>
<listOptionValue builtIn="false" value="HAVE_CMPH"/>
<listOptionValue builtIn="false" value="MAX_NUM_FACTORS=4"/>
<listOptionValue builtIn="false" value="KENLM_MAX_ORDER=7"/>
<listOptionValue builtIn="false" value="WITH_THREADS"/>
@ -58,18 +61,18 @@
</tool>
</toolChain>
</folderInfo>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1761300858" name="ParallelBackoff.h" rcbsApplicability="disable" resourcePath="LM/ParallelBackoff.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1815042864" name="SRI.h" rcbsApplicability="disable" resourcePath="LM/SRI.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1720439764" name="NeuralLMWrapper.h" rcbsApplicability="disable" resourcePath="LM/NeuralLMWrapper.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1094892289" name="MaxEntSRI.h" rcbsApplicability="disable" resourcePath="LM/MaxEntSRI.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1113398114" name="Rand.h" rcbsApplicability="disable" resourcePath="LM/Rand.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1183410636" name="ORLM.h" rcbsApplicability="disable" resourcePath="LM/ORLM.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1448475064" name="IRST.h" rcbsApplicability="disable" resourcePath="LM/IRST.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1459438132" name="DALMWrapper.h" rcbsApplicability="disable" resourcePath="LM/DALMWrapper.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1094892289" name="MaxEntSRI.h" rcbsApplicability="disable" resourcePath="LM/MaxEntSRI.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1720439764" name="NeuralLMWrapper.h" rcbsApplicability="disable" resourcePath="LM/NeuralLMWrapper.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1272004353" name="BilingualLM.h" rcbsApplicability="disable" resourcePath="LM/BilingualLM.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1815042864" name="SRI.h" rcbsApplicability="disable" resourcePath="LM/SRI.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1459438132" name="DALMWrapper.h" rcbsApplicability="disable" resourcePath="LM/DALMWrapper.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.871386239" name="LDHT.h" rcbsApplicability="disable" resourcePath="LM/LDHT.h" toolsToInvoke=""/>
<fileInfo id="cdt.managedbuild.config.gnu.exe.debug.1846963597.1761300858" name="ParallelBackoff.h" rcbsApplicability="disable" resourcePath="LM/ParallelBackoff.h" toolsToInvoke=""/>
<sourceEntries>
<entry excluding="LM/ParallelBackoff.h|LM/ParallelBackoff.cpp|LM/bilingual-lm|LM/MaxEntSRI.h|LM/MaxEntSRI.cpp|LM/BilingualLM.h|LM/BilingualLM.cpp|TranslationModel/CompactPT|LM/Rand.h|LM/Rand.cpp|LM/LDHT.h|LM/LDHT.cpp|LM/ORLM.h|LM/ORLM.cpp|LM/NeuralLMWrapper.h|LM/NeuralLMWrapper.cpp|LM/SRI.h|LM/SRI.cpp|LM/IRST.h|LM/IRST.cpp|LM/DALMWrapper.h|LM/DALMWrapper.cpp|LM/oxlm|TranslationModel/ProbingPT|TranslationModel/UG|TranslationModel/UG/util" flags="VALUE_WORKSPACE_PATH|RESOLVED" kind="sourcePath" name=""/>
<entry excluding="TranslationModel/UG/mm/test-http-client.cc|TranslationModel/UG/ptable-describe-features.cc|TranslationModel/UG/count-ptable-features.cc|TranslationModel/UG/try-align2.cc|TranslationModel/UG/try-align.cc|TranslationModel/UG/spe-check-coverage3.cc|TranslationModel/UG/spe-check-coverage2.cc|TranslationModel/UG/spe-check-coverage.cc|TranslationModel/UG/sim-pe.cc|TranslationModel/UG/generic/stringdist|TranslationModel/UG/mm/test-dynamic-im-tsa.cc|TranslationModel/UG/mm/mtt.count.cc|LM/ParallelBackoff.h|LM/ParallelBackoff.cpp|LM/bilingual-lm|LM/MaxEntSRI.h|LM/MaxEntSRI.cpp|LM/BilingualLM.h|LM/BilingualLM.cpp|LM/Rand.h|LM/Rand.cpp|LM/LDHT.h|LM/LDHT.cpp|LM/ORLM.h|LM/ORLM.cpp|LM/NeuralLMWrapper.h|LM/NeuralLMWrapper.cpp|LM/SRI.h|LM/SRI.cpp|LM/IRST.h|LM/IRST.cpp|LM/DALMWrapper.h|LM/DALMWrapper.cpp|LM/oxlm|TranslationModel/ProbingPT|TranslationModel/UG/util" flags="VALUE_WORKSPACE_PATH|RESOLVED" kind="sourcePath" name=""/>
</sourceEntries>
</configuration>
</storageModule>
@ -88,7 +91,7 @@
</extensions>
</storageModule>
<storageModule moduleId="cdtBuildSystem" version="4.0.0">
<configuration artifactName="${ProjName}" buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.exe" buildProperties="org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release,org.eclipse.cdt.build.core.buildArtefactType=org.eclipse.cdt.build.core.buildArtefactType.exe" cleanCommand="rm -rf" description="" id="cdt.managedbuild.config.gnu.exe.release.1911984684" name="Release" parent="cdt.managedbuild.config.gnu.exe.release">
<configuration artifactName="${ProjName}" buildArtefactType="org.eclipse.cdt.build.core.buildArtefactType.exe" buildProperties="org.eclipse.cdt.build.core.buildArtefactType=org.eclipse.cdt.build.core.buildArtefactType.exe,org.eclipse.cdt.build.core.buildType=org.eclipse.cdt.build.core.buildType.release" cleanCommand="rm -rf" description="" id="cdt.managedbuild.config.gnu.exe.release.1911984684" name="Release" parent="cdt.managedbuild.config.gnu.exe.release">
<folderInfo id="cdt.managedbuild.config.gnu.exe.release.1911984684." name="/" resourcePath="">
<toolChain id="cdt.managedbuild.toolchain.gnu.exe.release.1552241309" name="Linux GCC" superClass="cdt.managedbuild.toolchain.gnu.exe.release">
<targetPlatform id="cdt.managedbuild.target.gnu.platform.exe.release.332871558" name="Debug Platform" superClass="cdt.managedbuild.target.gnu.platform.exe.release"/>
@@ -141,10 +144,10 @@
</storageModule>
<storageModule moduleId="org.eclipse.cdt.core.LanguageSettingsProviders"/>
<storageModule moduleId="refreshScope" versionNumber="2">
<configuration configurationName="Release">
<configuration configurationName="Debug">
<resource resourceType="PROJECT" workspacePath="/moses"/>
</configuration>
<configuration configurationName="Debug">
<configuration configurationName="Release">
<resource resourceType="PROJECT" workspacePath="/moses"/>
</configuration>
</storageModule>

View File

@@ -1255,6 +1255,16 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/FF/DecodeFeature.h</locationURI>
</link>
<link>
<name>FF/DeleteRules.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/FF/DeleteRules.cpp</locationURI>
</link>
<link>
<name>FF/DeleteRules.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/FF/DeleteRules.h</locationURI>
</link>
<link>
<name>FF/DistortionScoreProducer.cpp</name>
<type>1</type>
@@ -2135,16 +2145,6 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/BilingualDynSuffixArray.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/BilingualDynSuffixArray.cpp</locationURI>
</link>
<link>
<name>TranslationModel/BilingualDynSuffixArray.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/BilingualDynSuffixArray.h</locationURI>
</link>
<link>
<name>TranslationModel/CYKPlusParser</name>
<type>2</type>
@@ -2155,21 +2155,6 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude</name>
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/DynSuffixArray.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSuffixArray.cpp</locationURI>
</link>
<link>
<name>TranslationModel/DynSuffixArray.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSuffixArray.h</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionary.cpp</name>
<type>1</type>
@@ -2185,16 +2170,6 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/PhraseDictionaryDynSuffixArray.README</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionaryDynSuffixArray.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/PhraseDictionaryDynSuffixArray.cpp</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionaryDynSuffixArray.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/PhraseDictionaryDynSuffixArray.h</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionaryDynamicCacheBased.cpp</name>
<type>1</type>
@@ -2205,6 +2180,16 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/PhraseDictionaryDynamicCacheBased.h</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionaryGroup.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/PhraseDictionaryGroup.cpp</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionaryGroup.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/PhraseDictionaryGroup.h</locationURI>
</link>
<link>
<name>TranslationModel/PhraseDictionaryMemory.cpp</name>
<type>1</type>
@@ -2315,16 +2300,6 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/WordCoocTable.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/WordCoocTable.cpp</locationURI>
</link>
<link>
<name>TranslationModel/WordCoocTable.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/WordCoocTable.h</locationURI>
</link>
<link>
<name>TranslationModel/fuzzy-match</name>
<type>2</type>
@@ -3190,86 +3165,6 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/FileHandler.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/FileHandler.cpp</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/FileHandler.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/FileHandler.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/Jamfile</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/Jamfile</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/RandLMCache.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/RandLMCache.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/RandLMFilter.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/RandLMFilter.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/fdstream.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/fdstream.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/hash.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/hash.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/onlineRLM.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/onlineRLM.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/params.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/params.cpp</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/params.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/params.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/perfectHash.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/perfectHash.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/quantizer.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/quantizer.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/types.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/types.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/utils.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/utils.h</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/vocab.cpp</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/vocab.cpp</locationURI>
</link>
<link>
<name>TranslationModel/DynSAInclude/vocab.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/DynSAInclude/vocab.h</locationURI>
</link>
<link>
<name>TranslationModel/ProbingPT/Jamfile</name>
<type>1</type>
@@ -3540,6 +3435,16 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/Makefile</locationURI>
</link>
<link>
<name>TranslationModel/UG/TargetPhraseCollectionCache.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/TargetPhraseCollectionCache.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/TargetPhraseCollectionCache.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/TargetPhraseCollectionCache.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/bin</name>
<type>2</type>
@@ -3585,11 +3490,6 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/ptable-lookup.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/sapt_phrase_key.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/sapt_phrase_key.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/sapt_phrase_scorers.h</name>
<type>1</type>
@@ -3935,6 +3835,16 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/test-dynamic-im-tsa.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/test-http-client.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/test-http-client.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/test-xml-escaping.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/test-xml-escaping.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/tpt_pickler.cc</name>
<type>1</type>
@@ -3980,6 +3890,56 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_agenda.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_agenda.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_agenda_job.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_agenda_job.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_agenda_worker.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_agenda_worker.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_jstats.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_jstats.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_jstats.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_jstats.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_moses.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_moses.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_phrase_extraction_record.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_phrase_extraction_record.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_pstats.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_pstats.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_pstats.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_pstats.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_bitext_sampler.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_bitext_sampler.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_conll_bottom_up_token.h</name>
<type>1</type>
@@ -4015,6 +3975,26 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_deptree.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_http_client.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_http_client.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_http_client.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_http_client.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_im_bitext.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_im_bitext.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_im_bitext.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_im_bitext.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_im_tsa.h</name>
<type>1</type>
@@ -4035,6 +4015,16 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_lexical_phrase_scorer2.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_lexical_reordering.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_lexical_reordering.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_lexical_reordering.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_lexical_reordering.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_load_primer.cc</name>
<type>1</type>
@@ -4055,6 +4045,11 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_mm_2d_table.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_mm_bitext.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_mm_bitext.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_mm_tsa.h</name>
<type>1</type>
@@ -4070,16 +4065,6 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_mm_ttrack.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_mmbitext.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_mmbitext.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_mmbitext.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_mmbitext.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_phrasepair.cc</name>
<type>1</type>
@@ -4090,6 +4075,21 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_phrasepair.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_prep_phrases.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_prep_phrases.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_sampling_bias.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_sampling_bias.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_sampling_bias.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/ug_sampling_bias.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/ug_tsa_array_entry.cc</name>
<type>1</type>
@@ -4315,6 +4315,21 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/generic/stringdist/ug_stringdist.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/generic/threading/ug_ref_counter.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/generic/threading/ug_ref_counter.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/generic/threading/ug_thread_pool.cc</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/generic/threading/ug_thread_pool.cc</locationURI>
</link>
<link>
<name>TranslationModel/UG/generic/threading/ug_thread_pool.h</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/generic/threading/ug_thread_pool.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/generic/threading/ug_thread_safe_counter.cc</name>
<type>1</type>
@@ -4325,6 +4340,11 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/generic/threading/ug_thread_safe_counter.h</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1</name>
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/gcc-4.8</name>
<type>2</type>
@@ -4365,6 +4385,11 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release</name>
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/gcc-4.8/release</name>
<type>2</type>
@@ -4400,6 +4425,11 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on</name>
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/gcc-4.8/release/debug-symbols-on</name>
<type>2</type>
@@ -4975,6 +5005,11 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static</name>
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/gcc-4.8/release/debug-symbols-on/link-static</name>
<type>2</type>
@@ -5570,6 +5605,11 @@
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi</name>
<type>2</type>
<locationURI>virtual:/virtual</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/gcc-4.8/release/debug-symbols-on/link-static/threading-multi</name>
<type>2</type>
@@ -5825,6 +5865,201 @@
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/generic/bin/gcc-4.8/release/debug-symbols-on/link-static/threading-multi/ug_thread_safe_counter.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/calc-coverage</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/calc-coverage</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/calc-coverage.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/calc-coverage.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam2symal</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam2symal</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam2symal.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam2symal.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam_verify</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam_verify</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam_verify.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mam_verify.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-build</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-build</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-build.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-build.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-lookup</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-lookup</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-lookup.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mmlex-lookup.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-build</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-build</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-build.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-build.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-count-words</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-count-words</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-count-words.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-count-words.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-demo1</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-demo1</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-demo1.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-demo1.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-dump</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-dump</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-dump.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/mtt-dump.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/num_read_write.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/num_read_write.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/symal2mam</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/symal2mam</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/symal2mam.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/symal2mam.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/tpt_pickler.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/tpt_pickler.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/tpt_tightindex.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/tpt_tightindex.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/tpt_tokenindex.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/tpt_tokenindex.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_bitext.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_bitext.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_bitext_jstats.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_bitext_jstats.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_bitext_pstats.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_bitext_pstats.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_conll_record.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_conll_record.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_corpus_token.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_corpus_token.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_deptree.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_deptree.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_http_client.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_http_client.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_im_bitext.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_im_bitext.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_lexical_reordering.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_lexical_reordering.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_load_primer.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_load_primer.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_phrasepair.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_phrasepair.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_sampling_bias.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_sampling_bias.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_tsa_array_entry.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_tsa_array_entry.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_ttrack_base.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_ttrack_base.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_ttrack_position.o</name>
<type>1</type>
<locationURI>PARENT-3-PROJECT_LOC/moses/TranslationModel/UG/mm/bin/clang-darwin-4.2.1/release/debug-symbols-on/link-static/threading-multi/ug_ttrack_position.o</locationURI>
</link>
<link>
<name>TranslationModel/UG/mm/bin/gcc-4.8/release/debug-symbols-on/link-static/threading-multi/calc-coverage</name>
<type>1</type>

View File

@@ -44,6 +44,7 @@
<tool id="cdt.managedbuild.tool.gnu.cpp.linker.exe.debug.1443553047" name="GCC C++ Linker" superClass="cdt.managedbuild.tool.gnu.cpp.linker.exe.debug">
<option id="gnu.cpp.link.option.paths.1096041402" name="Library search path (-L)" superClass="gnu.cpp.link.option.paths" valueType="libPaths">
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../xmlrpc-c/lib&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc}/../../cmph/lib&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/search/Debug&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/moses/Debug&quot;"/>
<listOptionValue builtIn="false" value="&quot;${workspace_loc:}/OnDiskPt/Debug&quot;"/>
@@ -53,6 +54,7 @@
</option>
<option id="gnu.cpp.link.option.libs.1087215166" name="Libraries (-l)" superClass="gnu.cpp.link.option.libs" valueType="libs">
<listOptionValue builtIn="false" value="moses"/>
<listOptionValue builtIn="false" value="cmph"/>
<listOptionValue builtIn="false" value="search"/>
<listOptionValue builtIn="false" value="OnDiskPt"/>
<listOptionValue builtIn="false" value="lm"/>

View File

@@ -71,7 +71,7 @@ foreach(exe ${EXE_LIST})
add_executable(${exe} ${exe}_main.cc $<TARGET_OBJECTS:kenlm> $<TARGET_OBJECTS:kenlm_util>)
# Link the executable against boost
target_link_libraries(${exe} ${Boost_LIBRARIES})
target_link_libraries(${exe} ${Boost_LIBRARIES} pthread)
# Group executables together
set_target_properties(${exe} PROPERTIES FOLDER executables)
@@ -104,7 +104,7 @@ if(BUILD_TESTING)
set_target_properties(${test} PROPERTIES COMPILE_FLAGS -DBOOST_TEST_DYN_LINK)
# Link the executable against boost
target_link_libraries(${test} ${Boost_LIBRARIES})
target_link_libraries(${test} ${Boost_LIBRARIES} pthread)
# model_test requires an extra command line parameter
if ("${test}" STREQUAL "model_test")

View File

@@ -17,7 +17,7 @@ wrappers = ;
local with-nplm = [ option.get "with-nplm" ] ;
if $(with-nplm) {
lib nplm : : <search>$(with-nplm)/src ;
obj nplm.o : wrappers/nplm.cc : <include>.. <include>$(with-nplm)/src <cxxflags>-fopenmp ;
obj nplm.o : wrappers/nplm.cc : <include>.. <include>$(with-nplm)/src <cxxflags>-fopenmp <include>$(with-nplm)/3rdparty/eigen <define>NPLM_DOUBLE_PRECISION=0 ;
alias nplm-all : nplm.o nplm ..//boost_thread : : : <cxxflags>-fopenmp <linkflags>-fopenmp <define>WITH_NPLM <library>..//boost_thread ;
wrappers += nplm-all ;
}

View File

@ -169,8 +169,7 @@ void *BinaryFormat::SetupJustVocab(std::size_t memory_size, uint8_t order) {
vocab_size_ = memory_size;
if (!write_mmap_) {
header_size_ = 0;
util::MapAnonymous(memory_size, memory_vocab_);
util::AdviseHugePages(memory_vocab_.get(), memory_size);
util::HugeMalloc(memory_size, true, memory_vocab_);
return reinterpret_cast<uint8_t*>(memory_vocab_.get());
}
header_size_ = TotalHeaderSize(order);
@ -181,16 +180,16 @@ void *BinaryFormat::SetupJustVocab(std::size_t memory_size, uint8_t order) {
switch (write_method_) {
case Config::WRITE_MMAP:
mapping_.reset(util::MapZeroedWrite(file_.get(), total), total, util::scoped_memory::MMAP_ALLOCATED);
util::AdviseHugePages(vocab_base, total);
vocab_base = mapping_.get();
break;
case Config::WRITE_AFTER:
util::ResizeOrThrow(file_.get(), 0);
util::MapAnonymous(total, memory_vocab_);
util::HugeMalloc(total, true, memory_vocab_);
vocab_base = memory_vocab_.get();
break;
}
strncpy(reinterpret_cast<char*>(vocab_base), kMagicIncomplete, header_size_);
util::AdviseHugePages(vocab_base, total);
return reinterpret_cast<uint8_t*>(vocab_base) + header_size_;
}
@ -200,7 +199,7 @@ void *BinaryFormat::GrowForSearch(std::size_t memory_size, std::size_t vocab_pad
std::size_t new_size = header_size_ + vocab_size_ + vocab_pad_ + memory_size;
vocab_string_offset_ = new_size;
if (!write_mmap_ || write_method_ == Config::WRITE_AFTER) {
util::MapAnonymous(memory_size, memory_search_);
util::HugeMalloc(memory_size, true, memory_search_);
assert(header_size_ == 0 || write_mmap_);
vocab_base = reinterpret_cast<uint8_t*>(memory_vocab_.get()) + header_size_;
util::AdviseHugePages(memory_search_.get(), memory_size);
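The hunks above fold the former `util::MapAnonymous` + `util::AdviseHugePages` pair into a single `util::HugeMalloc` call. A minimal, Linux-specific sketch of the underlying idea (the helper name is hypothetical, not KenLM's API):

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// Hypothetical sketch of a HugeMalloc-style allocation: map anonymous
// zeroed memory, then advise the kernel to back it with transparent huge
// pages. MADV_HUGEPAGE is a hint only; failure is non-fatal.
void *AllocHugeHint(std::size_t size) {
  void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (mem == MAP_FAILED) return NULL;
#ifdef MADV_HUGEPAGE
  madvise(mem, size, MADV_HUGEPAGE);  // advisory; ignore the return value
#endif
  return mem;
}
```

Combining allocation and hint in one call lets the advice cover the whole region before any page is touched.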

View File

@ -45,7 +45,7 @@ add_library(kenlm_builder OBJECT ${KENLM_BUILDER_SOURCE})
add_executable(lmplz lmplz_main.cc $<TARGET_OBJECTS:kenlm> $<TARGET_OBJECTS:kenlm_common> $<TARGET_OBJECTS:kenlm_builder> $<TARGET_OBJECTS:kenlm_util>)
# Link the executable against boost
target_link_libraries(lmplz ${Boost_LIBRARIES})
target_link_libraries(lmplz ${Boost_LIBRARIES} pthread)
# Group executables together
set_target_properties(lmplz PROPERTIES FOLDER executables)
@ -68,7 +68,7 @@ if(BUILD_TESTING)
set_target_properties(${test} PROPERTIES COMPILE_FLAGS "-DBOOST_TEST_DYN_LINK -DBOOST_PROGRAM_OPTIONS_DYN_LINK")
# Link the executable against boost
target_link_libraries(${test} ${Boost_LIBRARIES})
target_link_libraries(${test} ${Boost_LIBRARIES} pthread)
# Specify command arguments for how to run each unit test
#

View File

@ -5,7 +5,7 @@
#include "lm/lm_exception.hh"
#include "lm/vocab.hh"
#include "lm/word_index.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/file_piece.hh"
#include "util/murmur_hash.hh"

View File

@ -4,21 +4,21 @@
#include "lm/builder/payload.hh"
#include "lm/common/print.hh"
#include "lm/common/ngram_stream.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include <boost/lexical_cast.hpp>
namespace lm { namespace builder {
// Not defined, only specialized.
template <class T> void PrintPayload(util::FakeOFStream &to, const BuildingPayload &payload);
template <> inline void PrintPayload<uint64_t>(util::FakeOFStream &to, const BuildingPayload &payload) {
template <class T> void PrintPayload(util::FileStream &to, const BuildingPayload &payload);
template <> inline void PrintPayload<uint64_t>(util::FileStream &to, const BuildingPayload &payload) {
to << payload.count;
}
template <> inline void PrintPayload<Uninterpolated>(util::FakeOFStream &to, const BuildingPayload &payload) {
template <> inline void PrintPayload<Uninterpolated>(util::FileStream &to, const BuildingPayload &payload) {
to << log10(payload.uninterp.prob) << ' ' << log10(payload.uninterp.gamma);
}
template <> inline void PrintPayload<ProbBackoff>(util::FakeOFStream &to, const BuildingPayload &payload) {
template <> inline void PrintPayload<ProbBackoff>(util::FileStream &to, const BuildingPayload &payload) {
to << payload.complete.prob << ' ' << payload.complete.backoff;
}
@ -36,7 +36,7 @@ template <class V> class Print {
void Run(const util::stream::ChainPositions &chains) {
util::scoped_fd fd(to_);
util::FakeOFStream out(to_);
util::FileStream out(to_);
NGramStreams<BuildingPayload> streams(chains);
for (NGramStream<BuildingPayload> *s = streams.begin(); s != streams.end(); ++s) {
DumpStream(*s, out);
@ -45,13 +45,13 @@ template <class V> class Print {
void Run(const util::stream::ChainPosition &position) {
util::scoped_fd fd(to_);
util::FakeOFStream out(to_);
util::FileStream out(to_);
NGramStream<BuildingPayload> stream(position);
DumpStream(stream, out);
}
private:
void DumpStream(NGramStream<BuildingPayload> &stream, util::FakeOFStream &to) {
void DumpStream(NGramStream<BuildingPayload> &stream, util::FileStream &to) {
for (; stream; ++stream) {
PrintPayload<V>(to, stream->Value());
for (const WordIndex *w = stream->begin(); w != stream->end(); ++w) {

View File

@ -30,7 +30,7 @@ int main(int argc, char *argv[]) {
UTIL_THROW_IF(*i >= vocab.Size(), util::Exception, "Vocab ID " << *i << " is larger than the vocab file's maximum of " << vocab.Size() << ". Are you sure you have the right order and vocab file for these counts?");
std::cout << vocab.Lookup(*i) << ' ';
}
// TODO don't use std::cout because it is slow. Add fast uint64_t printing support to FakeOFStream.
// TODO don't use std::cout because it is slow. Add fast uint64_t printing support to FileStream.
std::cout << *reinterpret_cast<const uint64_t*>(words + order) << '\n';
}
}

View File

@ -9,6 +9,7 @@
#include "util/fixed_array.hh"
#include "util/murmur_hash.hh"
#include <iostream>
#include <cassert>
#include <cmath>

View File

@ -2,7 +2,7 @@
#include "lm/common/model_buffer.hh"
#include "lm/common/print.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/stream/multi_stream.hh"
#include <iostream>
@ -41,7 +41,7 @@ void Output::Apply(HookType hook_type, util::stream::Chains &chains) {
void PrintHook::Sink(const HeaderInfo &info, int vocab_file, util::stream::Chains &chains) {
if (verbose_header_) {
util::FakeOFStream out(file_.get(), 50);
util::FileStream out(file_.get(), 50);
out << "# Input file: " << info.input_file << '\n';
out << "# Token count: " << info.token_count << '\n';
out << "# Smoothing: Modified Kneser-Ney" << '\n';

View File

@ -191,6 +191,9 @@ class Master {
chains_.clear();
std::cerr << "Chain sizes:";
for (std::size_t i = 0; i < config_.order; ++i) {
// Always have enough for at least one record.
// This was crashing if e.g. there was no 5-gram.
assignments[i] = std::max(assignments[i], block_count[i] * NGram<BuildingPayload>::TotalSize(i + 1));
std::cerr << ' ' << (i+1) << ":" << assignments[i];
chains_.push_back(util::stream::ChainConfig(NGram<BuildingPayload>::TotalSize(i + 1), block_count[i], assignments[i]));
}
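The `std::max` clamp added above guards against a zero-capacity chain when some order has no n-grams at all. The invariant in isolation (a standalone sketch; names are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Each chain's memory assignment must hold at least block_count full
// records; otherwise an empty order (e.g. no 5-gram was ever seen)
// yields a chain that cannot buffer even one record and crashes.
std::size_t ClampAssignment(std::size_t assigned,
                            std::size_t block_count,
                            std::size_t record_size) {
  return std::max(assigned, block_count * record_size);
}
```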

View File

@ -1,6 +1,6 @@
#include "lm/common/model_buffer.hh"
#include "util/exception.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/file_piece.hh"
#include "util/stream/io.hh"
@ -68,7 +68,7 @@ void ModelBuffer::Sink(util::stream::Chains &chains, const std::vector<uint64_t>
}
if (keep_buffer_) {
util::scoped_fd metadata(util::CreateOrThrow((file_base_ + ".kenlm_intermediate").c_str()));
util::FakeOFStream meta(metadata.get(), 200);
util::FileStream meta(metadata.get(), 200);
meta << kMetadataHeader << "\nCounts";
for (std::vector<uint64_t>::const_iterator i = counts_.begin(); i != counts_.end(); ++i) {
meta << ' ' << *i;

View File

@ -1,7 +1,7 @@
#include "lm/common/print.hh"
#include "lm/common/ngram_stream.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/mmap.hh"
#include "util/scoped.hh"
@ -24,7 +24,7 @@ VocabReconstitute::VocabReconstitute(int fd) {
}
namespace {
template <class Payload> void PrintLead(const VocabReconstitute &vocab, ProxyStream<Payload> &stream, util::FakeOFStream &out) {
template <class Payload> void PrintLead(const VocabReconstitute &vocab, ProxyStream<Payload> &stream, util::FileStream &out) {
out << stream->Value().prob << '\t' << vocab.Lookup(*stream->begin());
for (const WordIndex *i = stream->begin() + 1; i != stream->end(); ++i) {
out << ' ' << vocab.Lookup(*i);
@ -34,7 +34,7 @@ template <class Payload> void PrintLead(const VocabReconstitute &vocab, ProxyStr
void PrintARPA::Run(const util::stream::ChainPositions &positions) {
VocabReconstitute vocab(vocab_fd_);
util::FakeOFStream out(out_fd_);
util::FileStream out(out_fd_);
out << "\\data\\\n";
for (size_t i = 0; i < positions.size(); ++i) {
out << "ngram " << (i+1) << '=' << counts_[i] << '\n';

View File

@ -52,7 +52,7 @@ foreach(exe ${EXE_LIST})
add_executable(${exe} ${exe}_main.cc $<TARGET_OBJECTS:kenlm> $<TARGET_OBJECTS:kenlm_filter> $<TARGET_OBJECTS:kenlm_util>)
# Link the executable against boost
target_link_libraries(${exe} ${Boost_LIBRARIES})
target_link_libraries(${exe} ${Boost_LIBRARIES} pthread)
# Group executables together
set_target_properties(${exe} PROPERTIES FOLDER executables)

View File

@ -1,5 +1,6 @@
#include "lm/filter/arpa_io.hh"
#include "util/file_piece.hh"
#include "util/string_stream.hh"
#include <iostream>
#include <ostream>
@ -22,14 +23,8 @@ ARPAInputException::ARPAInputException(const StringPiece &message, const StringP
ARPAInputException::~ARPAInputException() throw() {}
ARPAOutputException::ARPAOutputException(const char *message, const std::string &file_name) throw() {
*this << message << " in file " << file_name;
}
ARPAOutputException::~ARPAOutputException() throw() {}
// Seeking is the responsibility of the caller.
void WriteCounts(std::ostream &out, const std::vector<uint64_t> &number) {
template <class Stream> void WriteCounts(Stream &out, const std::vector<uint64_t> &number) {
out << "\n\\data\\\n";
for (unsigned int i = 0; i < number.size(); ++i) {
out << "ngram " << i+1 << "=" << number[i] << '\n';
@ -38,9 +33,10 @@ void WriteCounts(std::ostream &out, const std::vector<uint64_t> &number) {
}
size_t SizeNeededForCounts(const std::vector<uint64_t> &number) {
std::ostringstream buf;
WriteCounts(buf, number);
return buf.tellp();
std::string buf;
util::StringStream stream(buf);
WriteCounts(stream, number);
return buf.size();
}
bool IsEntirelyWhiteSpace(const StringPiece &line) {
@ -50,44 +46,21 @@ bool IsEntirelyWhiteSpace(const StringPiece &line) {
return true;
}
ARPAOutput::ARPAOutput(const char *name, size_t buffer_size) : file_name_(name), buffer_(new char[buffer_size]) {
try {
file_.exceptions(std::ostream::eofbit | std::ostream::failbit | std::ostream::badbit);
if (!file_.rdbuf()->pubsetbuf(buffer_.get(), buffer_size)) {
std::cerr << "Warning: could not enlarge buffer for " << name << std::endl;
buffer_.reset();
}
file_.open(name, std::ios::out | std::ios::binary);
} catch (const std::ios_base::failure &f) {
throw ARPAOutputException("Opening", file_name_);
}
}
ARPAOutput::ARPAOutput(const char *name, size_t buffer_size)
: file_backing_(util::CreateOrThrow(name)), file_(file_backing_.get(), buffer_size) {}
void ARPAOutput::ReserveForCounts(std::streampos reserve) {
try {
for (std::streampos i = 0; i < reserve; i += std::streampos(1)) {
file_ << '\n';
}
} catch (const std::ios_base::failure &f) {
throw ARPAOutputException("Writing blanks to reserve space for counts to ", file_name_);
for (std::streampos i = 0; i < reserve; i += std::streampos(1)) {
file_ << '\n';
}
}
void ARPAOutput::BeginLength(unsigned int length) {
fast_counter_ = 0;
try {
file_ << '\\' << length << "-grams:" << '\n';
} catch (const std::ios_base::failure &f) {
throw ARPAOutputException("Writing n-gram header to ", file_name_);
}
file_ << '\\' << length << "-grams:" << '\n';
}
void ARPAOutput::EndLength(unsigned int length) {
try {
file_ << '\n';
} catch (const std::ios_base::failure &f) {
throw ARPAOutputException("Writing blank at end of count list to ", file_name_);
}
file_ << '\n';
if (length > counts_.size()) {
counts_.resize(length);
}
@ -95,14 +68,10 @@ void ARPAOutput::EndLength(unsigned int length) {
}
void ARPAOutput::Finish() {
try {
file_ << "\\end\\\n";
file_.seekp(0);
WriteCounts(file_, counts_);
file_ << std::flush;
} catch (const std::ios_base::failure &f) {
throw ARPAOutputException("Finishing including writing counts at beginning to ", file_name_);
}
file_ << "\\end\\\n";
file_.seekp(0);
WriteCounts(file_, counts_);
file_.flush();
}
} // namespace lm
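The rewrite above templates `WriteCounts` over the stream type, so one implementation serves both the real file sink and an in-memory buffer used only to measure how many bytes the counts header will occupy. A self-contained sketch of the pattern, with `std::ostringstream` standing in for `util::StringStream`:

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Templated writer: works with any type supporting operator<<.
template <class Stream>
void WriteCounts(Stream &out, const std::vector<uint64_t> &number) {
  out << "\n\\data\\\n";
  for (unsigned int i = 0; i < number.size(); ++i)
    out << "ngram " << i + 1 << "=" << number[i] << '\n';
}

// Measure the serialized size by writing into an in-memory sink.
std::size_t SizeNeededForCounts(const std::vector<uint64_t> &number) {
  std::ostringstream buf;  // stand-in for util::StringStream in this sketch
  WriteCounts(buf, number);
  return buf.str().size();
}
```

The measured size lets the caller reserve space at the start of the file and seek back to fill in the counts after all n-grams are written.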

View File

@ -4,6 +4,7 @@
*/
#include "lm/read_arpa.hh"
#include "util/exception.hh"
#include "util/file_stream.hh"
#include "util/string_piece.hh"
#include "util/tokenize_piece.hh"
@ -28,17 +29,6 @@ class ARPAInputException : public util::Exception {
virtual ~ARPAInputException() throw();
};
class ARPAOutputException : public util::ErrnoException {
public:
ARPAOutputException(const char *prefix, const std::string &file_name) throw();
virtual ~ARPAOutputException() throw();
const std::string &File() const throw() { return file_name_; }
private:
const std::string file_name_;
};
// Handling for the counts of n-grams at the beginning of ARPA files.
size_t SizeNeededForCounts(const std::vector<uint64_t> &number);
@ -55,11 +45,7 @@ class ARPAOutput : boost::noncopyable {
void BeginLength(unsigned int length);
void AddNGram(const StringPiece &line) {
try {
file_ << line << '\n';
} catch (const std::ios_base::failure &f) {
throw ARPAOutputException("Writing an n-gram", file_name_);
}
file_ << line << '\n';
++fast_counter_;
}
@ -76,9 +62,8 @@ class ARPAOutput : boost::noncopyable {
void Finish();
private:
const std::string file_name_;
boost::scoped_array<char> buffer_;
std::fstream file_;
util::scoped_fd file_backing_;
util::FileStream file_;
size_t fast_counter_;
std::vector<uint64_t> counts_;
};

View File

@ -5,7 +5,7 @@
#include <iostream>
#include <string>
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/file_piece.hh"
@ -28,7 +28,7 @@ class CountOutput : boost::noncopyable {
}
private:
util::FakeOFStream file_;
util::FileStream file_;
};
class CountBatch {

View File

@ -1,4 +1,4 @@
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file_piece.hh"
#include "util/murmur_hash.hh"
#include "util/pool.hh"
@ -68,7 +68,7 @@ class TargetWords {
}
void Print() const {
util::FakeOFStream out(1);
util::FileStream out(1);
for (std::vector<boost::unordered_set<const char *> >::const_iterator i = vocab_.begin(); i != vocab_.end(); ++i) {
for (boost::unordered_set<const char *>::const_iterator j = i->begin(); j != i->end(); ++j) {
out << *j << ' ';

View File

@ -1,5 +1,5 @@
#include "lm/model.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/file_piece.hh"
#include "util/usage.hh"
@ -10,7 +10,7 @@ namespace {
template <class Model, class Width> void ConvertToBytes(const Model &model, int fd_in) {
util::FilePiece in(fd_in);
util::FakeOFStream out(1);
util::FileStream out(1);
Width width;
StringPiece word;
const Width end_sentence = (Width)model.GetVocabulary().EndSentence();
@ -30,12 +30,19 @@ template <class Model, class Width> void QueryFromBytes(const Model &model, int
const lm::ngram::State *next_state = begin_state;
Width kEOS = model.GetVocabulary().EndSentence();
Width buf[4096];
float sum = 0.0;
while (true) {
std::size_t got = util::ReadOrEOF(fd_in, buf, sizeof(buf));
if (!got) break;
uint64_t completed = 0;
double loaded = util::CPUTime();
std::cout << "CPU_to_load: " << loaded << std::endl;
// Numerical precision: batch sums.
double total = 0.0;
while (std::size_t got = util::ReadOrEOF(fd_in, buf, sizeof(buf))) {
float sum = 0.0;
UTIL_THROW_IF2(got % sizeof(Width), "File size not a multiple of vocab id size " << sizeof(Width));
got /= sizeof(Width);
completed += got;
// Do even stuff first.
const Width *even_end = buf + (got & ~1);
// Alternating states
@ -51,8 +58,13 @@ template <class Model, class Width> void QueryFromBytes(const Model &model, int
sum += model.FullScore(*next_state, *i, state[2]).prob;
next_state = (*i++ == kEOS) ? begin_state : &state[2];
}
total += sum;
}
std::cout << "Sum is " << sum << std::endl;
double after = util::CPUTime();
std::cerr << "Probability sum is " << total << std::endl;
std::cout << "Queries: " << completed << std::endl;
std::cout << "CPU_excluding_load: " << (after - loaded) << "\nCPU_per_query: " << ((after - loaded) / static_cast<double>(completed)) << std::endl;
std::cout << "RSSMax: " << util::RSSMax() << std::endl;
}
template <class Model, class Width> void DispatchFunction(const Model &model, bool query) {
@ -64,7 +76,10 @@ template <class Model, class Width> void DispatchFunction(const Model &model, bo
}
template <class Model> void DispatchWidth(const char *file, bool query) {
Model model(file);
lm::ngram::Config config;
config.load_method = util::READ;
std::cerr << "Using load_method = READ." << std::endl;
Model model(file, config);
lm::WordIndex bound = model.GetVocabulary().Bound();
if (bound <= 256) {
DispatchFunction<Model, uint8_t>(model, query);
@ -118,11 +133,10 @@ int main(int argc, char *argv[]) {
<< argv[0] << " vocab $model <$text >$text.vocab\n"
<< "#Ensure files are in RAM.\n"
<< "cat $text.vocab $model >/dev/null\n"
<< "#Timed query against the model, including loading.\n"
<< "time " << argv[0] << " query $model <$text.vocab\n";
<< "#Timed query against the model.\n"
<< argv[0] << " query $model <$text.vocab\n";
return 1;
}
Dispatch(argv[2], !strcmp(argv[1], "query"));
util::PrintUsage(std::cerr);
return 0;
}
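The benchmark change above accumulates `FullScore` probabilities into a per-batch `float` and folds each batch into a `double` total, bounding the rounding error that a single float accumulator suffers over millions of queries. The pattern in isolation (hypothetical helper):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Summing many floats into one float loses precision once the running
// total dwarfs each addend. A short-lived float accumulator per batch,
// promoted to double across batches, keeps the error bounded.
double SumInBatches(const std::vector<float> &values, std::size_t batch) {
  double total = 0.0;
  for (std::size_t i = 0; i < values.size(); i += batch) {
    float sum = 0.0f;  // per-batch accumulator, as in the patch
    const std::size_t end = std::min(i + batch, values.size());
    for (std::size_t j = i; j < end; ++j) sum += values[j];
    total += sum;  // promote to double between batches
  }
  return total;
}
```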

View File

@ -3,7 +3,7 @@
#include "lm/enumerate_vocab.hh"
#include "lm/model.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file_piece.hh"
#include "util/usage.hh"
@ -42,7 +42,7 @@ class QueryPrinter {
}
private:
util::FakeOFStream out_;
util::FileStream out_;
bool print_word_;
bool print_line_;
bool print_summary_;

View File

@ -6,7 +6,7 @@
#include "lm/config.hh"
#include "lm/weights.hh"
#include "util/exception.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/file.hh"
#include "util/joint_sort.hh"
#include "util/murmur_hash.hh"
@ -182,7 +182,7 @@ void SortedVocabulary::ComputeRenumbering(WordIndex types, int from_words, int t
std::sort(entries.begin(), entries.end());
// Write out new vocab file.
{
util::FakeOFStream out(to_words);
util::FileStream out(to_words);
out << "<unk>" << '\0';
for (std::vector<RenumberEntry>::const_iterator i = entries.begin(); i != entries.end(); ++i) {
out << i->str << '\0';

View File

@ -4,7 +4,7 @@
#include "lm/enumerate_vocab.hh"
#include "lm/lm_exception.hh"
#include "lm/virtual_interface.hh"
#include "util/fake_ofstream.hh"
#include "util/file_stream.hh"
#include "util/murmur_hash.hh"
#include "util/pool.hh"
#include "util/probing_hash_table.hh"
@ -44,7 +44,7 @@ class ImmediateWriteWordsWrapper : public EnumerateVocab {
private:
EnumerateVocab *inner_;
util::FakeOFStream stream_;
util::FileStream stream_;
};
// When the binary size isn't known yet.
@ -225,7 +225,7 @@ class WriteUniqueWords {
}
private:
util::FakeOFStream word_list_;
util::FileStream word_list_;
};
class NoOpUniqueWords {

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT
@ -141,4 +141,4 @@ void alignmentStruct::set(alignmentStruct l_alignmentStruct)
// }
}
}

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT
@ -49,4 +49,4 @@ public:
}
#endif
#endif

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT
@ -234,4 +234,4 @@ string terAlignment::printAllShifts()
return to_return.str();
}
}
}

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT
@ -165,4 +165,4 @@ int terShift::size()
// }
}
}

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -6,7 +6,7 @@ Contact: christophe.servan@lium.univ-lemans.fr
The tercpp tool and library are free software: you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 3 of the licence, or
the Free Software Foundation, either version 2.1 of the licence, or
(at your option) any later version.
This program and library are distributed in the hope that it will be useful, but WITHOUT

View File

@ -1,21 +1,20 @@
/*
* Copyright (C) 2009 Felipe Sánchez-Martínez
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
* 02111-1307, USA.
*/
/***********************************************************************
Copyright (C) 2009 Felipe Sánchez-Martínez
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include <string>
#include <vector>

mmt Submodule

@ -0,0 +1 @@
Subproject commit 1edfd04a1b62f93dffd3d0cbf724d6b255c8a61c

View File

@ -110,13 +110,13 @@ public:
BackwardsEdge::BackwardsEdge(const BitmapContainer &prevBitmapContainer
, BitmapContainer &parent
, const TranslationOptionList &translations
, const SquareMatrix &futureScore,
, const SquareMatrix &futureScores,
const InputType& itype)
: m_initialized(false)
, m_prevBitmapContainer(prevBitmapContainer)
, m_parent(parent)
, m_translations(translations)
, m_futurescore(futureScore)
, m_futureScores(futureScores)
, m_seenPosition()
{
@ -195,6 +195,10 @@ BackwardsEdge::Initialize()
return;
}
const WordsBitmap &bm = m_hypotheses[0]->GetWordsBitmap();
const WordsRange &newRange = m_translations.Get(0)->GetSourceWordsRange();
m_futureScore = m_futureScores.CalcFutureScore2(bm, newRange.GetStartPos(), newRange.GetEndPos());
Hypothesis *expanded = CreateHypothesis(*m_hypotheses[0], *m_translations.Get(0));
m_parent.Enqueue(0, 0, expanded, this);
SetSeenPosition(0, 0);
@ -211,7 +215,7 @@ Hypothesis *BackwardsEdge::CreateHypothesis(const Hypothesis &hypothesis, const
IFVERBOSE(2) {
hypothesis.GetManager().GetSentenceStats().StopTimeBuildHyp();
}
newHypo->EvaluateWhenApplied(m_futurescore);
newHypo->EvaluateWhenApplied(m_futureScore);
return newHypo;
}
@ -273,9 +277,11 @@ BackwardsEdge::PushSuccessors(const size_t x, const size_t y)
////////////////////////////////////////////////////////////////////////////////
BitmapContainer::BitmapContainer(const WordsBitmap &bitmap
, HypothesisStackCubePruning &stack)
, HypothesisStackCubePruning &stack
, bool deterministic)
: m_bitmap(bitmap)
, m_stack(stack)
, m_deterministic(deterministic)
, m_numStackInsertions(0)
{
m_hypotheses = HypothesisSet();
@ -309,10 +315,13 @@ BitmapContainer::Enqueue(int hypothesis_pos
, Hypothesis *hypothesis
, BackwardsEdge *edge)
{
// Only supply target phrase if running deterministic search mode
const TargetPhrase *target_phrase = m_deterministic ? &(hypothesis->GetCurrTargetPhrase()) : NULL;
HypothesisQueueItem *item = new HypothesisQueueItem(hypothesis_pos
, translation_pos
, hypothesis
, edge);
, edge
, target_phrase);
IFVERBOSE(2) {
item->GetHypothesis()->GetManager().GetSentenceStats().StartTimeManageCubes();
}

View File

@ -61,6 +61,7 @@ private:
size_t m_hypothesis_pos, m_translation_pos;
Hypothesis *m_hypothesis;
BackwardsEdge *m_edge;
boost::shared_ptr<TargetPhrase> m_target_phrase;
HypothesisQueueItem();
@ -68,11 +69,15 @@ public:
HypothesisQueueItem(const size_t hypothesis_pos
, const size_t translation_pos
, Hypothesis *hypothesis
, BackwardsEdge *edge)
, BackwardsEdge *edge
, const TargetPhrase *target_phrase = NULL)
: m_hypothesis_pos(hypothesis_pos)
, m_translation_pos(translation_pos)
, m_hypothesis(hypothesis)
, m_edge(edge) {
if (target_phrase != NULL) {
m_target_phrase.reset(new TargetPhrase(*target_phrase));
}
}
~HypothesisQueueItem() {
@ -93,6 +98,10 @@ public:
BackwardsEdge *GetBackwardsEdge() {
return m_edge;
}
boost::shared_ptr<TargetPhrase> GetTargetPhrase() {
return m_target_phrase;
}
};
//! Allows comparison of two HypothesisQueueItem objects by the corresponding scores.
@ -103,20 +112,20 @@ public:
float scoreA = itemA->GetHypothesis()->GetTotalScore();
float scoreB = itemB->GetHypothesis()->GetTotalScore();
return (scoreA < scoreB);
/*
{
return true;
if (scoreA < scoreB) {
return true;
} else if (scoreA > scoreB) {
return false;
} else {
// Equal scores: break ties by comparing target phrases (if they exist)
boost::shared_ptr<TargetPhrase> phrA = itemA->GetTargetPhrase();
boost::shared_ptr<TargetPhrase> phrB = itemB->GetTargetPhrase();
if (!phrA || !phrB) {
// Fallback: scoreA < scoreB == false, non-deterministic sort
return false;
}
return (phrA->Compare(*phrB) < 0);
}
else if (scoreA < scoreB)
{
return false;
}
else
{
return itemA < itemB;
}*/
}
};
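The deterministic tie-breaking above can be sketched in isolation. The `Item` type below is a simplified stand-in for Moses' `HypothesisQueueItem`/`TargetPhrase` (the real classes carry far more state, and `TargetPhrase::Compare` is stood in for by string comparison):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical stand-in for HypothesisQueueItem: a score plus an optional
// target phrase (null when not running in deterministic search mode).
struct Item {
  float score;                          // GetHypothesis()->GetTotalScore()
  std::shared_ptr<std::string> phrase;  // GetTargetPhrase()
};

// Mirrors HypothesisQueueItemOrderer: order primarily by score; on exact
// score ties, compare the target phrases so queue order is reproducible
// across runs instead of depending on allocation addresses.
inline bool QueueBefore(const Item &a, const Item &b) {
  if (a.score < b.score) return true;
  if (a.score > b.score) return false;
  // Equal scores: break the tie on the target phrase, if both exist.
  if (!a.phrase || !b.phrase) return false;  // fallback: non-deterministic
  return *a.phrase < *b.phrase;              // stands in for Compare() < 0
}
```

Note the fallback: when either phrase is missing (non-deterministic mode), the comparator behaves exactly like the plain score comparison.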
@ -134,18 +143,6 @@ public:
float scoreB = hypoB->GetTotalScore();
return (scoreA > scoreB);
/*
{
return true;
}
else if (scoreA < scoreB)
{
return false;
}
else
{
return hypoA < hypoB;
}*/
}
};
@ -164,7 +161,8 @@ private:
const BitmapContainer &m_prevBitmapContainer;
BitmapContainer &m_parent;
const TranslationOptionList &m_translations;
const SquareMatrix &m_futurescore;
const SquareMatrix &m_futureScores;
float m_futureScore;
std::vector< const Hypothesis* > m_hypotheses;
boost::unordered_set< int > m_seenPosition;
@ -183,7 +181,7 @@ public:
BackwardsEdge(const BitmapContainer &prevBitmapContainer
, BitmapContainer &parent
, const TranslationOptionList &translations
, const SquareMatrix &futureScore,
, const SquareMatrix &futureScores,
const InputType& source);
~BackwardsEdge();
@ -210,13 +208,15 @@ private:
BackwardsEdgeSet m_edges;
HypothesisQueue m_queue;
size_t m_numStackInsertions;
bool m_deterministic;
// We always require a corresponding bitmap to be supplied.
BitmapContainer();
BitmapContainer(const BitmapContainer &);
public:
BitmapContainer(const WordsBitmap &bitmap
, HypothesisStackCubePruning &stack);
, HypothesisStackCubePruning &stack
, bool deterministic_sort = false);
// The destructor will also delete all the edges that are
// connected to this BitmapContainer.

moses/FF/DeleteRules.cpp Normal file
View File

@ -0,0 +1,90 @@
#include <vector>
#include "DeleteRules.h"
#include "moses/ScoreComponentCollection.h"
#include "moses/TargetPhrase.h"
#include "moses/InputFileStream.h"
#include "util/exception.hh"
using namespace std;
namespace Moses
{
DeleteRules::DeleteRules(const std::string &line)
:StatelessFeatureFunction(1, line)
{
m_tuneable = false;
ReadParameters();
}
void DeleteRules::Load()
{
std::vector<FactorType> factorOrder;
factorOrder.push_back(0); // unfactored for now
InputFileStream strme(m_path);
string line;
while (getline(strme, line)) {
vector<string> toks = TokenizeMultiCharSeparator(line, "|||");
UTIL_THROW_IF2(toks.size() != 2, "Line must be source ||| target");
Phrase source, target;
source.CreateFromString(Input, factorOrder, toks[0], NULL);
target.CreateFromString(Output, factorOrder, toks[1], NULL);
size_t hash = 0;
boost::hash_combine(hash, source);
boost::hash_combine(hash, target);
m_ruleHashes.insert(hash);
}
}
void DeleteRules::EvaluateInIsolation(const Phrase &source
, const TargetPhrase &target
, ScoreComponentCollection &scoreBreakdown
, ScoreComponentCollection &estimatedFutureScore) const
{
// dense scores
size_t hash = 0;
boost::hash_combine(hash, source);
boost::hash_combine(hash, target);
boost::unordered_set<size_t>::const_iterator iter;
iter = m_ruleHashes.find(hash);
if (iter != m_ruleHashes.end()) {
scoreBreakdown.PlusEquals(this, -std::numeric_limits<float>::infinity());
}
}
void DeleteRules::EvaluateWithSourceContext(const InputType &input
, const InputPath &inputPath
, const TargetPhrase &targetPhrase
, const StackVec *stackVec
, ScoreComponentCollection &scoreBreakdown
, ScoreComponentCollection *estimatedFutureScore) const
{}
void DeleteRules::EvaluateTranslationOptionListWithSourceContext(const InputType &input
, const TranslationOptionList &translationOptionList) const
{}
void DeleteRules::EvaluateWhenApplied(const Hypothesis& hypo,
ScoreComponentCollection* accumulator) const
{}
void DeleteRules::EvaluateWhenApplied(const ChartHypothesis &hypo,
ScoreComponentCollection* accumulator) const
{}
void DeleteRules::SetParameter(const std::string& key, const std::string& value)
{
if (key == "path") {
m_path = value;
} else {
StatelessFeatureFunction::SetParameter(key, value);
}
}
}
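Per `SetParameter` and `Load` above, the new feature takes a single `path` option pointing at a file of `source phrase ||| target phrase` lines, each of which is forced out of the search with a score of negative infinity. A hypothetical moses.ini entry (path illustrative) might look like:

```ini
[feature]
DeleteRules path=/path/to/delete-rules.txt
```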

moses/FF/DeleteRules.h Normal file
View File

@ -0,0 +1,49 @@
#pragma once
#include <string>
#include <boost/unordered_set.hpp>
#include "StatelessFeatureFunction.h"
namespace Moses
{
class DeleteRules : public StatelessFeatureFunction
{
protected:
std::string m_path;
boost::unordered_set<size_t> m_ruleHashes;
public:
DeleteRules(const std::string &line);
void Load();
bool IsUseable(const FactorMask &mask) const {
return true;
}
void EvaluateInIsolation(const Phrase &source
, const TargetPhrase &targetPhrase
, ScoreComponentCollection &scoreBreakdown
, ScoreComponentCollection &estimatedFutureScore) const;
void EvaluateWithSourceContext(const InputType &input
, const InputPath &inputPath
, const TargetPhrase &targetPhrase
, const StackVec *stackVec
, ScoreComponentCollection &scoreBreakdown
, ScoreComponentCollection *estimatedFutureScore = NULL) const;
void EvaluateTranslationOptionListWithSourceContext(const InputType &input
, const TranslationOptionList &translationOptionList) const;
void EvaluateWhenApplied(const Hypothesis& hypo,
ScoreComponentCollection* accumulator) const;
void EvaluateWhenApplied(const ChartHypothesis &hypo,
ScoreComponentCollection* accumulator) const;
void SetParameter(const std::string& key, const std::string& value);
};
}

View File

@ -6,6 +6,7 @@
#include "moses/TranslationModel/PhraseDictionaryMemory.h"
#include "moses/TranslationModel/PhraseDictionaryMultiModel.h"
#include "moses/TranslationModel/PhraseDictionaryMultiModelCounts.h"
#include "moses/TranslationModel/PhraseDictionaryGroup.h"
#include "moses/TranslationModel/PhraseDictionaryScope3.h"
#include "moses/TranslationModel/PhraseDictionaryTransliteration.h"
#include "moses/TranslationModel/PhraseDictionaryDynamicCacheBased.h"
@ -55,6 +56,7 @@
#include "NieceTerminal.h"
#include "SpanLength.h"
#include "SyntaxRHS.h"
#include "DeleteRules.h"
#include "moses/FF/SkeletonStatelessFF.h"
#include "moses/FF/SkeletonStatefulFF.h"
@ -71,6 +73,7 @@
#include "moses/FF/VW/VWFeatureSourceBigrams.h"
#include "moses/FF/VW/VWFeatureSourceIndicator.h"
#include "moses/FF/VW/VWFeatureSourcePhraseInternal.h"
#include "moses/FF/VW/VWFeatureSourceSenseWindow.h"
#include "moses/FF/VW/VWFeatureSourceWindow.h"
#include "moses/FF/VW/VWFeatureTargetBigrams.h"
#include "moses/FF/VW/VWFeatureTargetIndicator.h"
@ -213,6 +216,7 @@ FeatureRegistry::FeatureRegistry()
MOSES_FNAME(PhraseDictionaryScope3);
MOSES_FNAME(PhraseDictionaryMultiModel);
MOSES_FNAME(PhraseDictionaryMultiModelCounts);
MOSES_FNAME(PhraseDictionaryGroup);
MOSES_FNAME(PhraseDictionaryALSuffixArray);
// MOSES_FNAME(PhraseDictionaryDynSuffixArray);
MOSES_FNAME(PhraseDictionaryTransliteration);
@ -262,6 +266,7 @@ FeatureRegistry::FeatureRegistry()
MOSES_FNAME(SyntaxRHS);
MOSES_FNAME(PhraseOrientationFeature);
MOSES_FNAME(UnalignedWordCountFeature);
MOSES_FNAME(DeleteRules);
MOSES_FNAME(SkeletonStatelessFF);
MOSES_FNAME(SkeletonStatefulFF);
@ -275,6 +280,7 @@ FeatureRegistry::FeatureRegistry()
MOSES_FNAME(VWFeatureSourceBigrams);
MOSES_FNAME(VWFeatureSourceIndicator);
MOSES_FNAME(VWFeatureSourcePhraseInternal);
MOSES_FNAME(VWFeatureSourceSenseWindow);
MOSES_FNAME(VWFeatureSourceWindow);
MOSES_FNAME(VWFeatureTargetBigrams);
MOSES_FNAME(VWFeatureTargetPhraseInternal);

View File

@ -0,0 +1,141 @@
#pragma once
#include <string>
#include <algorithm>
#include <boost/foreach.hpp>
#include "ThreadLocalByFeatureStorage.h"
#include "VWFeatureSource.h"
#include "moses/Util.h"
/*
* Produces features from factors in the following format:
* wordsense1:0.25^wordsense2:0.7^wordsense3:0.05
*
* This is useful e.g. for including different possible word senses as features weighted
* by their probability.
*
* By default, features are extracted from a small context window around the current
* phrase and from within the phrase.
*/
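The `label:prob^label:prob` factor format described above is parsed by `GetSenses` below. A simplified, self-contained sketch of that parsing (the real code uses Moses' `Tokenize`/`Scan` helpers and `UTIL_THROW2` on malformed input; sense labels here are illustrative):

```cpp
#include <cassert>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

struct Sense {
  std::string label;
  float prob;
};

// Parse one factor of the form "label:prob^label:prob^..." into senses,
// e.g. "bank%river:0.8^bank%money:0.2".
std::vector<Sense> ParseSenses(const std::string &factor) {
  std::vector<Sense> out;
  std::istringstream ss(factor);
  std::string tok;
  while (std::getline(ss, tok, '^')) {
    const std::size_t colon = tok.rfind(':');
    if (colon == std::string::npos)
      throw std::runtime_error("bad sense distribution: " + tok);
    out.push_back(Sense{tok.substr(0, colon),
                        std::stof(tok.substr(colon + 1))});
  }
  return out;
}
```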
namespace Moses
{
class VWFeatureSourceSenseWindow : public VWFeatureSource
{
public:
VWFeatureSourceSenseWindow(const std::string &line)
: VWFeatureSource(line), m_tlsSenses(this), m_tlsForms(this), m_lexicalized(true), m_size(DEFAULT_WINDOW_SIZE) {
ReadParameters();
// Call this last
VWFeatureBase::UpdateRegister();
}
// precompute feature strings for each input sentence
virtual void InitializeForInput(ttasksptr const& ttask) {
InputType const& input = *(ttask->GetSource().get());
std::vector<WordSenses>& senses = *m_tlsSenses.GetStored();
std::vector<std::string>& forms = *m_tlsForms.GetStored();
senses.clear();
forms.clear();
senses.resize(input.GetSize());
forms.resize(input.GetSize());
for (size_t i = 0; i < input.GetSize(); i++) {
senses[i] = GetSenses(input, i);
forms[i] = m_lexicalized ? GetWordForm(input, i) + "^" : "";
}
}
void operator()(const InputType &input
, const InputPath &inputPath
, const WordsRange &sourceRange
, Discriminative::Classifier &classifier) const {
int begin = sourceRange.GetStartPos();
int end = sourceRange.GetEndPos() + 1;
int inputLen = input.GetSize();
const std::vector<WordSenses>& senses = *m_tlsSenses.GetStored();
const std::vector<std::string>& forms = *m_tlsForms.GetStored();
// before current phrase
for (int i = std::max(0, begin - m_size); i < begin; i++) {
BOOST_FOREACH(const Sense &sense, senses[i]) {
classifier.AddLabelIndependentFeature("snsb^" + forms[i] + SPrint(i - begin) + "^" + sense.m_label, sense.m_prob);
classifier.AddLabelIndependentFeature("snsb^" + forms[i] + sense.m_label, sense.m_prob);
}
}
// within current phrase
for (int i = begin; i < end; i++) {
BOOST_FOREACH(const Sense &sense, senses[i]) {
classifier.AddLabelIndependentFeature("snsin^" + forms[i] + SPrint(i - begin) + "^" + sense.m_label, sense.m_prob);
classifier.AddLabelIndependentFeature("snsin^" + forms[i] + sense.m_label, sense.m_prob);
}
}
// after current phrase
for (int i = end; i < std::min(end + m_size, inputLen); i++) {
BOOST_FOREACH(const Sense &sense, senses[i]) {
classifier.AddLabelIndependentFeature("snsa^" + forms[i] + SPrint(i - begin) + "^" + sense.m_label, sense.m_prob);
classifier.AddLabelIndependentFeature("snsa^" + forms[i] + sense.m_label, sense.m_prob);
}
}
}
virtual void SetParameter(const std::string& key, const std::string& value) {
if (key == "size") {
m_size = Scan<size_t>(value);
} else if (key == "lexicalized") {
m_lexicalized = Scan<bool>(value);
} else {
VWFeatureSource::SetParameter(key, value);
}
}
private:
static const int DEFAULT_WINDOW_SIZE = 3;
struct Sense {
std::string m_label;
float m_prob;
};
typedef std::vector<Sense> WordSenses;
typedef ThreadLocalByFeatureStorage<std::vector<WordSenses> > TLSSenses;
typedef ThreadLocalByFeatureStorage<std::vector<std::string> > TLSWordForms;
TLSSenses m_tlsSenses; // for each input sentence, contains extracted senses and probs for each word
TLSWordForms m_tlsForms; // word forms for each input sentence
std::vector<Sense> GetSenses(const InputType &input, size_t pos) const {
std::string w = GetWord(input, pos);
std::vector<std::string> senseTokens = Tokenize(w, "^");
std::vector<Sense> out(senseTokens.size());
for (size_t i = 0; i < senseTokens.size(); i++) {
std::vector<std::string> senseColumns = Tokenize(senseTokens[i], ":");
if (senseColumns.size() != 2) {
UTIL_THROW2("VW :: bad format of sense distribution: " << senseTokens[i]);
}
out[i].m_label = senseColumns[0];
out[i].m_prob = Scan<float>(senseColumns[1]);
}
return out;
}
// assuming that word surface form is always factor 0, output the word form
inline std::string GetWordForm(const InputType &input, size_t pos) const {
return input.GetWord(pos).GetString(0).as_string();
}
bool m_lexicalized;
int m_size;
};
}

View File

@ -1,23 +1,21 @@
/*
Moses - factored phrase-based language decoder
Copyright (C) 2010 University of Edinburgh
Moses - statistical machine translation system
Copyright (C) 2005-2015 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <algorithm>
#include <cmath>

View File

@ -1,22 +1,21 @@
/*
Moses - factored phrase-based language decoder
Copyright (C) 2010 University of Edinburgh
Moses - statistical machine translation system
Copyright (C) 2005-2015 University of Edinburgh
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#pragma once
#ifndef FEATUREVECTOR_H

View File

@ -255,7 +255,7 @@ EvaluateWhenApplied(const StatelessFeatureFunction& slff)
*/
void
Hypothesis::
EvaluateWhenApplied(const SquareMatrix &futureScore)
EvaluateWhenApplied(float futureScore)
{
IFVERBOSE(2) {
m_manager.GetSentenceStats().StartTimeOtherScore();
@ -292,7 +292,7 @@ EvaluateWhenApplied(const SquareMatrix &futureScore)
}
// FUTURE COST
m_futureScore = futureScore.CalcFutureScore( m_sourceCompleted );
m_futureScore = futureScore;
// TOTAL
m_totalScore = m_currScoreBreakdown.GetWeightedScore() + m_futureScore;

View File

@ -146,7 +146,7 @@ public:
return m_currTargetWordsRange.GetNumWordsCovered();
}
void EvaluateWhenApplied(const SquareMatrix &futureScore);
void EvaluateWhenApplied(float futureScore);
int GetId()const {
return m_id;

View File

@ -39,6 +39,7 @@ HypothesisStackCubePruning::HypothesisStackCubePruning(Manager& manager) :
m_nBestIsEnabled = StaticData::Instance().options().nbest.enabled;
m_bestScore = -std::numeric_limits<float>::infinity();
m_worstScore = -std::numeric_limits<float>::infinity();
m_deterministic = manager.options().cube.deterministic_search;
}
/** remove all hypotheses from the collection */
@ -148,7 +149,7 @@ void HypothesisStackCubePruning::AddInitial(Hypothesis *hypo)
"Should have added hypothesis " << *hypo);
const WordsBitmap &bitmap = hypo->GetWordsBitmap();
m_bitmapAccessor[bitmap] = new BitmapContainer(bitmap, *this);
m_bitmapAccessor[bitmap] = new BitmapContainer(bitmap, *this, m_deterministic);
}
void HypothesisStackCubePruning::PruneToSize(size_t newSize)
@ -258,7 +259,7 @@ void HypothesisStackCubePruning::SetBitmapAccessor(const WordsBitmap &newBitmap
BitmapContainer *bmContainer;
if (bcExists == m_bitmapAccessor.end()) {
bmContainer = new BitmapContainer(newBitmap, stack);
bmContainer = new BitmapContainer(newBitmap, stack, m_deterministic);
m_bitmapAccessor[newBitmap] = bmContainer;
} else {
bmContainer = bcExists->second;

View File

@ -52,6 +52,7 @@ protected:
float m_beamWidth; /**< minimum score due to threshold pruning */
size_t m_maxHypoStackSize; /**< maximum number of hypotheses allowed in this stack */
bool m_nBestIsEnabled; /**< flag to determine whether to keep track of old arcs */
bool m_deterministic; /**< flag to determine whether to sort hypotheses deterministically */
/** add hypothesis to stack. Prune if necessary.
* Returns false if equiv hypo exists in collection, otherwise returns true

View File

@ -88,9 +88,9 @@ if $(with-ldhtlm) {
local with-nplm = [ option.get "with-nplm" ] ;
if $(with-nplm) {
lib nplm : : <search>$(with-nplm)/lib <search>$(with-nplm)/lib64 ;
obj NeuralLMWrapper.o : NeuralLMWrapper.cpp nplm ..//headers : <include>$(with-nplm)/src <include>$(with-nplm)/3rdparty/eigen ;
obj BiLM_NPLM.o : bilingual-lm/BiLM_NPLM.cpp nplm ..//headers : <include>$(with-nplm)/src <include>$(with-nplm)/3rdparty/eigen <cxxflags>-fopenmp ;
obj RDLM.o : RDLM.cpp nplm ..//headers : <include>$(with-nplm)/src <include>$(with-nplm)/3rdparty/eigen ;
obj NeuralLMWrapper.o : NeuralLMWrapper.cpp nplm ..//headers : <include>$(with-nplm)/src <include>$(with-nplm)/3rdparty/eigen <define>NPLM_DOUBLE_PRECISION=0 ;
obj BiLM_NPLM.o : bilingual-lm/BiLM_NPLM.cpp nplm ..//headers : <include>$(with-nplm)/src <include>$(with-nplm)/3rdparty/eigen <cxxflags>-fopenmp <define>NPLM_DOUBLE_PRECISION=0 ;
obj RDLM.o : RDLM.cpp nplm ..//headers : <include>$(with-nplm)/src <include>$(with-nplm)/3rdparty/eigen <define>NPLM_DOUBLE_PRECISION=0 ;
alias neural : NeuralLMWrapper.o nplm : : : <cxxflags>-fopenmp <linkflags>-fopenmp <define>LM_NEURAL ;
alias bilinguallm : BiLM_NPLM.o nplm : : : <cxxflags>-fopenmp <linkflags>-fopenmp <define>LM_NEURAL ;
alias rdlm : RDLM.o nplm : : : <cxxflags>-fopenmp <linkflags>-fopenmp <define>LM_NEURAL ;

View File

@ -116,6 +116,7 @@ Parameter::Parameter()
AddParam(cube_opts,"cube-pruning-pop-limit", "cbp", "How many hypotheses should be popped for each stack. (default = 1000)");
AddParam(cube_opts,"cube-pruning-diversity", "cbd", "How many hypotheses should be created for each coverage. (default = 0)");
AddParam(cube_opts,"cube-pruning-lazy-scoring", "cbls", "Don't fully score a hypothesis until it is popped");
AddParam(cube_opts,"cube-pruning-deterministic-search", "cbds", "Break ties deterministically during search");
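Like the other cube-pruning options, the new switch can be supplied on the decoder command line, either by its full name or by the abbreviation registered above (config path illustrative):

```
moses -f moses.ini -cube-pruning-deterministic-search true
moses -f moses.ini -cbds true
```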
///////////////////////////////////////////////////////////////////////////////////////
// minimum bayes risk decoding

View File

@ -32,7 +32,14 @@ public:
} else if (scoreA > scoreB) {
return false;
} else {
return A < B;
// Equal scores: break ties by comparing target phrases (if they exist)
boost::shared_ptr<TargetPhrase> phrA = A->Top()->GetTargetPhrase();
boost::shared_ptr<TargetPhrase> phrB = B->Top()->GetTargetPhrase();
if (!phrA || !phrB) {
// Fallback: compare pointers, non-deterministic sort
return A < B;
}
return (phrA->Compare(*phrB) < 0);
}
}
};

View File

@ -248,14 +248,16 @@ ExpandAllHypotheses(const Hypothesis &hypothesis, size_t startPos, size_t endPos
// early discarding: check if hypothesis is too bad to build
// this idea is explained in (Moore&Quirk, MT Summit 2007)
float expectedScore = 0.0f;
const WordsBitmap &sourceCompleted = hypothesis.GetWordsBitmap();
float futureScore = m_transOptColl.GetFutureScore().CalcFutureScore2( sourceCompleted, startPos, endPos );
if (m_options.search.UseEarlyDiscarding()) {
// expected score is based on score of current hypothesis
expectedScore = hypothesis.GetScore();
// add new future score estimate
expectedScore +=
m_transOptColl.GetFutureScore()
.CalcFutureScore(hypothesis.GetWordsBitmap(), startPos, endPos);
expectedScore += futureScore;
}
// loop through all translation options
@ -264,7 +266,7 @@ ExpandAllHypotheses(const Hypothesis &hypothesis, size_t startPos, size_t endPos
if (!tol) return;
TranslationOptionList::const_iterator iter;
for (iter = tol->begin() ; iter != tol->end() ; ++iter) {
ExpandHypothesis(hypothesis, **iter, expectedScore);
ExpandHypothesis(hypothesis, **iter, expectedScore, futureScore);
}
}
@ -277,7 +279,10 @@ ExpandAllHypotheses(const Hypothesis &hypothesis, size_t startPos, size_t endPos
* \param expectedScore base score for early discarding
* (base hypothesis score plus future score estimation)
*/
void SearchNormal::ExpandHypothesis(const Hypothesis &hypothesis, const TranslationOption &transOpt, float expectedScore)
void SearchNormal::ExpandHypothesis(const Hypothesis &hypothesis,
const TranslationOption &transOpt,
float expectedScore,
float futureScore)
{
const StaticData &staticData = StaticData::Instance();
SentenceStats &stats = m_manager.GetSentenceStats();
@ -293,7 +298,7 @@ void SearchNormal::ExpandHypothesis(const Hypothesis &hypothesis, const Translat
stats.StopTimeBuildHyp();
}
if (newHypo==NULL) return;
newHypo->EvaluateWhenApplied(m_transOptColl.GetFutureScore());
newHypo->EvaluateWhenApplied(futureScore);
} else
// early discarding: check if hypothesis is too bad to build
{

View File

@ -44,8 +44,10 @@ protected:
ExpandAllHypotheses(const Hypothesis &hypothesis, size_t startPos, size_t endPos);
virtual void
ExpandHypothesis(const Hypothesis &hypothesis, const TranslationOption &transOpt,
float expectedScore);
ExpandHypothesis(const Hypothesis &hypothesis,
const TranslationOption &transOpt,
float expectedScore,
float futureScore);
public:
SearchNormal(Manager& manager, const InputType &source, const TranslationOptionCollection &transOptColl);

View File

@ -76,7 +76,7 @@ float SquareMatrix::CalcFutureScore( WordsBitmap const &bitmap ) const
* \param endPos end of the span that is added to the coverage
*/
float SquareMatrix::CalcFutureScore( WordsBitmap const &bitmap, size_t startPos, size_t endPos ) const
float SquareMatrix::CalcFutureScore2( WordsBitmap const &bitmap, size_t startPos, size_t endPos ) const
{
const size_t notInGap= numeric_limits<size_t>::max();
float futureScore = 0.0f;

View File

@ -62,7 +62,7 @@ public:
m_array[startPos * m_size + endPos] = value;
}
float CalcFutureScore( WordsBitmap const& ) const;
float CalcFutureScore( WordsBitmap const&, size_t startPos, size_t endPos ) const;
float CalcFutureScore2( WordsBitmap const&, size_t startPos, size_t endPos ) const;
TO_STRING();
};

View File

@ -29,6 +29,7 @@ void Manager::OutputBest(OutputCollector *collector) const
if (StaticData::Instance().GetOutputHypoScore()) {
out << "0 ";
}
out << '\n';
} else {
if (StaticData::Instance().GetOutputHypoScore()) {
out << best->label.score << " ";

View File

@ -0,0 +1,211 @@
/***********************************************************************
Moses - factored phrase-based language decoder
Copyright (C) 2006 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#include "moses/TranslationModel/PhraseDictionaryGroup.h"
#include <boost/foreach.hpp>
#include <boost/unordered_map.hpp>
#include "util/exception.hh"
using namespace std;
using namespace boost;
namespace Moses
{
PhraseDictionaryGroup::PhraseDictionaryGroup(const string &line)
: PhraseDictionary(line, true),
m_numModels(0),
m_restrict(false)
{
ReadParameters();
}
void PhraseDictionaryGroup::SetParameter(const string& key, const string& value)
{
if (key == "members") {
m_memberPDStrs = Tokenize(value, ",");
m_numModels = m_memberPDStrs.size();
} else if (key == "restrict") {
m_restrict = Scan<bool>(value);
} else {
PhraseDictionary::SetParameter(key, value);
}
}
void PhraseDictionaryGroup::Load()
{
SetFeaturesToApply();
m_pdFeature.push_back(const_cast<PhraseDictionaryGroup*>(this));
// Locate/check component phrase tables
size_t componentWeights = 0;
BOOST_FOREACH(const string& pdName, m_memberPDStrs) {
bool pdFound = false;
BOOST_FOREACH(PhraseDictionary* pd, PhraseDictionary::GetColl()) {
if (pd->GetScoreProducerDescription() == pdName) {
pdFound = true;
m_memberPDs.push_back(pd);
componentWeights += pd->GetNumScoreComponents();
}
}
UTIL_THROW_IF2(!pdFound,
"Could not find component phrase table " << pdName);
}
UTIL_THROW_IF2(componentWeights != m_numScoreComponents,
"Total number of component model scores is unequal to specified number of scores");
}
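Per `SetParameter` and `Load` above, the group names its member tables by their score-producer descriptions via `members`, can `restrict` lookups to phrases known to the first member, and must declare a score count equal to the sum of its members' scores. A hypothetical moses.ini sketch (names, paths, and feature counts illustrative):

```ini
[feature]
# two hypothetical member tables
PhraseDictionaryMemory name=TM0 num-features=4 path=model1/phrase-table
PhraseDictionaryMemory name=TM1 num-features=4 path=model2/phrase-table
# the group exposes the concatenation of its members' scores (4 + 4 = 8)
PhraseDictionaryGroup name=TMGroup members=TM0,TM1 restrict=false num-features=8
```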
void PhraseDictionaryGroup::GetTargetPhraseCollectionBatch(
const ttasksptr& ttask, const InputPathList& inputPathQueue) const
{
// Some implementations (mmsapt) do work in PrefixExists
BOOST_FOREACH(const InputPath* inputPath, inputPathQueue) {
const Phrase& phrase = inputPath->GetPhrase();
BOOST_FOREACH(const PhraseDictionary* pd, m_memberPDs) {
pd->PrefixExists(ttask, phrase);
}
}
// Look up each input in each model
BOOST_FOREACH(InputPath* inputPath, inputPathQueue) {
const Phrase &phrase = inputPath->GetPhrase();
const TargetPhraseCollection* targetPhrases =
this->GetTargetPhraseCollectionLEGACY(ttask, phrase);
inputPath->SetTargetPhrases(*this, targetPhrases, NULL);
}
}
const TargetPhraseCollection* PhraseDictionaryGroup::GetTargetPhraseCollectionLEGACY(
const Phrase& src) const
{
UTIL_THROW2("Don't call me without the translation task.");
}
const TargetPhraseCollection* PhraseDictionaryGroup::GetTargetPhraseCollectionLEGACY(
const ttasksptr& ttask, const Phrase& src) const
{
TargetPhraseCollection* ret = CreateTargetPhraseCollection(ttask, src);
ret->NthElement(m_tableLimit); // sort the phrases for pruning later
const_cast<PhraseDictionaryGroup*>(this)->CacheForCleanup(ret);
return ret;
}
TargetPhraseCollection* PhraseDictionaryGroup::CreateTargetPhraseCollection(
const ttasksptr& ttask, const Phrase& src) const
{
// Aggregation of phrases and the scores that will be applied to them
vector<TargetPhrase*> allPhrases;
unordered_map<const TargetPhrase*, vector<float>, PhrasePtrHasher,
PhrasePtrComparator> allScores;
// For each model
size_t offset = 0;
for (size_t i = 0; i < m_numModels; ++i) {
// Collect phrases from this table
const PhraseDictionary& pd = *m_memberPDs[i];
const TargetPhraseCollection* ret_raw = pd.GetTargetPhraseCollectionLEGACY(
ttask, src);
if (ret_raw != NULL) {
// Process each phrase from table
BOOST_FOREACH(const TargetPhrase* targetPhrase, *ret_raw) {
vector<float> raw_scores =
targetPhrase->GetScoreBreakdown().GetScoresForProducer(&pd);
// Phrase not in collection -> add if unrestricted or first model
if (allScores.find(targetPhrase) == allScores.end()) {
if (m_restrict && i > 0) {
continue;
}
// Copy phrase to avoid disrupting base model
TargetPhrase* phrase = new TargetPhrase(*targetPhrase);
// Correct future cost estimates and total score
phrase->GetScoreBreakdown().InvertDenseFeatures(&pd);
vector<FeatureFunction*> pd_feature;
pd_feature.push_back(m_memberPDs[i]);
const vector<FeatureFunction*> pd_feature_const(pd_feature);
phrase->EvaluateInIsolation(src, pd_feature_const);
// Zero out scores from original phrase table
phrase->GetScoreBreakdown().ZeroDenseFeatures(&pd);
// Add phrase entry
allPhrases.push_back(phrase);
allScores[targetPhrase] = vector<float>(m_numScoreComponents, 0);
}
vector<float>& scores = allScores.find(targetPhrase)->second;
// Copy scores from this model
for (size_t j = 0; j < pd.GetNumScoreComponents(); ++j) {
scores[offset + j] = raw_scores[j];
}
}
}
offset += pd.GetNumScoreComponents();
}
// Apply scores to phrases and add them to return collection
TargetPhraseCollection* ret = new TargetPhraseCollection();
const vector<FeatureFunction*> pd_feature_const(m_pdFeature);
BOOST_FOREACH(TargetPhrase* phrase, allPhrases) {
phrase->GetScoreBreakdown().Assign(this, allScores.find(phrase)->second);
// Correct future cost estimates and total score
phrase->EvaluateInIsolation(src, pd_feature_const);
ret->Add(phrase);
}
return ret;
}
ChartRuleLookupManager *PhraseDictionaryGroup::CreateRuleLookupManager(
const ChartParser &, const ChartCellCollectionBase&, size_t)
{
UTIL_THROW(util::Exception, "Phrase table used in chart decoder");
}
//copied from PhraseDictionaryCompact; free memory allocated to TargetPhraseCollection (and each TargetPhrase) at end of sentence
void PhraseDictionaryGroup::CacheForCleanup(TargetPhraseCollection* tpc)
{
PhraseCache &ref = GetPhraseCache();
ref.push_back(tpc);
}
void PhraseDictionaryGroup::CleanUpAfterSentenceProcessing(
const InputType &source)
{
PhraseCache &ref = GetPhraseCache();
for (PhraseCache::iterator it = ref.begin(); it != ref.end(); it++) {
delete *it;
}
PhraseCache temp;
temp.swap(ref);
CleanUpComponentModels(source);
}
void PhraseDictionaryGroup::CleanUpComponentModels(const InputType &source)
{
for (size_t i = 0; i < m_numModels; ++i) {
m_memberPDs[i]->CleanUpAfterSentenceProcessing(source);
}
}
} //namespace
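The aggregation in CreateTargetPhraseCollection above follows a simple scheme: each member model owns one contiguous slice of the joint score vector, phrases a model does not produce keep zeros in that model's slice, and with the "restrict" option phrases unseen in the first model are skipped. A standalone sketch of that scheme (`aggregate_scores`, `ModelScores`, and the string-keyed map are illustrative stand-ins, not Moses API):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Per-model lookup: target phrase string -> that model's score vector.
typedef std::map<std::string, std::vector<float> > ModelScores;

// Concatenate each model's scores into one joint vector per phrase.
// widths[i] plays the role of model i's GetNumScoreComponents();
// missing phrases keep zeros in that model's slice. With
// restrict_first=true, phrases not produced by model 0 are skipped.
std::map<std::string, std::vector<float> >
aggregate_scores(const std::vector<ModelScores>& models,
                 const std::vector<size_t>& widths, bool restrict_first)
{
  size_t total = 0;
  for (size_t i = 0; i < widths.size(); ++i) total += widths[i];
  std::map<std::string, std::vector<float> > joint;
  size_t offset = 0;
  for (size_t i = 0; i < models.size(); ++i) {
    for (ModelScores::const_iterator it = models[i].begin();
         it != models[i].end(); ++it) {
      if (joint.find(it->first) == joint.end()) {
        if (restrict_first && i > 0) continue;  // unseen in first model
        joint[it->first] = std::vector<float>(total, 0.0f);  // zero-fill
      }
      for (size_t k = 0; k < widths[i]; ++k)
        joint[it->first][offset + k] = it->second[k];
    }
    offset += widths[i];  // next model's slice starts here
  }
  return joint;
}
```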


@@ -0,0 +1,103 @@
/***********************************************************************
Moses - factored phrase-based language decoder
Copyright (C) 2006 University of Edinburgh
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************/
#ifndef moses_PhraseDictionaryGroup_h
#define moses_PhraseDictionaryGroup_h
#include "moses/TranslationModel/PhraseDictionary.h"
#include <boost/unordered_map.hpp>
#include <boost/thread/shared_mutex.hpp>
#include "moses/StaticData.h"
#include "moses/TargetPhrase.h"
#include "moses/Util.h"
namespace Moses
{
/** Combines multiple phrase tables into a single interface. Each member phrase
* table scores each phrase and a single set of translations/scores is returned.
* If a phrase is not in one of the tables, its scores are zero-filled. Use the
* "restrict" option to restrict phrases to those in the table-limit of the
* first member table, intended to be a "union" table built on all data.
*/
class PhraseDictionaryGroup: public PhraseDictionary
{
public:
PhraseDictionaryGroup(const std::string& line);
void Load();
TargetPhraseCollection* CreateTargetPhraseCollection(const ttasksptr& ttask,
const Phrase& src) const;
std::vector<std::vector<float> > getWeights(size_t numWeights,
bool normalize) const;
void CacheForCleanup(TargetPhraseCollection* tpc);
void CleanUpAfterSentenceProcessing(const InputType& source);
void CleanUpComponentModels(const InputType& source);
// functions below override the base class
void GetTargetPhraseCollectionBatch(const ttasksptr& ttask,
const InputPathList &inputPathQueue) const;
const TargetPhraseCollection* GetTargetPhraseCollectionLEGACY(
const Phrase& src) const;
const TargetPhraseCollection* GetTargetPhraseCollectionLEGACY(
const ttasksptr& ttask, const Phrase& src) const;
void InitializeForInput(ttasksptr const& ttask) {
/* Don't do anything source specific here as this object is shared between threads.*/
}
ChartRuleLookupManager* CreateRuleLookupManager(const ChartParser&,
const ChartCellCollectionBase&, std::size_t);
void SetParameter(const std::string& key, const std::string& value);
protected:
std::vector<std::string> m_memberPDStrs;
std::vector<PhraseDictionary*> m_memberPDs;
size_t m_numModels;
bool m_restrict;
std::vector<FeatureFunction*> m_pdFeature;
typedef std::vector<TargetPhraseCollection*> PhraseCache;
#ifdef WITH_THREADS
boost::shared_mutex m_lock_cache;
typedef std::map<boost::thread::id, PhraseCache> SentenceCache;
#else
typedef PhraseCache SentenceCache;
#endif
SentenceCache m_sentenceCache;
PhraseCache& GetPhraseCache() {
#ifdef WITH_THREADS
{
// first try read-only lock
boost::shared_lock<boost::shared_mutex> read_lock(m_lock_cache);
SentenceCache::iterator i = m_sentenceCache.find(
boost::this_thread::get_id());
if (i != m_sentenceCache.end())
return i->second;
}
boost::unique_lock<boost::shared_mutex> lock(m_lock_cache);
return m_sentenceCache[boost::this_thread::get_id()];
#else
return m_sentenceCache;
#endif
}
};
} // end namespace
#endif
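The GetPhraseCache() pattern above — probe the per-thread map under a shared (reader) lock, and only take the exclusive lock to create a missing entry — is a common read-mostly idiom. A minimal standalone analogue, assuming C++17's std::shared_mutex in place of boost::shared_mutex (the class and member names here are illustrative):

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

// Read-mostly per-thread cache: lookups by known threads contend only on
// the shared lock; the exclusive lock is taken once per thread, on the
// first miss. Simplified sketch, not the Moses class itself.
class PerThreadCache {
public:
  std::vector<int>& Get() {
    {
      // First try a read-only (shared) lock, as in GetPhraseCache().
      std::shared_lock<std::shared_mutex> read_lock(m_lock);
      std::map<std::thread::id, std::vector<int> >::iterator i =
        m_cache.find(std::this_thread::get_id());
      if (i != m_cache.end()) return i->second;
    }
    // Miss: take the exclusive lock and default-construct the entry.
    std::unique_lock<std::shared_mutex> write_lock(m_lock);
    return m_cache[std::this_thread::get_id()];
  }
private:
  std::shared_mutex m_lock;
  std::map<std::thread::id, std::vector<int> > m_cache;
};
```

As in the original, returned references stay valid because entries are never erased while worker threads are live; std::map never invalidates references to existing elements on insert.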


@@ -36,16 +36,7 @@ PhraseDictionaryMultiModel::PhraseDictionaryMultiModel(const std::string &line)
m_pdStr.size()*numWeights != m_multimodelweights.size(),
"Number of scores and weights are not equal");
} else if (m_mode == "all" || m_mode == "all-restrict") {
size_t componentWeights = 0;
for(size_t i = 0; i < m_numModels; ++i) {
const string &ptName = m_pdStr[i];
PhraseDictionary *pt = FindPhraseDictionary(ptName);
UTIL_THROW_IF2(pt == NULL,
"Could not find component phrase table " << ptName);
componentWeights += pt->GetNumScoreComponents();
}
UTIL_THROW_IF2(componentWeights != m_numScoreComponents,
"Total number of component model scores is unequal to specified number of scores");
UTIL_THROW2("Implementation has moved: use PhraseDictionaryGroup with restrict=true/false");
} else {
ostringstream msg;
msg << "combination mode unknown: " << m_mode;
@@ -100,25 +91,14 @@ void PhraseDictionaryMultiModel::Load()
const TargetPhraseCollection *PhraseDictionaryMultiModel::GetTargetPhraseCollectionLEGACY(const Phrase& src) const
{
std::vector<std::vector<float> > multimodelweights;
if (m_mode == "interpolate") {
multimodelweights = getWeights(m_numScoreComponents, true);
}
std::vector<std::vector<float> > multimodelweights = getWeights(m_numScoreComponents, true);
TargetPhraseCollection *ret = NULL;
if (m_mode == "interpolate") {
std::map<std::string,multiModelStatistics*>* allStats = new(std::map<std::string,multiModelStatistics*>);
CollectSufficientStatistics(src, allStats);
ret = CreateTargetPhraseCollectionLinearInterpolation(src, allStats, multimodelweights);
RemoveAllInMap(*allStats);
delete allStats;
} else if (m_mode == "all") {
ret = CreateTargetPhraseCollectionAll(src, false);
} else if (m_mode == "all-restrict") {
ret = CreateTargetPhraseCollectionAll(src, true);
}
std::map<std::string,multiModelStatistics*>* allStats = new(std::map<std::string,multiModelStatistics*>);
CollectSufficientStatistics(src, allStats);
ret = CreateTargetPhraseCollectionLinearInterpolation(src, allStats, multimodelweights);
RemoveAllInMap(*allStats);
delete allStats;
ret->NthElement(m_tableLimit); // sort the phrases for pruning later
const_cast<PhraseDictionaryMultiModel*>(this)->CacheForCleanup(ret);
@@ -206,89 +186,6 @@ TargetPhraseCollection* PhraseDictionaryMultiModel::CreateTargetPhraseCollection
return ret;
}
TargetPhraseCollection* PhraseDictionaryMultiModel::CreateTargetPhraseCollectionAll(const Phrase& src, const bool restricted) const
{
// Collect phrases from all models
std::map<std::string, multiModelPhrase*> allPhrases;
size_t offset = 0;
for(size_t i = 0; i < m_numModels; ++i) {
const PhraseDictionary &pd = *m_pd[i];
TargetPhraseCollection *ret_raw = (TargetPhraseCollection*) pd.GetTargetPhraseCollectionLEGACY(src);
if (ret_raw != NULL) {
TargetPhraseCollection::iterator iterTargetPhrase, iterLast;
if (m_tableLimit != 0 && ret_raw->GetSize() > m_tableLimit) {
iterLast = ret_raw->begin() + m_tableLimit;
} else {
iterLast = ret_raw->end();
}
for (iterTargetPhrase = ret_raw->begin(); iterTargetPhrase != iterLast; ++iterTargetPhrase) {
const TargetPhrase* targetPhrase = *iterTargetPhrase;
std::vector<float> raw_scores = targetPhrase->GetScoreBreakdown().GetScoresForProducer(&pd);
std::string targetString = targetPhrase->GetStringRep(m_output);
// Phrase not in collection -> add if unrestricted (all) or first model (all-restrict)
if (allPhrases.find(targetString) == allPhrases.end()) {
// all-restrict and not first model: skip adding unseen phrase
if (restricted && i > 0) {
continue;
}
multiModelPhrase* phrase = new multiModelPhrase;
phrase->targetPhrase = new TargetPhrase(*targetPhrase); //make a copy so that we don't overwrite the original phrase table info
// p contains scores from all models in order. Values default to zero for models that do not contain phrase.
phrase->p.resize(m_numScoreComponents, 0);
//correct future cost estimates and total score
phrase->targetPhrase->GetScoreBreakdown().InvertDenseFeatures(&pd);
vector<FeatureFunction*> pd_feature;
pd_feature.push_back(m_pd[i]);
const vector<FeatureFunction*> pd_feature_const(pd_feature);
phrase->targetPhrase->EvaluateInIsolation(src, pd_feature_const);
// zero out scores from original phrase table
phrase->targetPhrase->GetScoreBreakdown().ZeroDenseFeatures(&pd);
allPhrases[targetString] = phrase;
}
multiModelPhrase* phrase = allPhrases[targetString];
for(size_t j = 0; j < pd.GetNumScoreComponents(); ++j) {
phrase->p[offset + j] = raw_scores[j];
}
}
}
offset += pd.GetNumScoreComponents();
}
// Copy accumulated score vectors to phrases
TargetPhraseCollection* ret = new TargetPhraseCollection();
for (std::map<std::string, multiModelPhrase*>::const_iterator iter = allPhrases.begin(); iter != allPhrases.end(); ++iter) {
multiModelPhrase* phrase = iter->second;
Scores scoreVector(m_numScoreComponents);
for(size_t i = 0; i < m_numScoreComponents; ++i) {
scoreVector[i] = phrase->p[i];
}
phrase->targetPhrase->GetScoreBreakdown().Assign(this, scoreVector);
//correct future cost estimates and total score
vector<FeatureFunction*> pd_feature;
pd_feature.push_back(const_cast<PhraseDictionaryMultiModel*>(this));
const vector<FeatureFunction*> pd_feature_const(pd_feature);
phrase->targetPhrase->EvaluateInIsolation(src, pd_feature_const);
ret->Add(new TargetPhrase(*phrase->targetPhrase));
}
RemoveAllInMap(allPhrases);
return ret;
}
//TODO: is it worth caching the results as long as weights don't change?
std::vector<std::vector<float> > PhraseDictionaryMultiModel::getWeights(size_t numWeights, bool normalize) const
{


@@ -73,7 +73,6 @@ public:
void Load();
virtual void CollectSufficientStatistics(const Phrase& src, std::map<std::string,multiModelStatistics*>* allStats) const;
virtual TargetPhraseCollection* CreateTargetPhraseCollectionLinearInterpolation(const Phrase& src, std::map<std::string,multiModelStatistics*>* allStats, std::vector<std::vector<float> > &multimodelweights) const;
virtual TargetPhraseCollection* CreateTargetPhraseCollectionAll(const Phrase& src, const bool restricted = false) const;
std::vector<std::vector<float> > getWeights(size_t numWeights, bool normalize) const;
std::vector<float> normalizeWeights(std::vector<float> &weights) const;
void CacheForCleanup(TargetPhraseCollection* tpc);


@@ -20,15 +20,15 @@
using namespace std;
using namespace ugdiss;
using namespace Moses;
typedef L2R_Token<SimpleWordId> Token;
typedef mmTSA<Token>::tree_iterator iter;
typedef sapt::L2R_Token<sapt::SimpleWordId> Token;
typedef sapt::mmTSA<Token>::tree_iterator iter;
typedef boost::unordered_map<pair<size_t,size_t>,size_t> phrase_counter_t;
#define CACHING_THRESHOLD 1000
mmTtrack<Token> T; // token tracks
TokenIndex V; // vocabs
mmTSA<Token> I; // suffix arrays
sapt::mmTtrack<Token> T; // token tracks
sapt::TokenIndex V; // vocabs
sapt::mmTSA<Token> I; // suffix arrays
void interpret_args(int ac, char* av[]);
string bname;


@@ -65,7 +65,6 @@ BitextSampler : public Moses::reference_counter
double m_bias_total;
bool consider_sample(TokenPosition const& p);
size_t perform_ranked_sampling();
size_t perform_random_sampling();
int check_sample_distribution(uint64_t const& sid, uint64_t const& offset);
@@ -78,9 +77,13 @@ public:
SPTR<SamplingBias const> const& bias, size_t const max_samples,
sampling_method const method);
~BitextSampler();
bool operator()(); // run sampling
SPTR<pstats> stats();
bool done() const;
#ifdef MMT
#include "mmt_bitext_sampler-inc.h"
#else
bool operator()(); // run sampling
#endif
};
template<typename Token>
@@ -219,28 +222,6 @@ BitextSampler(BitextSampler const& other)
m_finished = other.m_finished;
}
// Ranked sampling sorts all samples by score and then considers the top-ranked
// candidates for phrase extraction.
template<typename Token>
size_t
BitextSampler<Token>::
perform_ranked_sampling()
{
if (m_next == m_stop) return m_ctr;
CandidateSorter sorter(*m_bias);
// below: nbest size = 4 * m_samples to allow for failed phrase extraction
Moses::NBestList<TokenPosition, CandidateSorter> nbest(4*m_samples, sorter);
sapt::tsa::ArrayEntry I(m_next);
while (I.next < m_stop)
{
++m_ctr;
nbest.add(m_root->readEntry(I.next, I));
}
for (size_t i = 0; m_stats->good < m_samples && i < nbest.size(); ++i)
consider_sample(nbest[i]);
return m_ctr;
}
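For reference, the ranked-sampling strategy removed above can be sketched standalone: sort candidates by bias score, then walk the top of the list until enough samples are collected. The 4x over-allocation mirrors the "nbest size = 4 * m_samples" slack for candidates that would fail phrase extraction (names and types here are illustrative, not the Moses API):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Candidates are (bias score, position id). Keep the 4*wanted best by
// score, then accept in rank order until `wanted` samples are picked;
// in the real code consider_sample() may reject some of them.
std::vector<int>
ranked_sample(std::vector<std::pair<double, int> > cands, size_t wanted)
{
  size_t nbest = std::min(cands.size(), 4 * wanted);
  // Lexicographic pair comparison sorts by score, highest first.
  std::partial_sort(cands.begin(), cands.begin() + nbest, cands.end(),
                    std::greater<std::pair<double, int> >());
  std::vector<int> picked;
  for (size_t i = 0; i < nbest && picked.size() < wanted; ++i)
    picked.push_back(cands[i].second);  // consider_sample() stand-in
  return picked;
}
```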
// Uniform sampling
template<typename Token>
size_t
@@ -331,6 +312,7 @@ consider_sample(TokenPosition const& p)
return true;
}
#ifndef MMT
template<typename Token>
bool
BitextSampler<Token>::
@@ -338,15 +320,14 @@ operator()()
{
if (m_finished) return true;
boost::unique_lock<boost::mutex> lock(m_lock);
if (m_method == ranked_sampling)
perform_ranked_sampling();
else if (m_method == random_sampling)
if (m_method == random_sampling)
perform_random_sampling();
else UTIL_THROW2("Unsupported sampling method.");
m_finished = true;
m_ready.notify_all();
return true;
}
#endif
template<typename Token>


@@ -2,8 +2,8 @@
// (c) 2007-2012 Ulrich Germann
// Token class for dependency trees, where the linear order
// of tokens is defined as going up a dependency chain
#ifndef __ug_conll_bottom_up_token_h
#define __ug_conll_bottok_up_token_h
#pragma once
#include "ug_typedefs.h"
namespace sapt
{
@@ -53,4 +53,4 @@ namespace sapt
}
} // end of namespace ugdiss
#endif


@@ -74,6 +74,12 @@ namespace Moses
{
init(line);
setup_local_feature_functions();
// Set features used for scoring extracted phrases:
// * Use all features that can operate on input factors and this model's
// output factor
// * Don't use features that depend on generation steps that won't be run
// yet at extract time
SetFeaturesToApply();
Register();
}
@@ -162,6 +168,10 @@ namespace Moses
parse_factor_spec(m_ifactor,"input-factor");
parse_factor_spec(m_ofactor,"output-factor");
// Masks for available factors that inform SetFeaturesToApply
m_inputFactors = FactorMask(m_ifactor);
m_outputFactors = FactorMask(m_ofactor);
pair<string,string> dflt = pair<string,string> ("smooth",".01");
m_lbop_conf = atof(param.insert(dflt).first->second.c_str());
@@ -237,6 +247,8 @@ namespace Moses
{
if (m->second == "random")
m_sampling_method = random_sampling;
else if (m->second == "ranked")
m_sampling_method = ranked_sampling;
else if (m->second == "full")
m_sampling_method = full_coverage;
else UTIL_THROW2("unrecognized specification 'method='" << m->second
@@ -575,7 +587,8 @@ namespace Moses
}
tp->SetAlignTerm(pool.aln);
tp->GetScoreBreakdown().Assign(this, fvals);
tp->EvaluateInIsolation(src);
// Evaluate with all features that can be computed using available factors
tp->EvaluateInIsolation(src, m_featuresToApply);
if (m_lr_func)
{
@@ -826,9 +839,14 @@ namespace Moses
SPTR<ContextForQuery> context = scope->get<ContextForQuery>(btfix.get(), true);
// set sampling bias, depending on sampling method specified
#if 0
// for the time being, always use the external bias
if (m_sampling_method == random_sampling)
set_bias_via_server(ttask);
else UTIL_THROW2("Unknown sampling method: " << m_sampling_method);
#else
set_bias_via_server(ttask);
#endif
boost::unique_lock<boost::shared_mutex> mylock(m_lock);
SPTR<TPCollCache> localcache = scope->get<TPCollCache>(cache_key);


@@ -13,6 +13,7 @@ namespace Moses
param.SetParameter(diversity, "cube-pruning-diversity",
DEFAULT_CUBE_PRUNING_DIVERSITY);
param.SetParameter(lazy_scoring, "cube-pruning-lazy-scoring", false);
param.SetParameter(deterministic_search, "cube-pruning-deterministic-search", false);
return true;
}
@@ -30,20 +31,37 @@ namespace Moses
if (si != params.end()) diversity = xmlrpc_c::value_int(si->second);
si = params.find("cube-pruning-lazy-scoring");
if (si != params.end())
{
std::string spec = xmlrpc_c::value_string(si->second);
if (spec == "true" or spec == "on" or spec == "1")
lazy_scoring = true;
else if (spec == "false" or spec == "off" or spec == "0")
lazy_scoring = false;
else
if (si != params.end())
{
char const* msg
= "Error parsing specification for cube-pruning-lazy-scoring";
xmlrpc_c::fault(msg, xmlrpc_c::fault::CODE_PARSE);
std::string spec = xmlrpc_c::value_string(si->second);
if (spec == "true" or spec == "on" or spec == "1")
lazy_scoring = true;
else if (spec == "false" or spec == "off" or spec == "0")
lazy_scoring = false;
else
{
char const* msg
= "Error parsing specification for cube-pruning-lazy-scoring";
xmlrpc_c::fault(msg, xmlrpc_c::fault::CODE_PARSE);
}
}
}
si = params.find("cube-pruning-deterministic-search");
if (si != params.end())
{
std::string spec = xmlrpc_c::value_string(si->second);
if (spec == "true" or spec == "on" or spec == "1")
deterministic_search = true;
else if (spec == "false" or spec == "off" or spec == "0")
deterministic_search = false;
else
{
char const* msg
= "Error parsing specification for cube-pruning-deterministic-search";
xmlrpc_c::fault(msg, xmlrpc_c::fault::CODE_PARSE);
}
}
return true;
}
#endif
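The two parameter handlers above repeat the same "true/on/1" vs "false/off/0" parse. A shared helper (hypothetical, not part of this patch) would keep them in sync, with the exception standing in for the xmlrpc_c::fault call:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Map the accepted boolean spellings to bool; anything else is a parse
// error, reported like the xmlrpc_c::fault branches above.
bool parse_bool_spec(const std::string& spec, const std::string& name)
{
  if (spec == "true" || spec == "on" || spec == "1") return true;
  if (spec == "false" || spec == "off" || spec == "0") return false;
  throw std::runtime_error("Error parsing specification for " + name);
}
```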


@@ -12,6 +12,7 @@ namespace Moses
size_t pop_limit;
size_t diversity;
bool lazy_scoring;
bool deterministic_search;
bool init(Parameter const& param);
CubePruningOptions(Parameter const& param);


@@ -1,4 +1,4 @@
// -*- mode: c++; cc-style: gnu -*-
// -*- mode: c++; indent-tabs-mode: nil; tab-width: 2 -*-
#include "ServerOptions.h"
#include <boost/foreach.hpp>
#include <string>
@@ -48,7 +48,7 @@ init(Parameter const& P)
P.SetParameter(this->is_serial, "serial", false);
P.SetParameter(this->logfile, "server-log", std::string("/dev/null"));
P.SetParameter(this->num_threads, "threads", uint32_t(10));
P.SetParameter(this->session_cache_size, "session-cache_size",25UL);
P.SetParameter(this->session_cache_size, "session-cache_size", size_t(25));
std::string timeout_spec;
P.SetParameter(timeout_spec, "session-timeout",std::string("30m"));
this->session_timeout = parse_timespec(timeout_spec);
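The "30m" default passed through parse_timespec above is a duration with a unit suffix. A hedged sketch of such a parser — number plus optional s/m/h/d suffix, returned in seconds; the actual Moses parse_timespec may accept a different grammar:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Parse "30m"-style durations into seconds; a bare number is taken as
// seconds. Illustrative only, not the Moses implementation itself.
long parse_timespec_sketch(const std::string& spec)
{
  char* end = 0;
  long value = std::strtol(spec.c_str(), &end, 10);
  char unit = (end && *end) ? *end : 's';
  switch (unit) {
    case 'd': return value * 86400;
    case 'h': return value * 3600;
    case 'm': return value * 60;
    default:  return value;  // 's' or no suffix
  }
}
```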


@@ -137,6 +137,11 @@ void Subgraph::RecursivelyPrintTree(const Node *n, std::ostream &out) const
for (std::vector<Node *>::const_iterator p(children.begin());
p != children.end(); ++p) {
Node *child = *p;
if (child->GetType() == SOURCE) {
// This is possible due to the heuristic for attaching unaligned
// source words.
continue;
}
out << " ";
RecursivelyPrintTree(child,out);
}


@@ -21,7 +21,6 @@
#include <assert.h>
#include <cstdlib>
#include <cstring>
#include <list>
#include <map>
#include <set>
#include <vector>
@@ -110,7 +109,7 @@ void writeLeftHandSideLabelCounts( const boost::unordered_map<std::string,float>
const std::string &fileNameLeftHandSideSourceLabelCounts,
const std::string &fileNameLeftHandSideTargetSourceLabelCounts );
void writeLabelSet( const std::set<std::string> &labelSet, const std::string &fileName );
void processPhrasePairs( std::list< ExtractionPhrasePair* > &phrasePairsWithSameSource, std::ostream &phraseTableFile,
void processPhrasePairs( std::vector< ExtractionPhrasePair* > &phrasePairsWithSameSource, std::ostream &phraseTableFile,
const ScoreFeatureManager& featureManager, const MaybeLog& maybeLogProb );
void outputPhrasePair(const ExtractionPhrasePair &phrasePair, float, int, std::ostream &phraseTableFile, const ScoreFeatureManager &featureManager, const MaybeLog &maybeLog );
double computeLexicalTranslation( const PHRASE *phraseSource, const PHRASE *phraseTarget, const ALIGNMENT *alignmentTargetToSource );
@@ -346,8 +345,8 @@ int main(int argc, char* argv[])
// loop through all extracted phrase translations
std::string line, lastLine;
ExtractionPhrasePair *phrasePair = NULL;
std::list< ExtractionPhrasePair* > phrasePairsWithSameSource;
std::list< ExtractionPhrasePair* > phrasePairsWithSameSourceAndTarget; // required for hierarchical rules only, as non-terminal alignments might make the phrases incompatible
std::vector< ExtractionPhrasePair* > phrasePairsWithSameSource;
std::vector< ExtractionPhrasePair* > phrasePairsWithSameSourceAndTarget; // required for hierarchical rules only, as non-terminal alignments might make the phrases incompatible
int tmpSentenceId;
PHRASE *tmpPhraseSource, *tmpPhraseTarget;
@@ -411,7 +410,7 @@ int main(int argc, char* argv[])
// once the first of them has been found to have to be set to false
if ( hierarchicalFlag ) {
for ( std::list< ExtractionPhrasePair* >::const_iterator iter = phrasePairsWithSameSourceAndTarget.begin();
for ( std::vector< ExtractionPhrasePair* >::const_iterator iter = phrasePairsWithSameSourceAndTarget.begin();
iter != phrasePairsWithSameSourceAndTarget.end(); ++iter ) {
if ( (*iter)->Matches( tmpPhraseSource, tmpPhraseTarget, tmpTargetToSourceAlignment,
sourceMatch, targetMatch, alignmentMatch ) ) {
@@ -441,7 +440,7 @@ int main(int argc, char* argv[])
if ( !phrasePairsWithSameSource.empty() &&
!sourceMatch ) {
processPhrasePairs( phrasePairsWithSameSource, *phraseTableFile, featureManager, maybeLogProb );
for ( std::list< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
for ( std::vector< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
iter!=phrasePairsWithSameSource.end(); ++iter) {
delete *iter;
}
@@ -476,7 +475,7 @@ int main(int argc, char* argv[])
std::cerr << std::endl;
processPhrasePairs( phrasePairsWithSameSource, *phraseTableFile, featureManager, maybeLogProb );
for ( std::list< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
for ( std::vector< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
iter!=phrasePairsWithSameSource.end(); ++iter) {
delete *iter;
}
@@ -677,7 +676,7 @@ void writeLabelSet( const std::set<std::string> &labelSet, const std::string &fi
}
void processPhrasePairs( std::list< ExtractionPhrasePair* > &phrasePairsWithSameSource, std::ostream &phraseTableFile,
void processPhrasePairs( std::vector< ExtractionPhrasePair* > &phrasePairsWithSameSource, std::ostream &phraseTableFile,
const ScoreFeatureManager& featureManager, const MaybeLog& maybeLogProb )
{
if (phrasePairsWithSameSource.size() == 0) {
@@ -689,14 +688,14 @@ void processPhrasePairs( std::list< ExtractionPhrasePair* > &phrasePairsWithSame
//std::cerr << "phrasePairs.size() = " << phrasePairs.size() << std::endl;
// loop through phrase pairs
for ( std::list< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
for ( std::vector< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
iter!=phrasePairsWithSameSource.end(); ++iter) {
// add to total count
totalSource += (*iter)->GetCount();
}
// output the distinct phrase pairs, one at a time
for ( std::list< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
for ( std::vector< ExtractionPhrasePair* >::const_iterator iter=phrasePairsWithSameSource.begin();
iter!=phrasePairsWithSameSource.end(); ++iter) {
// add to total count
outputPhrasePair( **iter, totalSource, phrasePairsWithSameSource.size(), phraseTableFile, featureManager, maybeLogProb );


@@ -14,7 +14,7 @@ print STDERR "Training OSM - Start\n".`date`;
my $ORDER = 5;
my $OUT_DIR = "/tmp/osm.$$";
my $___FACTOR_DELIMITER = "|";
my ($MOSES_SRC_DIR,$CORPUS_F,$CORPUS_E,$ALIGNMENT,$SRILM_DIR,$FACTOR,$LMPLZ);
my ($MOSES_SRC_DIR,$CORPUS_F,$CORPUS_E,$ALIGNMENT,$SRILM_DIR,$FACTOR,$LMPLZ,$DOMAIN,$TUNE,$INP_EXT,$OP_EXT);
my $cmd;
@@ -29,6 +29,10 @@ die("ERROR: wrong syntax when invoking OSM-Train.perl")
'alignment=s' => \$ALIGNMENT,
'order=i' => \$ORDER,
'factor=s' => \$FACTOR,
'input-extension=s' => \$INP_EXT,
'output-extension=s' => \$OP_EXT,
'tune=s' => \$TUNE,
'domain=s' => \$DOMAIN,
'srilm-dir=s' => \$SRILM_DIR,
'lmplz=s' => \$LMPLZ,
'out-dir=s' => \$OUT_DIR);
@@ -74,19 +78,172 @@ if (defined($FACTOR)) {
`ln -s $corpus_stem_f.$factor_val.$ext_f $OUT_DIR/$factor_val/f`;
`ln -s $corpus_stem_e.$factor_val.$ext_e $OUT_DIR/$factor_val/e`;
create_model($factor_val);
if (defined($TUNE) && defined($DOMAIN) && $factor_val eq "0-0")
{
die("ERROR: For Interpolated OSM model, you need SRILM")
unless -e $SRILM_DIR;
`mkdir $OUT_DIR/TUNE`;
`$MOSES_SRC_DIR/scripts/training/reduce-factors.perl --corpus $TUNE.$INP_EXT --reduced $OUT_DIR/TUNE/tune.$INP_EXT --factor 0`;
`$MOSES_SRC_DIR/scripts/training/reduce-factors.perl --corpus $TUNE.$OP_EXT --reduced $OUT_DIR/TUNE/tune.$OP_EXT --factor 0`;
create_interpolated_model($factor_val);
}
else
{
create_model($factor_val);
}
}
}
else {
`ln -s $CORPUS_F $OUT_DIR/f`;
`ln -s $CORPUS_E $OUT_DIR/e`;
create_model("");
if (defined($TUNE) && defined($DOMAIN))
{
die("ERROR: For Interpolated OSM model, you need SRILM")
unless -e $SRILM_DIR;
`mkdir $OUT_DIR/TUNE`;
`cp $TUNE.$INP_EXT $OUT_DIR/TUNE/tune.$INP_EXT`;
`cp $TUNE.$OP_EXT $OUT_DIR/TUNE/tune.$OP_EXT`;
create_interpolated_model("");
}
else
{
create_model("");
}
}
# create model
print "Training OSM - End".`date`;
sub read_domain_file{
open(my $fh, '<:encoding(UTF-8)', $DOMAIN)
or die "Could not open file '$DOMAIN' $!";
my @corpora;
while (my $row = <$fh>) {
chomp $row;
my ($num,$dom) = split(/\ /,$row);
push @corpora, $dom;
push @corpora, $num;
#print "$dom $num\n";
}
return @corpora;
}
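The head/tail pipelines in create_interpolated_model below slice the shared corpus into per-domain pieces using the cumulative line counts read by read_domain_file (each domain-file row is "count name"). The same slicing, sketched in C++ over in-memory lines:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Slice `lines` at cumulative end counts: domain i receives lines
// [cum[i-1], cum[i]), which is exactly what
// `head -cum[i] | tail -(cum[i] - cum[i-1])` computes per domain.
std::vector<std::vector<std::string> >
split_by_cumulative(const std::vector<std::string>& lines,
                    const std::vector<size_t>& cum)
{
  std::vector<std::vector<std::string> > out;
  size_t start = 0;
  for (size_t i = 0; i < cum.size(); ++i) {
    out.push_back(std::vector<std::string>(lines.begin() + start,
                                           lines.begin() + cum[i]));
    start = cum[i];
  }
  return out;
}
```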
sub create_interpolated_model{
my ($factor_val) = @_;
my $fNum = 0;
my $dName;
my @corpora = read_domain_file();
my $i = 0;
while($i < scalar(@corpora))
{
$dName = "$OUT_DIR/$factor_val/$corpora[$i]";
$cmd = "mkdir $dName";
`$cmd`;
my $cal = $corpora[$i+1] - $fNum;
$cmd = "head -$corpora[$i+1] $OUT_DIR/$factor_val/e | tail -$cal > $dName/e";
`$cmd`;
$cmd = "head -$corpora[$i+1] $OUT_DIR/$factor_val/f | tail -$cal > $dName/f";
`$cmd`;
$cmd = "head -$corpora[$i+1] $OUT_DIR/align | tail -$cal > $dName/align";
`$cmd`;
#print STDERR "Flip Alignment\n";
#`$MOSES_SRC_DIR/scripts/OSM/flipAlignment.perl $dName/alignment > $dName/align`;
print STDERR "Extracting Singletons\n";
$cmd = "$MOSES_SRC_DIR/scripts/OSM/extract-singletons.perl $dName/e $dName/f $dName/align > $dName/Singletons";
print STDERR "Executing: $cmd\n";
`$cmd`;
print STDERR "Converting Bilingual Sentence Pair into Operation Corpus\n";
$cmd = "$MOSES_SRC_DIR/bin/generateSequences $dName/e $dName/f $dName/align $dName/Singletons > $dName/opCorpus";
print STDERR "Executing: $cmd\n";
`$cmd`;
print STDERR "Learning Operation Sequence Translation Model\n";
if (defined($SRILM_DIR)) {
$cmd = "$SRILM_DIR/ngram-count -kndiscount -order $ORDER -unk -text $dName/opCorpus -lm $dName/operationLM 2>> /dev/stderr";
print STDERR "Executing: $cmd\n";
`$cmd`;
}
else {
$cmd = "$LMPLZ -T $OUT_DIR --order $ORDER --text $dName/opCorpus --arpa $dName/operationLM --prune 0 0 1 2>> /dev/stderr";
print STDERR "Executing: $cmd\n";
`$cmd`;
}
print "$cmd\n";
$fNum = $corpora[$i+1];
$i = $i+2;
}
`$MOSES_SRC_DIR/scripts/OSM/flipAlignment.perl $TUNE.align > $OUT_DIR/TUNE/tune.align`;
print STDERR "Extracting Singletons\n";
$cmd = "$MOSES_SRC_DIR/scripts/OSM/extract-singletons.perl $OUT_DIR/TUNE/tune.$OP_EXT $OUT_DIR/TUNE/tune.$INP_EXT $OUT_DIR/TUNE/tune.align > $OUT_DIR/TUNE/Singletons";
print STDERR "Executing: $cmd\n";
`$cmd`;
print STDERR "Converting Bilingual Sentence Pair into Operation Corpus\n";
$cmd = "$MOSES_SRC_DIR/bin/generateSequences $OUT_DIR/TUNE/tune.$OP_EXT $OUT_DIR/TUNE/tune.$INP_EXT $OUT_DIR/TUNE/tune.align $OUT_DIR/TUNE/Singletons > $OUT_DIR/TUNE/tune.opCorpus";
print STDERR "Executing: $cmd\n";
`$cmd`;
print STDERR "Interpolating OSM Models\n";
$cmd = "$MOSES_SRC_DIR/scripts/ems/support/interpolate-lm.perl --tuning $OUT_DIR/TUNE/tune.opCorpus --name $OUT_DIR/$factor_val/operationLM --srilm $SRILM_DIR --lm ";
$i = 0;
$dName = "$OUT_DIR/$factor_val/$corpora[$i]/operationLM";
$cmd = $cmd . $dName;
$i = $i+2;
while($i < scalar(@corpora))
{
$cmd = $cmd . ",";
$dName = "$OUT_DIR/$factor_val/$corpora[$i]/operationLM";
$cmd = $cmd . $dName;
$i = $i+2;
}
print STDERR "Executing: $cmd\n";
`$cmd`;
print STDERR "Binarizing\n";
$cmd = "$MOSES_SRC_DIR/bin/build_binary $OUT_DIR/$factor_val/operationLM $OUT_DIR/$factor_val/operationLM.bin";
print STDERR "Executing: $cmd\n";
system($cmd) == 0 or die("system $cmd failed: $?");
}
sub create_model{
my ($factor_val) = @_;


@@ -391,6 +391,28 @@ alignment-symmetrization-method = grow-diag-final-and
#operation-sequence-model-order = 5
#operation-sequence-model-settings = "-lmplz '$moses-src-dir/bin/lmplz -S 40% -T $working-dir/model/tmp'"
#
# OR if you want to use with SRILM
#
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64"
## Class-based Operation Sequence Model (OSM)
# to build the OSM over additional factors, specify them as below.
# Durrani, Koehn, Schmid, Fraser (COLING, 2014).
# "Investigating the Usefulness of Generalized Word Representations in SMT"
#
#operation-sequence-model-settings = "--factor 0-0+1-1"
## Interpolated Operation Sequence Model (OSM)
# to train a domain-interpolated OSM, enable the settings below.
# Durrani, Sajjad, Joty, Abdelali and Vogel (MT Summit, 2015).
# "Using Joint Models for Domain Adaptation in Statistical Machine Translation"
#
#interpolated-operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64 --tune /path-to-tune-folder/tune_file"
# Interpolated OSM can only be used with SRILM because of the interpolation script.
# if OSM training should be skipped, point to OSM Model
#osm-model =


@@ -411,9 +411,30 @@ alignment-symmetrization-method = grow-diag-final-and
#operation-sequence-model-order = 5
#operation-sequence-model-settings = "-lmplz '$moses-src-dir/bin/lmplz -S 40% -T $working-dir/model/tmp'"
#
# OR if you want to use with SRILM
#
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64"
## Class-based Operation Sequence Model (OSM)
# to build the OSM over additional factors, specify them as below.
# Durrani, Koehn, Schmid, Fraser (COLING, 2014).
# "Investigating the Usefulness of Generalized Word Representations in SMT"
#
#operation-sequence-model-settings = "--factor 0-0+1-1"
## Interpolated Operation Sequence Model (OSM)
# if OSM has to be enabled with factors then add factors as below.
# Durrani, Sajjad, Joty, Abdelali and Vogel (Mt Summit, 2015).
# Using Joint Models for Domain Adaptation in Statistical Machine Translation
#
#interpolated-operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64 --tune /path-to-tune-folder/tune_file"
#Interpolated OSM can only be used with SRILM because of the interpolation script
# if OSM training should be skipped, point to OSM Model
#osm-model =
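For the interpolated variant, the corresponding settings would be enabled roughly as follows. Again a sketch only: the SRILM directory and tuning-file paths are placeholders, and SRILM is required because the interpolation script depends on it.

```
# Interpolated OSM: requires SRILM; both paths below are placeholders.
operation-sequence-model = "yes"
interpolated-operation-sequence-model = "yes"
operation-sequence-model-order = 5
operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64 --tune /path-to-tune-folder/tune_file"
```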
### unsupervised transliteration module
# Durrani, Sajjad, Hoang and Koehn (EACL, 2014).
# "Integrating an Unsupervised Transliteration Model


@ -373,8 +373,30 @@ alignment-symmetrization-method = grow-diag-final-and
#
#operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = ""
#operation-sequence-model-settings = "-lmplz '$moses-src-dir/bin/lmplz -S 40% -T $working-dir/model/tmp'"
#
# OR if you want to use it with SRILM
#
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64"
## Class-based Operation Sequence Model (OSM)
# To enable OSM over factors, specify the factor mapping as below.
# Durrani, Koehn, Schmid, Fraser (COLING, 2014).
# "Investigating the Usefulness of Generalized Word Representations in SMT"
#
#operation-sequence-model-settings = "--factor 0-0+1-1"
## Interpolated Operation Sequence Model (OSM)
# To enable interpolated OSM, provide a tuning set as below.
# Durrani, Sajjad, Joty, Abdelali and Vogel (MT Summit, 2015).
# "Using Joint Models for Domain Adaptation in Statistical Machine Translation"
#
#interpolated-operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64 --tune /path-to-tune-folder/tune_file"
# Interpolated OSM can only be used with SRILM because of the interpolation script.
# If OSM training should be skipped, point to an existing OSM model.
#osm-model =


@ -389,8 +389,30 @@ alignment-symmetrization-method = grow-diag-final-and
#
#operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = ""
#operation-sequence-model-settings = "-lmplz '$moses-src-dir/bin/lmplz -S 40% -T $working-dir/model/tmp'"
#
# OR if you want to use with SRILM
#
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64"
## Class-based Operation Sequence Model (OSM)
# if OSM has to be enabled with factors then add factors as below.
# Durrani, Koehn, Schmid, Fraser (COLING, 2014).
#Investigating the Usefulness of Generalized Word Representations in SMT
#
#operation-sequence-model-settings = "--factor 0-0+1-1"
## Interpolated Operation Sequence Model (OSM)
# if OSM has to be enabled with factors then add factors as below.
# Durrani, Sajjad, Joty, Abdelali and Vogel (Mt Summit, 2015).
# Using Joint Models for Domain Adaptation in Statistical Machine Translation
#
#interpolated-operation-sequence-model = "yes"
#operation-sequence-model-order = 5
#operation-sequence-model-settings = "--srilm-dir /path-to-srilm/bin/i686-m64 --tune /path-to-tune-folder/tune_file"
#Interpolated OSM can only be used with SRILM because of the interpolation script
# if OSM training should be skipped, point to OSM Model
#osm-model =
