Mirror of https://github.com/osm-search/Nominatim.git (synced 2024-10-05 15:08:32 +03:00)

Commit: 7205491b84 ("Correct some typos")
Parent: 918fec73c6
@@ -69,7 +69,7 @@ Before submitting a pull request make sure that the tests pass:
 
 Nominatim follows semantic versioning. Major releases are done for large changes
 that require (or at least strongly recommend) a reimport of the databases.
-Minor releases can usually be applied to exisiting databases. Patch releases
+Minor releases can usually be applied to existing databases. Patch releases
 contain bug fixes only and are released from a separate branch where the
 relevant changes are cherry-picked from the master branch.
 
ChangeLog (12 changed lines)
@@ -23,7 +23,7 @@
 * new documentation section for library
 * various smaller fixes to existing documentation
 (thanks @woodpeck, @bloom256, @biswajit-k)
-* updates to vagrant install scripts, drop support for Ubunut 18
+* updates to vagrant install scripts, drop support for Ubuntu 18
 (thanks @n-timofeev)
 * removed obsolete configuration variables from env.defaults
 * add script for generating a taginfo description (thanks @biswajit-k)
@@ -240,7 +240,7 @@
 * increase splitting for large geometries to improve indexing speed
 * remove deprecated get_magic_quotes_gpc() function
 * make sure that all postcodes have an entry in word and are thus searchable
-* remove use of ST_Covers in conjunction woth ST_Intersects,
+* remove use of ST_Covers in conjunction with ST_Intersects,
 causes bad query planning and slow updates in Postgis3
 * update osm2pgsql
 
@@ -297,7 +297,7 @@
 * exclude postcode ranges separated by colon from centre point calculation
 * update osm2pgsql, better handling of imports without flatnode file
 * switch to more efficient algorithm for word set computation
-* use only boundries for country and state parts of addresses
+* use only boundaries for country and state parts of addresses
 * improve updates of addresses with housenumbers and interpolations
 * remove country from place_addressline table and use country_code instead
 * optimise indexes on search_name partition tables
@@ -336,7 +336,7 @@
 
 * complete rewrite of reverse search algorithm
 * add new geojson and geocodejson output formats
-* add simple export script to exprot addresses to CSV
+* add simple export script to export addresses to CSV
 * remove is_in terms from address computation
 * remove unused search_name_country tables
 * various smaller fixes to query parsing
@@ -401,7 +401,7 @@
 * move installation documentation into this repo
 * add self-documenting vagrant scripts
 * remove --create-website, recommend to use website directory in build
-* add accessor functions for URL parameters and improve erro checking
+* add accessor functions for URL parameters and improve error checking
 * remove IP blocking and rate-limiting code
 * enable CI via travis
 * reformatting for more consistent coding style
@@ -412,7 +412,7 @@
 * update to refactored osm2pgsql which use libosmium based types
 * switch from osmosis to pyosmium for updates
 * be more strict when matching against special search terms
-* handle postcode entries with mutliple values correctly
+* handle postcode entries with multiple values correctly
 
 2.5
 
@@ -44,7 +44,7 @@ Next you need to set up the service that runs the Nominatim frontend. This is
 easiest done with a systemd job.
 
 First you need to tell systemd to create a socket file to be used by
-hunicorn. Crate the following file `/etc/systemd/system/nominatim.socket`:
+hunicorn. Create the following file `/etc/systemd/system/nominatim.socket`:
 
 ``` systemd
 [Unit]
@@ -165,7 +165,7 @@ The `railway` layer includes railway infrastructure like tracks.
 Note that in Nominatim's standard configuration, only very few railway
 features are imported into the database.
 
-The `natural` layer collects feautures like rivers, lakes and mountains while
+The `natural` layer collects features like rivers, lakes and mountains while
 the `manmade` layer functions as a catch-all for features not covered by the
 other layers.
 
@@ -179,7 +179,7 @@ also excluded when the filter is set.
 This parameter should not be confused with the 'country' parameter of
 the structured query. The 'country' parameter contains a search term
 and will be handled with some fuzziness. The `countrycodes` parameter
-is a hard filter and as such should be prefered. Having both parameters
+is a hard filter and as such should be preferred. Having both parameters
 in the same query will work. If the parameters contradict each other,
 the search will come up empty.
 
@@ -203,7 +203,7 @@ The `railway` layer includes railway infrastructure like tracks.
 Note that in Nominatim's standard configuration, only very few railway
 features are imported into the database.
 
-The `natural` layer collects feautures like rivers, lakes and mountains while
+The `natural` layer collects features like rivers, lakes and mountains while
 the `manmade` layer functions as a catch-all for features not covered by the
 other layers.
 
@@ -217,7 +217,7 @@ the 'state', 'country' or 'city' part of an address. A featureType of
 settlement selects any human inhabited feature from 'state' down to
 'neighbourhood'.
 
-When featureType ist set, then results are automatically restricted
+When featureType is set, then results are automatically restricted
 to the address layer (see above).
 
 !!! tip
@@ -227,7 +227,7 @@ to the address layer (see above).
 
 | Parameter | Value | Default |
 |-----------| ----- | ------- |
-| exclude_place_ids | comma-separeted list of place ids |
+| exclude_place_ids | comma-separated list of place ids |
 
 If you do not want certain OSM objects to appear in the search
 result, give a comma separated list of the `place_id`s you want to skip.
@@ -248,7 +248,7 @@ box. `x` is longitude, `y` is latitude.
 | bounded | 0 or 1 | 0 |
 
 When set to 1, then it turns the 'viewbox' parameter (see above) into
-a filter paramter, excluding any results outside the viewbox.
+a filter parameter, excluding any results outside the viewbox.
 
 When `bounded=1` is given and the viewbox is small enough, then an amenity-only
 search is allowed. Give the special keyword for the amenity in square
@@ -280,7 +280,7 @@ kinds of geometries can be used:
 * __relation_as_multipolygon__ creates a (Multi)Polygon from the ways in
 the relation. If the ways do not form a valid area, then the object is
 silently discarded.
-* __relation_as_multiline__ creates a (Mutli)LineString from the ways in
+* __relation_as_multiline__ creates a (Multi)LineString from the ways in
 the relations. Ways are combined as much as possible without any regards
 to their order in the relation.
 
@@ -394,7 +394,7 @@ The analyzer cannot be customized.
 ##### Postcode token analyzer
 
 The analyzer `postcodes` is pupose-made to analyze postcodes. It supports
-a 'lookup' varaint of the token, which produces variants with optional
+a 'lookup' variant of the token, which produces variants with optional
 spaces. Use together with the clean-postcodes sanitizer.
 
 The analyzer cannot be customized.
@@ -129,7 +129,7 @@ sanitizers:
 !!! warning
 This example is just a simplified show case on how to create a sanitizer.
 It is not really read for real-world use: while the sanitizer would
-correcly transform `West 5th Street` into `5th Street`. it would also
+correctly transform `West 5th Street` into `5th Street`. it would also
 shorten a simple `North Street` to `Street`.
 
 For more sanitizer examples, have a look at the sanitizers provided by Nominatim.
@@ -10,7 +10,7 @@ There are two kind of tests in this test suite. There are functional tests
 which test the API interface using a BDD test framework and there are unit
 tests for specific PHP functions.
 
-This test directory is sturctured as follows:
+This test directory is structured as follows:
 
 ```
 -+- bdd Functional API tests
@@ -18,7 +18,7 @@ elseif (has 'addr:place'?) then (yes)
 **with same name**;
 kill
 else (no)
-:add addr:place to adress;
+:add addr:place to address;
 :**Use closest place**\n**rank 16 to 25**;
 kill
 endif
(File diff suppressed because one or more lines are too long)
(Binary image changed; size 9.8 KiB before and after)
@@ -347,7 +347,7 @@ BEGIN
 END LOOP;
 END IF;
 
--- consider parts before an opening braket a full word as well
+-- consider parts before an opening bracket a full word as well
 words := regexp_split_to_array(value, E'[(]');
 IF array_upper(words, 1) > 1 THEN
 s := make_standard_name(words[1]);
@@ -374,7 +374,7 @@ class NominatimAPI:
 """ Close all active connections to the database.
 
 This function also closes the asynchronous worker loop making
-the NominatimAPI object unusuable.
+the NominatimAPI object unusable.
 """
 self._loop.run_until_complete(self._async_api.close())
 self._loop.close()
@@ -447,7 +447,7 @@ class NominatimAPI:
 place. Only meaning full for POI-like objects (places with a
 rank_address of 30).
 linked_place_id (Optional[int]): Internal ID of the place this object
-linkes to. When this ID is set then there is no guarantee that
+links to. When this ID is set then there is no guarantee that
 the rest of the result information is complete.
 admin_level (int): Value of the `admin_level` OSM tag. Only meaningful
 for administrative boundary objects.
@@ -84,7 +84,7 @@ class BaseLogger:
 def format_sql(self, conn: AsyncConnection, statement: 'sa.Executable',
 extra_params: Union[Mapping[str, Any],
 Sequence[Mapping[str, Any]], None]) -> str:
-""" Return the comiled version of the statement.
+""" Return the compiled version of the statement.
 """
 compiled = cast('sa.ClauseElement', statement).compile(conn.sync_engine)
 
@@ -5,7 +5,7 @@
 # Copyright (C) 2023 by the Nominatim developer community.
 # For a full list of authors see the git log.
 """
-Helper classes and functions for formating results into API responses.
+Helper classes and functions for formatting results into API responses.
 """
 from typing import Type, TypeVar, Dict, List, Callable, Any, Mapping
 from collections import defaultdict
@@ -466,7 +466,7 @@ async def add_result_details(conn: SearchConnection, results: List[BaseResultT],
 
 
 def _result_row_to_address_row(row: SaRow, isaddress: Optional[bool] = None) -> AddressLine:
-""" Create a new AddressLine from the results of a datbase query.
+""" Create a new AddressLine from the results of a database query.
 """
 extratags: Dict[str, str] = getattr(row, 'extratags', {}) or {}
 if 'linked_place' in extratags:
@@ -175,7 +175,7 @@ class ReverseGeocoder:
 t = self.conn.t.placex
 
 # PostgreSQL must not get the distance as a parameter because
-# there is a danger it won't be able to proberly estimate index use
+# there is a danger it won't be able to properly estimate index use
 # when used with prepared statements
 diststr = sa.text(f"{distance}")
 
@@ -94,7 +94,7 @@ class RankedTokens:
 
 def with_token(self, t: Token, transition_penalty: float) -> 'RankedTokens':
 """ Create a new RankedTokens list with the given token appended.
-The tokens penalty as well as the given transision penalty
+The tokens penalty as well as the given transition penalty
 are added to the overall penalty.
 """
 return RankedTokens(self.penalty + t.penalty + transition_penalty,
@@ -5,7 +5,7 @@
 # Copyright (C) 2023 by the Nominatim developer community.
 # For a full list of authors see the git log.
 """
-Implementation of the acutal database accesses for forward search.
+Implementation of the actual database accesses for forward search.
 """
 from typing import List, Tuple, AsyncIterator, Dict, Any, Callable, cast
 import abc
@@ -44,7 +44,7 @@ class LegacyToken(qmod.Token):
 
 @property
 def info(self) -> Dict[str, Any]:
-""" Dictionary of additional propoerties of the token.
+""" Dictionary of additional properties of the token.
 Should only be used for debugging purposes.
 """
 return {'category': self.category,
@@ -169,7 +169,7 @@ class TokenList:
 
 @dataclasses.dataclass
 class QueryNode:
-""" A node of the querry representing a break between terms.
+""" A node of the query representing a break between terms.
 """
 btype: BreakType
 ptype: PhraseType
@@ -19,7 +19,7 @@ if TYPE_CHECKING:
 from nominatim.api.search.query import Phrase, QueryStruct
 
 class AbstractQueryAnalyzer(ABC):
-""" Class for analysing incomming queries.
+""" Class for analysing incoming queries.
 
 Query analyzers are tied to the tokenizer used on import.
 """
@@ -72,7 +72,7 @@ class TokenAssignment: # pylint: disable=too-many-instance-attributes
 
 
 class _TokenSequence:
-""" Working state used to put together the token assignements.
+""" Working state used to put together the token assignments.
 
 Represents an intermediate state while traversing the tokenized
 query.
@@ -132,7 +132,7 @@ class _TokenSequence:
 
 # Name tokens are always acceptable and don't change direction
 if ttype == qmod.TokenType.PARTIAL:
-# qualifiers cannot appear in the middle of the qeury. They need
+# qualifiers cannot appear in the middle of the query. They need
 # to be near the next phrase.
 if self.direction == -1 \
 and any(t.ttype == qmod.TokenType.QUALIFIER for t in self.seq[:-1]):
@@ -238,10 +238,10 @@ class _TokenSequence:
 
 def recheck_sequence(self) -> bool:
 """ Check that the sequence is a fully valid token assignment
-and addapt direction and penalties further if necessary.
+and adapt direction and penalties further if necessary.
 
 This function catches some impossible assignments that need
-forward context and can therefore not be exluded when building
+forward context and can therefore not be excluded when building
 the assignment.
 """
 # housenumbers may not be further than 2 words from the beginning.
@@ -277,10 +277,10 @@ class _TokenSequence:
 # <address>,<postcode> should give preference to address search
 if base.postcode.start == 0:
 penalty = self.penalty
-self.direction = -1 # name searches are only possbile backwards
+self.direction = -1 # name searches are only possible backwards
 else:
 penalty = self.penalty + 0.1
-self.direction = 1 # name searches are only possbile forwards
+self.direction = 1 # name searches are only possible forwards
 yield dataclasses.replace(base, penalty=penalty)
 
 
@@ -5,7 +5,7 @@
 # Copyright (C) 2023 by the Nominatim developer community.
 # For a full list of authors see the git log.
 """
-Classes and function releated to status call.
+Classes and function related to status call.
 """
 from typing import Optional
 import datetime as dt
@@ -316,7 +316,7 @@ class DataLayer(enum.Flag):
 for reverse and forward search.
 """
 ADDRESS = enum.auto()
-""" The address layer contains all places relavant for addresses:
+""" The address layer contains all places relevant for addresses:
 fully qualified addresses with a house number (or a house name equivalent,
 for some addresses) and places that can be part of an address like
 roads, cities, states.
@@ -415,7 +415,7 @@ class LookupDetails:
 more the geometry gets simplified.
 """
 locales: Locales = Locales()
-""" Prefered languages for localization of results.
+""" Preferred languages for localization of results.
 """
 
 @classmethod
@@ -544,7 +544,7 @@ class SearchDetails(LookupDetails):
 
 
 def layer_enabled(self, layer: DataLayer) -> bool:
-""" Check if the given layer has been choosen. Also returns
+""" Check if the given layer has been chosen. Also returns
 true when layer restriction has been disabled completely.
 """
 return self.layers is None or bool(self.layers & layer)
@@ -5,7 +5,7 @@
 # Copyright (C) 2023 by the Nominatim developer community.
 # For a full list of authors see the git log.
 """
-Hard-coded information about tag catagories.
+Hard-coded information about tag categories.
 
 These tables have been copied verbatim from the old PHP code. For future
 version a more flexible formatting is required.
@@ -44,7 +44,7 @@ def get_label_tag(category: Tuple[str, str], extratags: Optional[Mapping[str, st
 def bbox_from_result(result: Union[napi.ReverseResult, napi.SearchResult]) -> napi.Bbox:
 """ Compute a bounding box for the result. For ways and relations
 a given boundingbox is used. For all other object, a box is computed
-around the centroid according to dimensions dereived from the
+around the centroid according to dimensions derived from the
 search rank.
 """
 if (result.osm_object and result.osm_object[0] == 'N') or result.bbox is None:
@@ -155,7 +155,7 @@ COORD_REGEX = [re.compile(r'(?:(?P<pre>.*?)\s+)??' + r + r'(?:\s+(?P<post>.*))?'
 )]
 
 def extract_coords_from_query(query: str) -> Tuple[str, Optional[float], Optional[float]]:
-""" Look for something that is formated like a coordinate at the
+""" Look for something that is formatted like a coordinate at the
 beginning or end of the query. If found, extract the coordinate and
 return the remaining query (or the empty string if the query
 consisted of nothing but a coordinate).
@@ -240,7 +240,7 @@ class ASGIAdaptor(abc.ABC):
 
 
 def parse_geometry_details(self, fmt: str) -> Dict[str, Any]:
-""" Create details strucutre from the supplied geometry parameters.
+""" Create details structure from the supplied geometry parameters.
 """
 numgeoms = 0
 output = napi.GeometryFormat.NONE
@@ -531,7 +531,7 @@ async def deletable_endpoint(api: napi.NominatimAPIAsync, params: ASGIAdaptor) -
 async def polygons_endpoint(api: napi.NominatimAPIAsync, params: ASGIAdaptor) -> Any:
 """ Server glue for /polygons endpoint.
 This is a special endpoint that shows polygons that have changed
-thier size but are kept in the Nominatim database with their
+their size but are kept in the Nominatim database with their
 old area to minimize disruption.
 """
 fmt = params.parse_format(RawDataList, 'json')
@@ -5,7 +5,7 @@
 # Copyright (C) 2023 by the Nominatim developer community.
 # For a full list of authors see the git log.
 """
-Import the base libary to use with asynchronous SQLAlchemy.
+Import the base library to use with asynchronous SQLAlchemy.
 """
 # pylint: disable=invalid-name
 
@@ -115,7 +115,7 @@ class SQLPreprocessor:
 
 def run_parallel_sql_file(self, dsn: str, name: str, num_threads: int = 1,
 **kwargs: Any) -> None:
-""" Execure the given SQL files using parallel asynchronous connections.
+""" Execute the given SQL files using parallel asynchronous connections.
 The keyword arguments may supply additional parameters for
 preprocessing.
 
@@ -26,7 +26,7 @@ def weigh_search(search_vector: Optional[str], rankings: str, default: float) ->
 
 class ArrayIntersectFuzzy:
 """ Compute the array of common elements of all input integer arrays.
-Very large input paramenters may be ignored to speed up
+Very large input parameters may be ignored to speed up
 computation. Therefore, the result is a superset of common elements.
 
 Input and output arrays are given as comma-separated lists.
@ -128,7 +128,7 @@ class FileLoggingMiddleware:
|
|||||||
resource: Optional[EndpointWrapper],
|
resource: Optional[EndpointWrapper],
|
||||||
req_succeeded: bool) -> None:
|
req_succeeded: bool) -> None:
|
||||||
""" Callback after requests writes to the logfile. It only
|
""" Callback after requests writes to the logfile. It only
|
||||||
writes logs for sucessful requests for search, reverse and lookup.
|
writes logs for successful requests for search, reverse and lookup.
|
||||||
"""
|
"""
|
||||||
if not req_succeeded or resource is None or resp.status != 200\
|
if not req_succeeded or resource is None or resp.status != 200\
|
||||||
or resource.name not in ('reverse', 'search', 'lookup', 'details'):
|
or resource.name not in ('reverse', 'search', 'lookup', 'details'):
|
||||||
|
@ -286,7 +286,7 @@ class ICUTokenizer(AbstractTokenizer):
|
|||||||
|
|
||||||
|
|
||||||
def _create_lookup_indices(self, config: Configuration, table_name: str) -> None:
|
def _create_lookup_indices(self, config: Configuration, table_name: str) -> None:
|
||||||
""" Create addtional indexes used when running the API.
|
""" Create additional indexes used when running the API.
|
||||||
"""
|
"""
|
||||||
with connect(self.dsn) as conn:
|
with connect(self.dsn) as conn:
|
||||||
sqlp = SQLPreprocessor(conn, config)
|
sqlp = SQLPreprocessor(conn, config)
|
||||||
|
@ -60,5 +60,5 @@ class SanitizerHandler(Protocol):
|
|||||||
|
|
||||||
Return:
|
Return:
|
||||||
The result must be a callable that takes a place description
|
The result must be a callable that takes a place description
|
||||||
and transforms name and address as reuqired.
|
and transforms name and address as required.
|
||||||
"""
|
"""
|
||||||
|
@ -127,7 +127,7 @@ def _get_indexes(conn: Connection) -> List[str]:
|
|||||||
|
|
||||||
# CHECK FUNCTIONS
|
# CHECK FUNCTIONS
|
||||||
#
|
#
|
||||||
# Functions are exectured in the order they appear here.
|
# Functions are executed in the order they appear here.
|
||||||
|
|
||||||
@_check(hint="""\
|
@_check(hint="""\
|
||||||
{error}
|
{error}
|
||||||
|
@ -78,7 +78,7 @@ def from_file_find_line_portion(
|
|||||||
filename: str, start: str, sep: str, fieldnum: int = 1
|
filename: str, start: str, sep: str, fieldnum: int = 1
|
||||||
) -> Optional[str]:
|
) -> Optional[str]:
|
||||||
"""open filename, finds the line starting with the 'start' string.
|
"""open filename, finds the line starting with the 'start' string.
|
||||||
Splits the line using seperator and returns a "fieldnum" from the split."""
|
Splits the line using separator and returns a "fieldnum" from the split."""
|
||||||
with open(filename, encoding='utf8') as file:
|
with open(filename, encoding='utf8') as file:
|
||||||
result = ""
|
result = ""
|
||||||
for line in file:
|
for line in file:
|
||||||
|
@ -120,7 +120,7 @@ Feature: Simple Tests
|
|||||||
| querystring | pub |
|
| querystring | pub |
|
||||||
| viewbox | 12,33,77,45.13 |
|
| viewbox | 12,33,77,45.13 |
|
||||||
|
|
||||||
Scenario: Empty XML search with exluded place ids
|
Scenario: Empty XML search with excluded place ids
|
||||||
When sending xml search query "jghrleoxsbwjer"
|
When sending xml search query "jghrleoxsbwjer"
|
||||||
| exclude_place_ids |
|
| exclude_place_ids |
|
||||||
| 123,76,342565 |
|
| 123,76,342565 |
|
||||||
@ -128,7 +128,7 @@ Feature: Simple Tests
|
|||||||
| attr | value |
|
| attr | value |
|
||||||
| exclude_place_ids | 123,76,342565 |
|
| exclude_place_ids | 123,76,342565 |
|
||||||
|
|
||||||
Scenario: Empty XML search with bad exluded place ids
|
Scenario: Empty XML search with bad excluded place ids
|
||||||
When sending xml search query "jghrleoxsbwjer"
|
When sending xml search query "jghrleoxsbwjer"
|
||||||
| exclude_place_ids |
|
| exclude_place_ids |
|
||||||
| , |
|
| , |
|
||||||
|
@ -56,7 +56,7 @@ Feature: Address computation
|
|||||||
| N1 | R1 | True |
|
| N1 | R1 | True |
|
||||||
| N1 | R2 | True |
|
| N1 | R2 | True |
|
||||||
|
|
||||||
Scenario: with boundaries of same rank the one with the closer centroid is prefered
|
Scenario: with boundaries of same rank the one with the closer centroid is preferred
|
||||||
Given the grid
|
Given the grid
|
||||||
| 1 | | | 3 | | 5 |
|
| 1 | | | 3 | | 5 |
|
||||||
| | 9 | | | | |
|
| | 9 | | | | |
|
||||||
|
@ -21,7 +21,7 @@ class GeometryFactory:
|
|||||||
The function understands the following formats:
|
The function understands the following formats:
|
||||||
|
|
||||||
country:<country code>
|
country:<country code>
|
||||||
Point geoemtry guaranteed to be in the given country
|
Point geometry guaranteed to be in the given country
|
||||||
<P>
|
<P>
|
||||||
Point geometry
|
Point geometry
|
||||||
<P>,...,<P>
|
<P>,...,<P>
|
||||||
@ -50,7 +50,7 @@ class GeometryFactory:
|
|||||||
|
|
||||||
def mk_wkt_point(self, point):
|
def mk_wkt_point(self, point):
|
||||||
""" Parse a point description.
|
""" Parse a point description.
|
||||||
The point may either consist of 'x y' cooordinates or a number
|
The point may either consist of 'x y' coordinates or a number
|
||||||
that refers to a grid setup.
|
that refers to a grid setup.
|
||||||
"""
|
"""
|
||||||
geom = point.strip()
|
geom = point.strip()
|
||||||
|
@ -270,8 +270,8 @@ class NominatimEnvironment:
|
|||||||
self.db_drop_database(self.test_db)
|
self.db_drop_database(self.test_db)
|
||||||
|
|
||||||
def _reuse_or_drop_db(self, name):
|
def _reuse_or_drop_db(self, name):
|
||||||
""" Check for the existance of the given DB. If reuse is enabled,
|
""" Check for the existence of the given DB. If reuse is enabled,
|
||||||
then the function checks for existance and returns True if the
|
then the function checks for existnce and returns True if the
|
||||||
database is already there. Otherwise an existing database is
|
database is already there. Otherwise an existing database is
|
||||||
dropped and always false returned.
|
dropped and always false returned.
|
||||||
"""
|
"""
|
||||||
|
@ -164,7 +164,7 @@ def delete_places(context, oids):
|
|||||||
def check_place_contents(context, table, exact):
|
def check_place_contents(context, table, exact):
|
||||||
""" Check contents of place/placex tables. Each row represents a table row
|
""" Check contents of place/placex tables. Each row represents a table row
|
||||||
and all data must match. Data not present in the expected table, may
|
and all data must match. Data not present in the expected table, may
|
||||||
be arbitry. The rows are identified via the 'object' column which must
|
be arbitrary. The rows are identified via the 'object' column which must
|
||||||
have an identifier of the form '<NRW><osm id>[:<class>]'. When multiple
|
have an identifier of the form '<NRW><osm id>[:<class>]'. When multiple
|
||||||
rows match (for example because 'class' was left out and there are
|
rows match (for example because 'class' was left out and there are
|
||||||
multiple entries for the given OSM object) then all must match. All
|
multiple entries for the given OSM object) then all must match. All
|
||||||
@ -211,7 +211,7 @@ def check_place_has_entry(context, table, oid):
|
|||||||
def check_search_name_contents(context, exclude):
|
def check_search_name_contents(context, exclude):
|
||||||
""" Check contents of place/placex tables. Each row represents a table row
|
""" Check contents of place/placex tables. Each row represents a table row
|
||||||
and all data must match. Data not present in the expected table, may
|
and all data must match. Data not present in the expected table, may
|
||||||
be arbitry. The rows are identified via the 'object' column which must
|
be arbitrary. The rows are identified via the 'object' column which must
|
||||||
have an identifier of the form '<NRW><osm id>[:<class>]'. All
|
have an identifier of the form '<NRW><osm id>[:<class>]'. All
|
||||||
expected rows are expected to be present with at least one database row.
|
expected rows are expected to be present with at least one database row.
|
||||||
"""
|
"""
|
||||||
@ -260,7 +260,7 @@ def check_search_name_has_entry(context, oid):
|
|||||||
def check_location_postcode(context):
|
def check_location_postcode(context):
|
||||||
""" Check full contents for location_postcode table. Each row represents a table row
|
""" Check full contents for location_postcode table. Each row represents a table row
|
||||||
and all data must match. Data not present in the expected table, may
|
and all data must match. Data not present in the expected table, may
|
||||||
be arbitry. The rows are identified via 'country' and 'postcode' columns.
|
be arbitrary. The rows are identified via 'country' and 'postcode' columns.
|
||||||
All rows must be present as excepted and there must not be additional
|
All rows must be present as excepted and there must not be additional
|
||||||
rows.
|
rows.
|
||||||
"""
|
"""
|
||||||
@ -317,7 +317,7 @@ def check_word_table_for_postcodes(context, exclude, postcodes):
|
|||||||
def check_place_addressline(context):
|
def check_place_addressline(context):
|
||||||
""" Check the contents of the place_addressline table. Each row represents
|
""" Check the contents of the place_addressline table. Each row represents
|
||||||
a table row and all data must match. Data not present in the expected
|
a table row and all data must match. Data not present in the expected
|
||||||
table, may be arbitry. The rows are identified via the 'object' column,
|
table, may be arbitrary. The rows are identified via the 'object' column,
|
||||||
representing the addressee and the 'address' column, representing the
|
representing the addressee and the 'address' column, representing the
|
||||||
address item.
|
address item.
|
||||||
"""
|
"""
|
||||||
@ -384,7 +384,7 @@ def check_location_property_osmline(context, oid, neg):
|
|||||||
def check_place_contents(context, exact):
|
def check_place_contents(context, exact):
|
||||||
""" Check contents of the interpolation table. Each row represents a table row
|
""" Check contents of the interpolation table. Each row represents a table row
|
||||||
and all data must match. Data not present in the expected table, may
|
and all data must match. Data not present in the expected table, may
|
||||||
be arbitry. The rows are identified via the 'object' column which must
|
be arbitrary. The rows are identified via the 'object' column which must
|
||||||
have an identifier of the form '<osm id>[:<startnumber>]'. When multiple
|
have an identifier of the form '<osm id>[:<startnumber>]'. When multiple
|
||||||
rows match (for example because 'startnumber' was left out and there are
|
rows match (for example because 'startnumber' was left out and there are
|
||||||
multiple entries for the given OSM object) then all must match. All
|
multiple entries for the given OSM object) then all must match. All
|
||||||
|
@ -75,7 +75,7 @@ def define_node_grid(context, grid_step, origin):
|
|||||||
# TODO coordinate
|
# TODO coordinate
|
||||||
coords = origin.split(',')
|
coords = origin.split(',')
|
||||||
if len(coords) != 2:
|
if len(coords) != 2:
|
||||||
raise RuntimeError('Grid origin expects orgin with x,y coordinates.')
|
raise RuntimeError('Grid origin expects origin with x,y coordinates.')
|
||||||
origin = (float(coords[0]), float(coords[1]))
|
origin = (float(coords[0]), float(coords[1]))
|
||||||
elif origin in ALIASES:
|
elif origin in ALIASES:
|
||||||
origin = ALIASES[origin]
|
origin = ALIASES[origin]
|
||||||
|
@ -23,7 +23,7 @@ API_OPTIONS = {'search'}
|
|||||||
|
|
||||||
@pytest.fixture(autouse=True)
|
@pytest.fixture(autouse=True)
|
||||||
def setup_icu_tokenizer(apiobj):
|
def setup_icu_tokenizer(apiobj):
|
||||||
""" Setup the propoerties needed for using the ICU tokenizer.
|
""" Setup the properties needed for using the ICU tokenizer.
|
||||||
"""
|
"""
|
||||||
apiobj.add_data('properties',
|
apiobj.add_data('properties',
|
||||||
[{'property': 'tokenizer', 'value': 'icu'},
|
[{'property': 'tokenizer', 'value': 'icu'},
|
||||||
|
@ -8,7 +8,7 @@
|
|||||||
Tests for command line interface wrapper.
|
Tests for command line interface wrapper.
|
||||||
|
|
||||||
These tests just check that the various command line parameters route to the
|
These tests just check that the various command line parameters route to the
|
||||||
correct functionionality. They use a lot of monkeypatching to avoid executing
|
correct functionality. They use a lot of monkeypatching to avoid executing
|
||||||
the actual functions.
|
the actual functions.
|
||||||
"""
|
"""
|
||||||
import importlib
|
import importlib
|
||||||
|
@ -8,7 +8,7 @@
|
|||||||
Test for the command line interface wrapper admin subcommand.
|
Test for the command line interface wrapper admin subcommand.
|
||||||
|
|
||||||
These tests just check that the various command line parameters route to the
|
These tests just check that the various command line parameters route to the
|
||||||
correct functionionality. They use a lot of monkeypatching to avoid executing
|
correct functionality. They use a lot of monkeypatching to avoid executing
|
||||||
the actual functions.
|
the actual functions.
|
||||||
"""
|
"""
|
||||||
import pytest
|
import pytest
|
||||||
|