* simplify BlockStore, now that we only support v2
* simplify CoinStore, now that we only support v2
* add test for BlockStore.get_peak()
* remove left-over database v1 support in test_hint_store
* extend test for BlockStore
* remove residual database v2 logic from test_block_height_map
* don't run tests with database schema v1
* remove support for database schema v1 from BlockStore
* remove support for database schema v1 from CoinStore
* remove support for database schema v1 from HintStore
* remove support for v1 BlockStore schema from blockchain reorg logic
* remove support for database schema v1 from BlockHeightMap
* run block store tests both with and without the cache
* add test with empty blockchain for BlockHeightMap
* fix typo
* use solution_generator() from Rust
* farm blocks with backrefs
* update BlockTools to generate block generators serialized with backrefs
* support deserializing programs with backrefs
* update passed_plot_filter() to take the filter size rather than the consensus constants. This allows the filter size to change by block height. Make verify_and_get_quality_string() take either the height or the filter size
* Add some filter_prefix_bits tests on test blocks.
* Add some filter_prefix_bits tests on simulated farmer and harvester.
* Cache filter prefix bits by challenge chain signage point hash and use that for the lookups. This allows us to perform plot filter validation.
* Add more cases to verify_and_get_quality_string() unit tests.
* Add some tests for Farmer's respond_signatures.
* Apply Kevin's suggestion to simplify the check for passing plot filter.
* Apply Kevin's suggestions to simplify some test checks and fix a couple typos.
* Apply Kevin's suggestion to send peak height instead of filter prefix bits as part of NewSignagePoint.
* Remove the no-longer-needed filter prefix bit logic and make height non-optional in verify_and_get_quality_string().
---------
Co-authored-by: Amine Khaldi <amine.khaldi@reactos.org>
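The filter-size change above can be sketched as follows. This is a hedged illustration, not the actual Chia implementation: the function name `passed_plot_filter` comes from the commits, but the parameter list and the composition of the filter hash are assumptions made for the example.

```python
import hashlib


def passed_plot_filter(prefix_bits: int, plot_id: bytes, challenge_hash: bytes, sp_hash: bytes) -> bool:
    # Illustrative sketch: the plot passes the filter when the top
    # `prefix_bits` bits of the filter hash are all zero. Taking the bit
    # count as a parameter (instead of reading it from consensus
    # constants) lets callers vary it by block height.
    filter_hash = hashlib.sha256(plot_id + challenge_hash + sp_hash).digest()
    if prefix_bits == 0:
        return True
    return int.from_bytes(filter_hash, "big") >> (256 - prefix_bits) == 0
```

Note that a plot passing a stricter filter (more prefix bits) necessarily passes any looser one, which is what makes a height-based schedule of filter sizes safe to evaluate locally.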
* add HARD_FORK_2_0 consensus_mode to run tests under hard-fork consensus rules
* hook up soft-fork and hard-fork changes in CLVM
* fix AGG_SIG_* garbage tests for the 2.0 hard-fork
* add test for unknown conditions with cost, which is a hard-fork feature
* add mempool tests for unknown conditions with cost
* add tests for the SOFTFORK condition
* fix benchmark to take LIMIT_ANNOUNCES into account
* server: Introduce `ApiProtocol`
* genericize (#5)
* `ApiProtocol.api_ready` -> `ApiProtocol.ready()`
* Add `ApiProtocol.log` and give APIs separate loggers
* Fix `CrawlerAPI`
* Drop some unrelated removals
* Fix some of the generic hinting
* Revert some changes in `timelord_api.py`
* Fix `CrawlerAPI` readiness
* Fix hinting
* Get some `CrawlerAPI` coverage
---------
Co-authored-by: Kyle Altendorf <sda@fstab.net>
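The shape of the `ApiProtocol` refactor might look like the sketch below. The names `ApiProtocol`, `ready()`, and `log` come from the commits; the `CrawlerApi` body is a hypothetical conforming implementation for illustration only.

```python
import logging
from typing import Protocol, runtime_checkable


@runtime_checkable
class ApiProtocol(Protocol):
    # Each API carries its own logger instead of sharing a module-level one.
    log: logging.Logger

    def ready(self) -> bool:
        # Replaces the old `api_ready` attribute with a method.
        ...


class CrawlerApi:
    # Minimal conforming implementation (illustrative only).
    def __init__(self) -> None:
        self.log = logging.getLogger(type(self).__name__)
        self._started = False

    def start(self) -> None:
        self._started = True

    def ready(self) -> bool:
        return self._started
```

Structural typing means `CrawlerApi` never has to inherit from `ApiProtocol`; any class with a `log` attribute and a `ready()` method satisfies it.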
* explore a simplification of the blockchain priority lock queue
* add some tests
* correct task tracking
* use time.perf_counter for better resolution on windows
* just count an integer for request order retention
* stop using time in the (new) tests as well
* add tests and a couple no covers
* less existing test refactoring
* use a sync PriorityQueue
* switch to deques
* address bugs and simplify priority to deque mapping
* remove unused attribute ._priority_type
* make LockQueue.create() not async
* explain the active element check on wait cancellation
* drop LockQueue._process()
* import final from typing_extensions
* rename LockQueue to PriorityMutex
* remove test from mypy exclusions
* clean up straggling lock references
* ignore test failure case line coverage
* add a monkeypatch test
* remove queued callback feature
* remove todos
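The commits above converge on a priority mutex built from one deque of waiters per priority. The sketch below is an assumption-laden illustration of that design, not the actual `PriorityMutex`: the `Priority` levels and method names other than the class name are invented for the example. A deque preserves arrival order on its own, which is why neither timestamps nor `time.perf_counter` are needed, and the cancellation branch shows the "active element check" one commit mentions.

```python
import asyncio
import contextlib
from collections import deque
from enum import IntEnum
from typing import AsyncIterator, Deque, Dict


class Priority(IntEnum):
    high = 0
    low = 1


class PriorityMutex:
    # One FIFO deque of waiter futures per priority level.
    def __init__(self) -> None:
        self._queues: Dict[Priority, Deque[asyncio.Future]] = {p: deque() for p in Priority}
        self._locked = False

    @contextlib.asynccontextmanager
    async def acquire(self, priority: Priority) -> AsyncIterator[None]:
        if self._locked:
            waiter: asyncio.Future = asyncio.get_running_loop().create_future()
            self._queues[priority].append(waiter)
            try:
                await waiter
            except asyncio.CancelledError:
                if waiter.done() and not waiter.cancelled():
                    # The lock was handed to us just as we were cancelled:
                    # pass it along instead of leaking it.
                    self._release()
                else:
                    self._queues[priority].remove(waiter)
                raise
        else:
            self._locked = True
        try:
            yield
        finally:
            self._release()

    def _release(self) -> None:
        # Wake the oldest waiter of the highest priority, skipping
        # futures whose tasks were cancelled while waiting.
        for queue in self._queues.values():  # definition order: high first
            while queue:
                waiter = queue.popleft()
                if not waiter.cancelled():
                    waiter.set_result(None)
                    return
        self._locked = False
```

A higher-priority waiter that arrives after a lower-priority one still acquires the lock first, while same-priority waiters go strictly in arrival order.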
* remove soft-fork logic for the 1.7 soft-fork (which has already activated)
* add new constants for soft-fork3, hard-fork and the other plot-filter adjustments
* Exploring identical spend aggregation.
* Attempt to address review.
* Revert this.
* Explore spend-based processing instead of the bundle-based one.
* Attempt to address reviews.
* Update against chia_rs and add some improvements.
* Leverage the fact that we're in a function now to perform early exits.
* Explore propagating exceptions.
* Relax the exception handling.
* Add unit tests.
* Add some comments and perform some renames for more readability.
* Refactor tests and split them by scenario.
* Take Arvid's suggestion to simplify the check.
* Improve test readability.
* Make the set explicit instead of computing it with union().
* Run in MEMPOOL_MODE for cost and additions.
* Use int_from_bytes() instead of int.from_bytes().
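The distinction matters because CLVM integers are signed, big-endian, two's-complement, while `int.from_bytes()` defaults to unsigned. A minimal sketch of the behavior (the real helper lives in the clvm library; this stand-in only illustrates the sign difference):

```python
def int_from_bytes(blob: bytes) -> int:
    # CLVM atoms encode integers as signed, big-endian, two's-complement
    # values; plain int.from_bytes() defaults to unsigned, which differs
    # whenever the high bit of the first byte is set.
    return int.from_bytes(blob, "big", signed=True)
```

For example, the atom `b"\xff"` is -1 under CLVM semantics but 255 when read unsigned, so coin amounts parsed the wrong way could silently flip sign checks.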
* Add more unit tests for run_for_cost_and_additions (to cover coin amount 0 as well as a negative amount).
* Don't use as_python() for created coins.
* Account for the cost of create coin conditions and extract additions from the NPCResult.
* Rely on NPCResult for the maximum allowed cost.
* Stop the search for the relevant spend at its first occurrence (then process its created coins).
* Add a unit test for spending a coin in different ways then finding it spent in new peak.
This makes sure all mempool items that would spend that coin get removed.
* Parameterize the test for spending a coin in different ways then finding it spent in new peak, to account for both the optimized and the reorg code paths.
* Keep track of coin spend costs and additions in mempool items in case they don't make it into the block, so we don't recompute them next time.
* Extend replace by fee tests with eligibility scenarios.
* Refactor find_duplicate_spends into get_deduplication_info and process_mempool_items into create_bundle_from_mempool_items.
Slightly refactor check_removals and create_bundle_from_mempool.
* Cast to uint64 in the return statement.
* Adapt to the recent addition of seq.
* Add unit tests for mempool items' bundle_coin_spends.
* Leverage the recently added make_test_conds when creating this npc_result.
* Pass in bundle_coin_spends and max_cost instead of InternalMempoolItem so that it remains internal.
* Add a test for the scenario where we pick a sub-optimal aggregation by deduplicating on solution A, seen in a relatively higher FPC item, then receiving better cost saving items on solution B after that one.
* Add some end-to-end tests for identical spend aggregation.
* Add a comment documenting why eligible_coin_spends map is ephemeral.
* Make conflicting_items a list instead of a set, as we only iterate over it and don't rely on set semantics.
This simplifies using can_replace() and also prepares the ground to simplify check_removals() next.
* Optimize check_removals by checking for mempool conflicts all at once.
Use "item" instead of "spend" in the names of the mempool's item functions.
This keeps them consistent with the mempool's internal items map (_items) and _row_to_item, as well as the return types.
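The batched conflict check can be sketched like this. The function name `check_removals` comes from the commits, but `spent_index` (a map from spent coin id to the mempool items spending it) and the item-id strings are hypothetical stand-ins for the example.

```python
from typing import Dict, List, Set


def check_removals(
    removals: Set[bytes],
    spent_index: Dict[bytes, List[str]],
) -> List[str]:
    # Hypothetical sketch: intersect the index's key view with the
    # removals set to find every conflicting mempool item in one pass,
    # instead of issuing one mempool query per removal.
    conflicts: List[str] = []
    for coin_id in removals & spent_index.keys():
        conflicts.extend(spent_index[coin_id])
    return conflicts
```

Returning a list rather than a set matches the earlier change: callers only iterate over the conflicts, so set semantics buy nothing.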
* remove unused run_generator_mempool() function
* move run_generator_unsafe() into the tests, which is the only place it's used
* remove (somewhat meaningless) setup_generator_args() and create_generator_args()
* remove unused GENERATOR_MOD in mempool_check_conditions.py
* remove redundant get_generator() function
* transition analyze-chain.py to use run_block_generator() and drop dependency on GENERATOR_MOD
* fixup type hints in test_rom.py
* fixup type hints in test_compression.py
* fixup type hints in test_generator_types.py