* Exploring identical spend aggregation.
* Attempt to address review.
* Revert this.
* Explore spend-based processing instead of the bundle-based one.
* Attempt to address reviews.
* Update against chia_rs and add some improvements.
* Leverage the fact that we're in a function now to perform early exits.
* Explore propagating exceptions.
* Relax the exception handling.
* Add unit tests.
* Add some comments and perform some renames for more readability.
* Refactor tests and split them by scenario.
* Take Arvid's suggestion to simplify the check.
* Improve test readability.
* Make the set explicit instead of computing it with union().
* Run in MEMPOOL_MODE for cost and additions.
* Use int_from_bytes() instead of int.from_bytes().
* Add more unit tests for run_for_cost_and_additions (to cover coin amount 0 as well as a negative amount).
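The distinction between `int_from_bytes()` and `int.from_bytes()` matters for the negative-amount test case above: CLVM encodes integers as signed, big-endian, two's-complement byte strings, while Python's `int.from_bytes()` defaults to unsigned. A minimal sketch of the difference (the helper name here is illustrative, not the chia-blockchain implementation):

```python
# Illustrative sketch: CLVM integers are signed big-endian two's-complement,
# so parsing a coin amount with Python's default (unsigned) int.from_bytes
# would silently turn a negative encoding into a large positive number.
def clvm_int_from_bytes(blob: bytes) -> int:
    """Decode a CLVM-style signed big-endian integer."""
    if len(blob) == 0:
        return 0  # the empty atom is zero
    return int.from_bytes(blob, "big", signed=True)

assert clvm_int_from_bytes(b"") == 0
assert clvm_int_from_bytes(bytes([0x7F])) == 127
# 0xFF decodes to -1 when treated as signed, but 255 when parsed unsigned:
assert clvm_int_from_bytes(bytes([0xFF])) == -1
assert int.from_bytes(bytes([0xFF]), "big") == 255
```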
* Don't use as_python() for created coins.
* Account for the cost of create coin conditions and extract additions from the NPCResult.
* Rely on NPCResult for the maximum allowed cost.
* Stop looking for the relevant spend at the first occurrence (then process its created coins).
* Add a unit test for spending a coin in different ways then finding it spent in new peak.
This makes sure all mempool items that would spend that coin get removed.
* Parameterize the test for spending a coin in different ways then finding it spent in new peak, to account for both the optimized and the reorg code paths.
* Keep track of coin spends' cost and additions in mempool items in case they don't make it into the block, so we don't have to rerun them next time.
* Extend replace by fee tests with eligibility scenarios.
* Refactor find_duplicate_spends into get_deduplication_info and process_mempool_items into create_bundle_from_mempool_items.
Slightly refactor check_removals and create_bundle_from_mempool.
* Cast to uint64 in the return statement.
* Adapt to the recent addition of seq.
* Add unit tests for mempool items' bundle_coin_spends.
* Leverage the recently added make_test_conds when creating this npc_result.
* Pass in bundle_coin_spends and max_cost instead of InternalMempoolItem so that it remains internal.
* Add a test for the scenario where we pick a sub-optimal aggregation by deduplicating on solution A, seen in a relatively higher FPC item, then receiving better cost saving items on solution B after that one.
* Add some end-to-end tests for identical spend aggregation.
* Add a comment documenting why eligible_coin_spends map is ephemeral.
* Make conflicting_items a list instead of a set, since we only iterate over it and don't rely on set semantics.
This simplifies using can_replace() and also prepares ground to simplify check_removals() next.
* Optimize check_removals by checking for mempool conflicts all at once.
Use "item" instead of "spend" in the names of mempool's item functions.
This is consistent with mempool's internal items map (_items) and _row_to_item, as well as the return types.
* remove unused run_generator_mempool() function
* move run_generator_unsafe() into the tests, which is the only place it's used
* remove (somewhat meaningless) setup_generator_args() and create_generator_args()
* remove unused GENERATOR_MOD in mempool_check_conditions.py
* remove redundant get_generator() function
* transition analyze-chain.py to use run_block_generator() and drop dependency on GENERATOR_MOD
* fixup type hints in test_rom.py
* fixup type hints in test_compression.py
* fixup type hints in test_generator_types.py
* limit the mempool size used for transactions with short expiration times. This ensures that transactions expiring within the next 15 minutes may not take up more than one block's worth of cost in the mempool
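A minimal sketch of this policy; the function name, constants, and signature are hypothetical, intended only to illustrate the rule that short-lived transactions share a one-block cost budget:

```python
# Hypothetical sketch: transactions expiring within ~15 minutes may not
# collectively occupy more than one block's worth of cost in the mempool.
MAX_BLOCK_COST = 11_000_000_000  # illustrative block cost limit
SHORT_EXPIRY_WINDOW = 15 * 60    # seconds

def fits_short_expiry_budget(
    item_cost: int, item_expiry: int, now: int, short_expiry_cost_in_pool: int
) -> bool:
    """Return True if the item may be admitted under the short-expiry cap."""
    if item_expiry - now > SHORT_EXPIRY_WINDOW:
        return True  # long-lived items are not subject to this cap
    return short_expiry_cost_in_pool + item_cost <= MAX_BLOCK_COST

# A long-lived item is exempt even when the short-expiry bucket is full:
assert fits_short_expiry_budget(
    item_cost=1, item_expiry=10_000, now=0,
    short_expiry_cost_in_pool=MAX_BLOCK_COST,
)
# A short-lived item is rejected once the bucket is full:
assert not fits_short_expiry_budget(
    item_cost=1, item_expiry=500, now=0,
    short_expiry_cost_in_pool=MAX_BLOCK_COST,
)
```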
### Purpose:
Please weigh in on whether this should be applied to the release or
retargeted to main.
The `finally:` clause will already result in `None` being put into the
queue. I expect that the other `None` pushed before the `return` is
unnecessarily doubling this up. I don't know if the double `None` has
any negative effect.
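A minimal reproduction of the pattern being described (names are illustrative): a worker that pushes a `None` sentinel before `return` and again in a `finally:` block ends up enqueueing the sentinel twice, since the `finally:` clause runs even on the early return.

```python
# Sketch of the double-sentinel pattern: the finally clause always runs,
# so the explicit put(None) before the return duplicates the sentinel.
import queue

def worker(q: "queue.Queue[object]") -> None:
    try:
        q.put("result")
        q.put(None)  # redundant: the finally clause below will do this too
        return
    finally:
        q.put(None)  # always runs, even after the early return

q: "queue.Queue[object]" = queue.Queue()
worker(q)
items = [q.get_nowait() for _ in range(q.qsize())]
assert items == ["result", None, None]  # the sentinel is doubled
```

Whether the consumer tolerates the second `None` depends on how it treats the sentinel; a consumer that stops at the first `None` would simply leave the duplicate in the queue.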
* optimize at-capacity check in mempool.add_to_pool() by only computing the total cost once and removing all items in a single call
* use sqlite logic
* add test for Mempool.add_to_pool() when it's full
* prepare mempool spends_by_feerate test for more test cases, move it to test_mempool.py (since it's testing the Mempool class)
* use insertion order as tie-breaker in mempool
* Fix the mempool fee per cost calculation.
It's currently performing integer division instead of float division, resulting in incorrect sorting of mempool items by fee rate: items with an FPC of x.y all get treated as if they had an FPC of x.0.
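The bug can be demonstrated in two lines; the values below are illustrative:

```python
# Integer division collapses every item with the same integer part of
# fee-per-cost to the same priority, so an item paying 2.9 mojos per unit
# of cost sorts equal to one paying 2.0.
fee, cost = 29, 10

buggy_fpc = fee // cost  # fractional part lost
fixed_fpc = fee / cost   # preserves the fraction used for sorting

assert buggy_fpc == 2
assert fixed_fpc == 2.9
```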
* Add unit tests for mempool's spends_by_feerate.
* enable soft-fork2
* add blockchain (consensus) test for time-lock conditions (non-ephemeral spend)
* introduce new soft-fork rule to compare ASSERT_SECONDS_* conditions against the previous transaction block's timestamp (to be consistent with ASSERT_HEIGHT_* conditions)
* bump chia_rs. This updates the mempool rules to disallow relative height- and time conditions on ephemeral coin spends
* implement assert_before in mempool_check_time_locks. Extend ephemeral coin test in blockchain with assert_before conditions
* implement support for assert_before conditions in compute_assert_height()
* support assert-before in mempool
* add timelock rule
* address review comments
* mempool min fee increase is a constant
* make can_replace() a free function rather than a member of mempool. It doesn't need to be a member, and a free function is easier to test
* simplify can_replace by passing in MempoolItem
* slightly simplify handling of conflicting mempool items in mempool_manager, to avoid double lookups
* simplify can_replace() by just passing removal_names instead of the whole dict
* add unit test for can_replace()
* fix bug in make_test_conds() test utility
* bump chia_rs to 0.2.4, which preserves assert_seconds_relative 0 in parsing conditions. This allows for the 1.8.0 soft-fork to make the existing time-lock conditions stricter, > instead of >=. This is to match the existing ASSERT_HEIGHT_RELATIVE, which already is >
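The semantics of the stricter rule can be sketched as follows (function and parameter names are hypothetical; only the `>` vs. `>=` comparison is taken from the change above):

```python
# Post soft-fork, ASSERT_SECONDS_RELATIVE only passes once the elapsed time
# strictly exceeds the required amount, matching ASSERT_HEIGHT_RELATIVE.
def seconds_relative_ok(elapsed: int, required: int, soft_fork2: bool) -> bool:
    if soft_fork2:
        return elapsed > required   # strict comparison after the soft-fork
    return elapsed >= required      # pre-soft-fork behavior

# Exactly meeting the bound passes before the soft-fork, fails after it:
assert seconds_relative_ok(100, 100, soft_fork2=False)
assert not seconds_relative_ok(100, 100, soft_fork2=True)
assert seconds_relative_ok(101, 100, soft_fork2=True)
```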
* fixup separating ENABLE_ASSERT_BEFORE from MEMPOOL_MODE
* Use a low value for SOFT_FORK2_HEIGHT during tests and cover the case before soft-fork2
---------
Co-authored-by: Adam Kelly <338792+aqk@users.noreply.github.com>
* bump chia_rs to version 0.2.3
* add new error codes for assert_my_birth_*
* add new condition codes for ASSERT_MY_BIRTH_*
* add logic for ASSERT_MY_BIRTH_* conditions
* implement Mempool using an in-memory sqlite database
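A hedged sketch of the idea (the table layout and column names here are hypothetical, not the actual schema): backing the mempool with an in-memory sqlite database turns fee-rate ordering into an indexed query instead of hand-maintained data structures.

```python
# Illustrative in-memory sqlite mempool: ordering by fee-per-cost with
# insertion order (seq) as tie-breaker becomes a single indexed query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tx(name TEXT PRIMARY KEY, cost INTEGER, fee INTEGER, "
    "fee_per_cost REAL, seq INTEGER)"
)
conn.execute("CREATE INDEX fpc_idx ON tx(fee_per_cost DESC, seq ASC)")

rows = [("a", 10, 29, 2.9, 0), ("b", 10, 20, 2.0, 1), ("c", 10, 29, 2.9, 2)]
conn.executemany("INSERT INTO tx VALUES(?, ?, ?, ?, ?)", rows)

# Highest fee-per-cost first; insertion order breaks ties between a and c.
ordered = [
    name
    for (name,) in conn.execute(
        "SELECT name FROM tx ORDER BY fee_per_cost DESC, seq ASC"
    )
]
assert ordered == ["a", "c", "b"]
```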
* remove_from_pool with empty list is a no-op
* the order is not important in get_coin_records_by_puzzle_hash() or get_block_spends()
* use format string in log statement
* make MempoolItem not streamable, to improve performance of creating objects
* use shorter names in test-ephemeral-coin test parameters
* use shorter names in TestConditions to make the test easier to read
* use shorter names in test_blockchain test_ephemeral_timelock, to make the test easier to read
* extend test_ephemeral_timelock to cover soft-fork 2
* extend test_conditions to cover soft-fork 2
* full_node: Rename `FullNode.respond_block` to `FullNode.add_block`
The name `respond_block` is confusing here: there is already a full node
API method called `respond_block`, but this method in the `FullNode`
class tries to add a block to the full node's chain, so `add_block` is
the better choice.
Also change the `respond_block: full_node_protocol.RespondBlock`
parameter to `block: FullBlock`, which lets us get rid of many
`full_node_protocol.RespondBlock` wrappings.
* `FullNode.receive_block_batch` -> `FullNode.add_block_batch`
* `receive_unfinished_block` -> `add_unfinished_block` + change parameter
* `FullNode.respond_transaction` -> `FullNode.add_transaction`
* `FullNode.respond_end_of_sub_slot` -> `FullNode.add_end_of_sub_slot`
* `FullNode.respond_compact_vdf` -> `FullNode.add_compact_vdf`
* `respond_compact_proof_of_time` -> `add_compact_proof_of_time`
* `respond_transaction_semaphore` -> `add_transaction_semaphore`
* Introduce BlockRecordProtocol as a subset of BlockRecord that the mempool manager uses for peak.
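A sketch of the protocol idea (the field set shown is an assumption): a `typing.Protocol` captures just the `BlockRecord` attributes the mempool manager reads for its peak, so benchmarks and tests can pass lightweight stand-ins like `BenchBlockRecord` without constructing a full `BlockRecord`.

```python
# Structural typing lets any object with the right attributes serve as a
# peak; the protocol names only what the mempool manager actually reads.
from dataclasses import dataclass
from typing import Optional, Protocol

class BlockRecordProtocol(Protocol):
    @property
    def height(self) -> int: ...
    @property
    def timestamp(self) -> Optional[int]: ...

@dataclass(frozen=True)
class BenchBlockRecord:
    """Minimal stand-in that structurally satisfies the protocol."""
    height: int
    timestamp: Optional[int]

def describe_peak(peak: BlockRecordProtocol) -> str:
    return f"height={peak.height} timestamp={peak.timestamp}"

assert describe_peak(BenchBlockRecord(height=100, timestamp=123456)) == \
    "height=100 timestamp=123456"
```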
* Create BenchBlockRecord and use it for benchmarks/mempool.py.
* PR #14611 didn't land yet (keep 3.7 support for now).
* We don't need this guidance anymore.
* fix test asserts to not require the same object, just the same value
* make Mempool's implementation private and give it a public interface
* fixup test that used to count fee *levels* but now counts transactions