Pools mainnet (#7047)

* added clarifying comments

* WIP test

* added WIP test

* Refine genesis challenge. Remove unnecessary pool_puzzle function

* Sign spend. Remove create_member_spend. Rename state transition function to create_travel_spend

* Rename create_member_spend to create_travel_spend

* Add singleton id logging

* Enhance logging for debugging

* renaming

* rephrase inside the puzzle

* fixed signing and added some support functions

* Fix issue with announcement

* Progress spending the singleton

* Fix arguments to pool_state_to_inner_puzzle call

* Fix arguments to pool_state_to_inner_puzzle

* Improve error message when wallet is not running

* Remove misleading message about a missing wallet process when the problem is the farmer, by making the poolnft command error out earlier

* Fix parent coin info bug

* Multiple state transitions in one block

* Lint

* Remove assert

* Fix incorrect p2_singleton_ph calculation (thanks nil00)

* Update waiting room puzzle to accept genesis_challenge

* Update calls to create_waiting

* Go to waiting state from committed state

* Augment debug_spend_bundle

* fix 2 bugs in wallet

* Fix lint

* fix bad_agg_sig bug

* Tests and lint

* remove breakpoint

* fix clvm tests for new hexes and hashes

* Fixed a bug in the coin store that was probably from merging. (#6577)

* Fixed a bug in the coin store that was probably from merging.

* The exception doesn't need to be there

* CI Lint fix

* Added lifecycle tests for pooling drivers (#6610)

* Ms.poolabsorb (#6615)

* Support for absorbing rewards in pools (untested)

* Style improvements

* More work on absorb

* Revert default root and remove log

* Revert small plots

* Use real sub slot iters

* Update types

* debug1

* Fix bugs

* fix output of agg sig log messages

* Make fewer calls to pw_status in test

* remove old comment

* logging and state management

* logging

* small bug fix & rename for accuracy

* format

* Fix types for uncurry function

* lint

* Update test to use exceptions

* Change assumptions about self-pooling in lifecycle test

* Install types for mypy

* Revert "Install types for mypy"

This reverts commit a82dcb712a.

* install types for mypy

* install types for mypy

* More keys

* Remove flags requiring interactive prompts

* Change initial spend to waiting room if self-pooling

* lint

* lint

* linting

* Refactor test

* Use correct value in log message

* update p2_singleton_or_delayed_puzhash

* initial version of pool wallet with p2_singleton_or_delay

* run black formatting

* fix rebase wonkiness

* fix announcement code in p2_singleton_or_delayed

* removed redundant defaulting
standardised hexstr handling

* lint fixes

* Fixed pool lifecycle tests to current standards, but discovered tests are not validating signatures

* Signatures now validate in this test, although the test still does not check them.

* Lint fix

* Fixed plotnft show and linting errors

* fixed failing farmer/harvester rpc test

* lint fix

* Commenting out some outdated tests

* Updated test coverage

* lint fix

* Some minor P2singleton improvements (#6325)

* Improve some debugging tools.

* Tidy pool clvm.

* Use `SINGLETON_STRUCT`. Remove unused `and` macro.

* Use better name `SINGLETON_MOD_HASH`.

* Finish lifecycle test suite.

* Fixing for merge with chia-blockchain/pools_delayed_puzzle (#72)

Co-authored-by: Matt Hauff <quexington@gmail.com>

* Default delay time was being set incorrectly

* Extracted get_delayed_puz_info_from_launcher_spend to driver code

* Ms.taproot plot2 (#6692)

* Start work on adding taproot to new plots

* Fix issue in block_tools

* new test-cache

* Lint

* DID fixes

* Fix other tests

* Python black

* Fix full node store test

* Ensure block index <= 128 bits.

* fix test_pool_config test

* fix comments in pool_config and in chialisp files

* self_pool -> pool -> self_pool

* Implement leaving pools
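The self_pool -> pool -> self_pool flow above can be sketched as a small state machine. The state names follow the commit messages; the enum values and the exact transition set here are assumptions for illustration, not the wallet's actual tables:

```python
from enum import IntEnum

class PoolSingletonState(IntEnum):
    # Names modeled on the pool wallet states; the numeric values are assumed.
    SELF_POOLING = 1
    LEAVING_POOL = 2
    FARMING_TO_POOL = 3

# Leaving a pool is a two-step "travel": first commit to leaving, then, after
# the relative lock height has passed, land in the target state.
VALID_TRANSITIONS = {
    PoolSingletonState.SELF_POOLING: {PoolSingletonState.FARMING_TO_POOL},
    PoolSingletonState.FARMING_TO_POOL: {PoolSingletonState.LEAVING_POOL},
    PoolSingletonState.LEAVING_POOL: {
        PoolSingletonState.SELF_POOLING,
        PoolSingletonState.FARMING_TO_POOL,
    },
}

def can_transition(src: PoolSingletonState, dst: PoolSingletonState) -> bool:
    """Return True if the singleton may travel from src to dst."""
    return dst in VALID_TRANSITIONS.get(src, set())
```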

* Fix conflicts with main via mini-rebase

* Fixup rebase mistakes

* Bring in Mariano's node discovery changes from pools.dev

* Fix adapters - Thanks Richard

* build tests

* Add pools tests

* Disable DID tests

* farmer|protocol: Some renaming related to the pool protocol

* farmer: Use `None` instead of `{}` and add local `pool_state`

* protocol|farmer: Introduce and use `PoolErrorCode`

* rename: `pool_payout_instructions` -> `payout_instructions`

* refactor: `AuthenticationKeyInfo` -> `authentication_key`

* refactor: Move `launcher_id` up

* rename: Some variable name changes

* rename: `points_balance` -> `points`

* format: Squash aggregation into one line

* farmer: Make `update_pool_state` public

* farmer: Print traceback if `update_pool_state` fails

* farmer: Periodically call `GET /pool_info`, add `_pool_get_pool_info`

* farmer: Add `authentication_token_timeout` to `pool_state`

Fetch it from `GET /pool_info`
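A rolling token derived from `authentication_token_timeout` might look like the following sketch; the minutes-based window and the integer division are assumptions about the scheme, not the protocol's definitive definition:

```python
def current_authentication_token(minutes_since_epoch: int, timeout: int) -> int:
    """Derive a token that stays constant within a `timeout`-minute window
    and changes when the window rolls over."""
    return minutes_since_epoch // timeout

# Any two timestamps inside the same window yield the same token, so the
# farmer and pool agree without exchanging clocks precisely.
```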

* protocol|farmer: Implement support for `GET|POST|PUT /farmer`

* farmer: Make use of `GET|POST /farmer`

- To make the farmer known by the pool
- To update local balance/difficulty from the pool periodically

* farmer|protocol: Adjust `POST /partial` to match the latest spec

* farmer: Hash messages before signing

* pools: Drop unused code

* farmer: Fix aggregation of partial signatures

* farmer: support self pooling, don't pool if url==""

* wallet: return uint64 for delay time, instead of bytes

* pool: add error code for delay time too short

* farmer: cleaner logging when no connection to pool

* farmer: add harvester node id to pool protocol

* Rename method (test fix) and lint fix

* Change errors to warnings (pool communication)

* Remove pool callbacks on a reorg

* farmer: Continue earlier when no pool URL is provided

* farmer: Print method in log

* farmer: Handle exceptions for all pool endpoint calls

* farmer|protocol: Keep track of failed requests to the pool

* farmer: Fix typo which caused issue with pooling

* wallet: simplify solution_to_extra_data

* tests: Comment out DID tests which are not working yet

* Remove DID Wallet test workflows

* Return launcher_id when creating Pool Wallet

* Name p2_singleton_puzzle_hash correctly

* Improve 'test_singleton_lifecycle_fast.py'.

* Make test more robust in the face of asynchronous adversity

* Add commandline cmds for joining and leaving pools

* Fix poolnft leave params

* Remove redundant assignment brought in from main

* Remove unneeded code

* Style and parsimony

* pool_puzzles: Fix an incorrect check and bad naming

* format: Fix linting

* format: Remove log and rename variable

* pool-wallet: Fix self pooling with multiple pool wallets. Don't remove interested puzzle_hash

* gui: Use pools branch

* format: fix lint

* Remove unused code, improve initial_pool_state_from_dict

* farmer: Instantly update the config, when config file changes

* format: Speed up loading of the authentication key

* logging: less annoying logging

* Test pool NFT creation directly to pool

* Test switching pools without self-farming in between

* lint

* pooling: Use integer for protocol version (#6797)

* pooling: Use integer for protocol version

* pooling: Fix import

* Update GUI commit

* Ms.login2 (#6804)

* pooling: Login WIP

* pooling: add RPC for get_link

* dont use timeout

* pooling: rename to get_login_link

* format: remove logging

* Fix SES test

* Required cli argument

Co-authored-by: almog <almogdepaz@gmail.com>

* farmer|protocols: Rename `current_difficulty` for `POST /partial` (#6807)

* Fix to farm summary

* Use target_puzzlehash param name in RPC call

* Pool test coverage (#6782)

* Improvement in test coverage and typing

* Added an extra absorb to the pool lifecycle test (only works when merged with https://github.com/Chia-Network/chia-blockchain/pull/6733)

* Added new drivers for the p2_singleton puzzles

* Added new tests and test coverage for singletons

* organize pools testing directory

* black formatting

* black formatting in venv

* lint fix

* Update CI tests

* Fixing tests post rebase

* lint fix

* Minor readability fix

Co-authored-by: matt <matt@chia.net>

* farmer: Drop `target_puzzle_hash` from `GET /farmer` and `GET /login` (#6816)

* Allow creation of PlotNFTs in self-farming state

* gui: Fix install with more RAM (#6821)

* Allow implicit payout_address in self-pool state, improve error messages and param ergonomics

* print units in non-standard wallets correctly

* Fix farmer import

* Make syncing message in CLI more intuitive like the GUI

* Fix linting and show header hash instead of height

* gui: Update to 725071236eff8c81d5b267dc8eb69d7e03f3df8c

* Revert "Merge"

This reverts commit 23a1e688c5, reversing
changes made to a850246c6f.

* Revert "Revert "Merge""

This reverts commit 680331859f.

* Treat tx_record as Dict. Refactor tx submission

* Also add passed-in coin spends when processing new blocks in reconsider_peak

* Test utilities had moved

* Fix import of moved block_tools

* Potentially fix yaml

* Previously didn't take the right part of this change

* Add -y flag, improve commandline plotnft handling

* Fix typo

* Add -y flag to plotnft create

* pool_wallet: Restore from DB properly

* wallet: ignore bad pool configs

* Reduce memory

* pool_wallet: Add claim command

* pool_wallet: Set transaction records to confirmed

* wallet: Fix bug in transaction cache

* Formatting and remove log

* pool_wallet: CLI balance and improvements to plotnft_funcs.py

* pool_wallet: Simplify, and fix issue with double submission

* pool_wallet: Fix tests

* pool_wallet: Don't allow switching before relative lock height
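The relative-lock-height guard described above reduces to a simple height comparison; `singleton_confirmed_height` is a hypothetical name for the height at which the singleton was last spent:

```python
def can_switch_pools(current_height: int,
                     singleton_confirmed_height: int,
                     relative_lock_height: int) -> bool:
    """A spend leaving a pool is only valid once relative_lock_height
    blocks have passed since the singleton's last spend."""
    return current_height - singleton_confirmed_height >= relative_lock_height
```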

* update gui

* change to 3000 mem

* Correct sense of -y flag for self-pooling

* cli: Display payout instructions for pool

* pool_wallet: Don't create massive transactions

* cli: Improvements to plotnft

* pool_wallet: Get correct pool state

* pool_wallet: Use last transaction block to prevent condition failure

* Add block height for current state

* Add outstanding unconfirmed transactions to pw_status

* Refine command line plotnft show pending transactions

* Fix tests by using the correct output from pw_status

* Try to fix windows build

* Print expected leave height

* label pool urls

* pool_wallet: Don't include pool 1.75 rewards in total

* wallet: Add RPC and CLI for deleting unconfirmed transactions for a wallet
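Conceptually, deleting unconfirmed transactions for one wallet is a filter over the transaction records; this sketch uses plain dicts rather than the wallet's actual record type:

```python
from typing import Dict, List

def delete_unconfirmed(tx_records: List[Dict], wallet_id: int) -> List[Dict]:
    """Drop unconfirmed records belonging to wallet_id; keep everything else."""
    return [
        tx for tx in tx_records
        if tx["confirmed"] or tx["wallet_id"] != wallet_id
    ]
```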

* pool_wallet: If farming to a pool, show 0 balance in wallet

* pool_wallet: Show error message if invalid state, in CLI

* pool_wallet: Don't allow switching if there are pending unconfirmed transactions

* tests: Clean up pool test logging

* tests: Fix lint

* Changed the pool innerpuzzes (#6802)

* overload solutions for pool_innerpuz parameters

* Fix tests for reduced size puzzles

* deleted messy deprecated test

* Fix lint.

* fix bug where spend types were the wrong way around

* merge with richard's lint fix

* fix wallet bug
remove unnecessary signature
add defun-inline for clarity

* Swap to defun for absorb case
Use cons box for member innerpuz solution

* fix if statement for cons box p1

* remove unnecessary solution arg

* quick innerpuz fix to make tests pass

* Switch to key-value pairs
Undo cons box solution in pool_member inner puzzle

* fix singleton lifecycle test

* added some comments to clarify the meaning of "ps"

* lint fix

* reduce label size, search for label when reconstructing from solution

* no need to keep looping if `p` found

* lint fix

* Removed unnecessary defun-inline and changed hyphens to underscores

* Changed created_coin_value_or_0 to an inline function

* Changed morph_condition to an inline function

* Added a comment for odd_cons_m113

* Rename output_odd and odd_output_found

* Add inline functions to document the lineage proof values

* Stage two rewrite

* Added an ASSERT_MY_AMOUNT to p2_singleton_or_delayed

* Extract truth functionality to singleton_truths.clib

* Fix tree hashes

* Changed truths to a struct rather than a list.

* fix test_singletons
update did_innerpuz

* recompile did_innerpuz

* fix a log error

* Renamed variable and factored out code per @richardkiss

* lint fix

* switch launcher extra_data to key_value pairs

* fix parsing of new format of extra_data in launcher solution

* fix broken test for new launcher solution format
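Reconstructing state from the launcher's key-value `extra_data` (and stopping once the `p` label is found, as the commits above describe) can be sketched as follows; the pair representation and the one-byte `b"p"` key are assumptions for illustration:

```python
from typing import List, Optional, Tuple

def find_pool_state(extra_data: List[Tuple[bytes, bytes]]) -> Optional[bytes]:
    """Scan launcher extra_data (label, value) pairs for the pool-state entry
    and stop at the first match; return None if no such label exists."""
    for key, value in extra_data:
        if key == b"p":
            return value
    return None
```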

* remove bare raise

Co-authored-by: Richard Kiss <him@richardkiss.com>
Co-authored-by: Matt Hauff <quexington@gmail.com>

* Also add passed-in coin spends when processing new blocks in reconsider_peak (#6898)

Co-authored-by: Adam Kelly <aqk>

* Moved debug_spend_bundle and added it to the SpendBundle object (#6840)

* Moved debug_spend_bundle and added it to the SpendBundle object

* Remove problematic typing

* Add testnet config

* wallet: Memory would get corrupted if there was an error (#6902)

* wallet: Memory would get corrupted if there was an error

* wallet: Use block_record

* wallet: Add records in a full fork too

* wallet: remove unnecessary arguments in CC and DID

* add to cache, revert if transaction fails

Co-authored-by: Yostra <straya@chia.net>

* Improve comment

* pool_wallet: Fix driver bug

* wallet: Fix memory corruption

* gui: Update to latest

* Increase memory size

* tests: Add test for absorbing from pool

* small fix in solution_to_extra_data

* Fixed incorrect function name

* pooling: Fix EOS handling in full node

* [pools.testnet9]add post /partial and /farmer header (#6957)

* Update farmer.py

add post header

* Update farmer_api.py

add post header

* Update chia/farmer/farmer.py

Co-authored-by: dustinface <35775977+xdustinface@users.noreply.github.com>

* Update chia/farmer/farmer_api.py

Co-authored-by: dustinface <35775977+xdustinface@users.noreply.github.com>

Co-authored-by: Mariano Sorgente <3069354+mariano54@users.noreply.github.com>
Co-authored-by: dustinface <35775977+xdustinface@users.noreply.github.com>

* Fix lint and cleanup farmer.py

* farmer: Fix linting issues (#7010)

* Handle the case of incorrectly formatted PoolState data returned from inner singleton

* wallet: Resubmit transaction if not successful, rename to new_transaction_block_callback (#7008)

* Fix lint in pool_puzzles

* pooling: Fix owner private key lookup, and remove unnecessary argument

* pooling: Clear target state on `delete_unconfirmed_transactions`

* Lint

* Fix non-deterministic test

* Slight cleanup clvm driver code (#7028)

* Return None when a deserialized CLVM structure does not fit the expected format of var-value pair for singleton data

* lint

Co-authored-by: Adam Kelly <aqk>

* Revert "Add testnet config"

This reverts commit 9812442724.

Co-authored-by: matt <matt@chia.net>
Co-authored-by: Adam Kelly <aqk@aqk.im>
Co-authored-by: Mariano Sorgente <sorgente711@gmail.com>
Co-authored-by: Matt Hauff <quexington@gmail.com>
Co-authored-by: Mariano Sorgente <3069354+mariano54@users.noreply.github.com>
Co-authored-by: Adam <aqk@Adams-MacBook-Pro.local>
Co-authored-by: Adam Kelly <aqk>
Co-authored-by: Richard Kiss <him@richardkiss.com>
Co-authored-by: xdustinface <xdustinfacex@gmail.com>
Co-authored-by: almog <almogdepaz@gmail.com>
Co-authored-by: dustinface <35775977+xdustinface@users.noreply.github.com>
Co-authored-by: Earle Lowe <e.lowe@chia.net>
Co-authored-by: arvidn <arvid@libtorrent.org>
Co-authored-by: willi123yao <willi123yao@gmail.com>
Co-authored-by: arty <art.yerkes@gmail.com>
Co-authored-by: William Blanke <wjb98672@gmail.com>
Co-authored-by: matt-o-how <48453825+matt-o-how@users.noreply.github.com>
Co-authored-by: Chris Marslender <chrismarslender@gmail.com>
Co-authored-by: Yostra <straya@chia.net>
Co-authored-by: DouCrazy <43004977+lpf763827726@users.noreply.github.com>
Commit 89f7a4b3d6 (parent aba34c1ceb), by Adam Kelly, committed 2021-06-29 14:21:25 -07:00 via GitHub
169 changed files with 9165 additions and 780 deletions

@@ -135,17 +135,17 @@ jobs:
     - name: Rename Artifact
       run: |
         ls
         mv ${{ github.workspace }}/build_scripts/final_installer/Chia-${{ steps.version_number.outputs.CHIA_INSTALLER_VERSION }}.dmg ${{ github.workspace }}/build_scripts/final_installer/Chia-Catalina-${{ steps.version_number.outputs.CHIA_INSTALLER_VERSION }}.dmg
     - name: Create Checksums
       run: |
         ls
         shasum -a 256 ${{ github.workspace }}/build_scripts/final_installer/Chia-Catalina-${{ steps.version_number.outputs.CHIA_INSTALLER_VERSION }}.dmg > ${{ github.workspace }}/build_scripts/final_installer/Chia-Catalina-${{ steps.version_number.outputs.CHIA_INSTALLER_VERSION }}.dmg.sha256
     - name: Upload to s3
       run: |
         aws s3 cp ${{ github.workspace }}/build_scripts/final_installer/Chia-Catalina-${{ steps.version_number.outputs.CHIA_INSTALLER_VERSION }}.dmg s3://download-chia-net/builds/
     - name: Install py3createtorrent
       if: startsWith(github.ref, 'refs/tags/')

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -80,4 +80,4 @@ jobs:
     - name: Test clvm code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/clvm/test_chialisp_deserialization.py tests/clvm/test_clvm_compilation.py tests/clvm/test_puzzles.py tests/clvm/test_serialized_program.py -s -v --durations 0 -n auto
+        ./venv/bin/py.test tests/clvm/test_chialisp_deserialization.py tests/clvm/test_clvm_compilation.py tests/clvm/test_puzzles.py tests/clvm/test_serialized_program.py tests/clvm/test_singletons.py -s -v --durations 0 -n auto

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -0,0 +1,93 @@
name: MacOS core-daemon Tests

on:
  push:
    branches:
      - main
    tags:
      - '**'
  pull_request:
    branches:
      - '**'

jobs:
  build:
    name: MacOS core-daemon Tests
    runs-on: ${{ matrix.os }}
    timeout-minutes: 30
    strategy:
      fail-fast: false
      max-parallel: 4
      matrix:
        python-version: [3.8, 3.9]
        os: [macOS-latest]

    steps:
      - name: Cancel previous runs on the same branch
        if: ${{ github.ref != 'refs/heads/main' }}
        uses: styfle/cancel-workflow-action@0.9.0
        with:
          access_token: ${{ github.token }}

      - name: Checkout Code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Setup Python environment
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: Get pip cache dir
        id: pip-cache
        run: |
          echo "::set-output name=dir::$(pip cache dir)"

      - name: Cache pip
        uses: actions/cache@v2.1.6
        with:
          # Note that new runners may break this https://github.com/actions/cache/issues/292
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Checkout test blocks and plots
        uses: actions/checkout@v2
        with:
          repository: 'Chia-Network/test-cache'
          path: '.chia'
          ref: '0.27.0'
          fetch-depth: 1

      - name: Link home directory
        run: |
          cd $HOME
          ln -s $GITHUB_WORKSPACE/.chia
          echo "$HOME/.chia"
          ls -al $HOME/.chia

      - name: Run install script
        env:
          INSTALL_PYTHON_VERSION: ${{ matrix.python-version }}
          BUILD_VDF_CLIENT: "N"
        run: |
          brew install boost
          sh install.sh

      - name: Install timelord
        run: |
          . ./activate
          sh install-timelord.sh
          ./vdf_bench square_asm 400000

      - name: Install developer requirements
        run: |
          . ./activate
          venv/bin/python -m pip install pytest pytest-asyncio pytest-xdist

      - name: Test core-daemon code with pytest
        run: |
          . ./activate
          ./venv/bin/py.test tests/core/daemon/test_daemon.py -s -v --durations 0

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory
@@ -90,4 +90,4 @@ jobs:
     - name: Test core-full_node code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/core/full_node/test_address_manager.py tests/core/full_node/test_block_store.py tests/core/full_node/test_coin_store.py tests/core/full_node/test_full_node.py tests/core/full_node/test_full_node_store.py tests/core/full_node/test_initial_freeze.py tests/core/full_node/test_mempool.py tests/core/full_node/test_mempool_performance.py tests/core/full_node/test_node_load.py tests/core/full_node/test_sync_store.py tests/core/full_node/test_transactions.py -s -v --durations 0
+        ./venv/bin/py.test tests/core/full_node/test_address_manager.py tests/core/full_node/test_block_store.py tests/core/full_node/test_coin_store.py tests/core/full_node/test_conditions.py tests/core/full_node/test_full_node.py tests/core/full_node/test_full_node_store.py tests/core/full_node/test_initial_freeze.py tests/core/full_node/test_mempool.py tests/core/full_node/test_mempool_performance.py tests/core/full_node/test_node_load.py tests/core/full_node/test_performance.py tests/core/full_node/test_sync_store.py tests/core/full_node/test_transactions.py -s -v --durations 0

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory
@@ -90,4 +90,4 @@ jobs:
     - name: Test core-types code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/core/types/test_proof_of_space.py -s -v --durations 0
+        ./venv/bin/py.test tests/core/types/test_coin.py tests/core/types/test_proof_of_space.py -s -v --durations 0

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory
@@ -90,4 +90,4 @@ jobs:
     - name: Test core-util code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/core/util/test_keychain.py tests/core/util/test_significant_bits.py tests/core/util/test_streamable.py tests/core/util/test_type_checking.py -s -v --durations 0
+        ./venv/bin/py.test tests/core/util/test_keychain.py tests/core/util/test_lru_cache.py tests/core/util/test_significant_bits.py tests/core/util/test_streamable.py tests/core/util/test_type_checking.py -s -v --durations 0

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory
@@ -90,4 +90,4 @@ jobs:
     - name: Test generator code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/generator/test_compression.py tests/generator/test_generator_types.py tests/generator/test_scan.py -s -v --durations 0
+        ./venv/bin/py.test tests/generator/test_compression.py tests/generator/test_generator_types.py tests/generator/test_rom.py tests/generator/test_scan.py -s -v --durations 0

@@ -0,0 +1,93 @@
name: MacOS pools Tests

on:
  push:
    branches:
      - main
    tags:
      - '**'
  pull_request:
    branches:
      - '**'

jobs:
  build:
    name: MacOS pools Tests
    runs-on: ${{ matrix.os }}
    timeout-minutes: 30
    strategy:
      fail-fast: false
      max-parallel: 4
      matrix:
        python-version: [3.8, 3.9]
        os: [macOS-latest]

    steps:
      - name: Cancel previous runs on the same branch
        if: ${{ github.ref != 'refs/heads/main' }}
        uses: styfle/cancel-workflow-action@0.9.0
        with:
          access_token: ${{ github.token }}

      - name: Checkout Code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Setup Python environment
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: Get pip cache dir
        id: pip-cache
        run: |
          echo "::set-output name=dir::$(pip cache dir)"

      - name: Cache pip
        uses: actions/cache@v2.1.6
        with:
          # Note that new runners may break this https://github.com/actions/cache/issues/292
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Checkout test blocks and plots
        uses: actions/checkout@v2
        with:
          repository: 'Chia-Network/test-cache'
          path: '.chia'
          ref: '0.27.0'
          fetch-depth: 1

      - name: Link home directory
        run: |
          cd $HOME
          ln -s $GITHUB_WORKSPACE/.chia
          echo "$HOME/.chia"
          ls -al $HOME/.chia

      - name: Run install script
        env:
          INSTALL_PYTHON_VERSION: ${{ matrix.python-version }}
          BUILD_VDF_CLIENT: "N"
        run: |
          brew install boost
          sh install.sh

      - name: Install timelord
        run: |
          . ./activate
          sh install-timelord.sh
          ./vdf_bench square_asm 400000

      - name: Install developer requirements
        run: |
          . ./activate
          venv/bin/python -m pip install pytest pytest-asyncio pytest-xdist

      - name: Test pools code with pytest
        run: |
          . ./activate
          ./venv/bin/py.test tests/pools/test_pool_config.py tests/pools/test_pool_puzzles_lifecycle.py tests/pools/test_pool_rpc.py tests/pools/test_pool_wallet.py tests/pools/test_wallet_pool_store.py -s -v --durations 0

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -58,7 +58,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory
@@ -90,4 +90,4 @@ jobs:
     - name: Test wallet code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/wallet/test_backup.py tests/wallet/test_bech32m.py tests/wallet/test_chialisp.py tests/wallet/test_puzzle_store.py tests/wallet/test_singleton.py tests/wallet/test_taproot.py tests/wallet/test_wallet.py tests/wallet/test_wallet_store.py -s -v --durations 0
+        ./venv/bin/py.test tests/wallet/test_backup.py tests/wallet/test_bech32m.py tests/wallet/test_chialisp.py tests/wallet/test_puzzle_store.py tests/wallet/test_singleton.py tests/wallet/test_singleton_lifecycle.py tests/wallet/test_singleton_lifecycle_fast.py tests/wallet/test_taproot.py tests/wallet/test_wallet.py tests/wallet/test_wallet_interested_store.py tests/wallet/test_wallet_store.py -s -v --durations 0

@@ -65,7 +65,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -92,4 +92,4 @@ jobs:
     - name: Test clvm code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/clvm/test_chialisp_deserialization.py tests/clvm/test_clvm_compilation.py tests/clvm/test_puzzles.py tests/clvm/test_serialized_program.py -s -v --durations 0 -n auto
+        ./venv/bin/py.test tests/clvm/test_chialisp_deserialization.py tests/clvm/test_clvm_compilation.py tests/clvm/test_puzzles.py tests/clvm/test_serialized_program.py tests/clvm/test_singletons.py -s -v --durations 0 -n auto

@@ -65,7 +65,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -0,0 +1,105 @@
name: Ubuntu core-daemon Test

on:
  push:
    branches:
      - main
    tags:
      - '**'
  pull_request:
    branches:
      - '**'

jobs:
  build:
    name: Ubuntu core-daemon Test
    runs-on: ${{ matrix.os }}
    timeout-minutes: 30
    strategy:
      fail-fast: false
      max-parallel: 4
      matrix:
        python-version: [3.7, 3.8, 3.9]
        os: [ubuntu-latest]

    steps:
      - name: Cancel previous runs on the same branch
        if: ${{ github.ref != 'refs/heads/main' }}
        uses: styfle/cancel-workflow-action@0.9.0
        with:
          access_token: ${{ github.token }}

      - name: Checkout Code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Setup Python environment
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: Cache npm
        uses: actions/cache@v2.1.6
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Get pip cache dir
        id: pip-cache
        run: |
          echo "::set-output name=dir::$(pip cache dir)"

      - name: Cache pip
        uses: actions/cache@v2.1.6
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
          restore-keys: |
            ${{ runner.os }}-pip-

      - name: Checkout test blocks and plots
        uses: actions/checkout@v2
        with:
          repository: 'Chia-Network/test-cache'
          path: '.chia'
          ref: '0.27.0'
          fetch-depth: 1

      - name: Link home directory
        run: |
          cd $HOME
          ln -s $GITHUB_WORKSPACE/.chia
          echo "$HOME/.chia"
          ls -al $HOME/.chia

      - name: Install ubuntu dependencies
        run: |
          sudo apt-get install software-properties-common
          sudo add-apt-repository ppa:deadsnakes/ppa
          sudo apt-get update
          sudo apt-get install python${{ matrix.python-version }}-venv python${{ matrix.python-version }}-distutils git -y

      - name: Run install script
        env:
          INSTALL_PYTHON_VERSION: ${{ matrix.python-version }}
        run: |
          sh install.sh

      - name: Install timelord
        run: |
          . ./activate
          sh install-timelord.sh
          ./vdf_bench square_asm 400000

      - name: Install developer requirements
        run: |
          . ./activate
          venv/bin/python -m pip install pytest pytest-asyncio pytest-xdist

      - name: Test core-daemon code with pytest
        run: |
          . ./activate
          ./venv/bin/py.test tests/core/daemon/test_daemon.py -s -v --durations 0

@@ -65,7 +65,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory

@@ -65,7 +65,7 @@ jobs:
       with:
         repository: 'Chia-Network/test-cache'
         path: '.chia'
-        ref: '0.26.0'
+        ref: '0.27.0'
         fetch-depth: 1
     - name: Link home directory
@@ -102,7 +102,7 @@ jobs:
     - name: Test core-full_node code with pytest
       run: |
         . ./activate
-        ./venv/bin/py.test tests/core/full_node/test_address_manager.py tests/core/full_node/test_block_store.py tests/core/full_node/test_coin_store.py tests/core/full_node/test_full_node.py tests/core/full_node/test_full_node_store.py tests/core/full_node/test_initial_freeze.py tests/core/full_node/test_mempool.py tests/core/full_node/test_mempool_performance.py tests/core/full_node/test_node_load.py tests/core/full_node/test_sync_store.py tests/core/full_node/test_transactions.py -s -v --durations 0
+        ./venv/bin/py.test tests/core/full_node/test_address_manager.py tests/core/full_node/test_block_store.py tests/core/full_node/test_coin_store.py tests/core/full_node/test_conditions.py tests/core/full_node/test_full_node.py tests/core/full_node/test_full_node_store.py tests/core/full_node/test_initial_freeze.py tests/core/full_node/test_mempool.py tests/core/full_node/test_mempool_performance.py tests/core/full_node/test_node_load.py tests/core/full_node/test_performance.py tests/core/full_node/test_sync_store.py tests/core/full_node/test_transactions.py -s -v --durations 0
     - name: Check resource usage
       run: |
         sqlite3 -readonly -separator " " .pymon "select item,cpu_usage,total_time,mem_usage from TEST_METRICS order by mem_usage desc;" >metrics.out


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory
@@ -102,4 +102,4 @@ jobs:
- name: Test core-types code with pytest
run: |
. ./activate
-./venv/bin/py.test tests/core/types/test_proof_of_space.py -s -v --durations 0
+./venv/bin/py.test tests/core/types/test_coin.py tests/core/types/test_proof_of_space.py -s -v --durations 0


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory
@@ -102,4 +102,4 @@ jobs:
- name: Test core-util code with pytest
run: |
. ./activate
-./venv/bin/py.test tests/core/util/test_keychain.py tests/core/util/test_significant_bits.py tests/core/util/test_streamable.py tests/core/util/test_type_checking.py -s -v --durations 0
+./venv/bin/py.test tests/core/util/test_keychain.py tests/core/util/test_lru_cache.py tests/core/util/test_significant_bits.py tests/core/util/test_streamable.py tests/core/util/test_type_checking.py -s -v --durations 0


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory
@@ -102,4 +102,4 @@ jobs:
- name: Test generator code with pytest
run: |
. ./activate
-./venv/bin/py.test tests/generator/test_compression.py tests/generator/test_generator_types.py tests/generator/test_scan.py -s -v --durations 0
+./venv/bin/py.test tests/generator/test_compression.py tests/generator/test_generator_types.py tests/generator/test_rom.py tests/generator/test_scan.py -s -v --durations 0


@@ -0,0 +1,105 @@
name: Ubuntu pools Test
on:
push:
branches:
- main
tags:
- '**'
pull_request:
branches:
- '**'
jobs:
build:
name: Ubuntu pools Test
runs-on: ${{ matrix.os }}
timeout-minutes: 30
strategy:
fail-fast: false
max-parallel: 4
matrix:
python-version: [3.7, 3.8, 3.9]
os: [ubuntu-latest]
steps:
- name: Cancel previous runs on the same branch
if: ${{ github.ref != 'refs/heads/main' }}
uses: styfle/cancel-workflow-action@0.9.0
with:
access_token: ${{ github.token }}
- name: Checkout Code
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Setup Python environment
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Cache npm
uses: actions/cache@v2.1.6
with:
path: ~/.npm
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Get pip cache dir
id: pip-cache
run: |
echo "::set-output name=dir::$(pip cache dir)"
- name: Cache pip
uses: actions/cache@v2.1.6
with:
path: ${{ steps.pip-cache.outputs.dir }}
key: ${{ runner.os }}-pip-${{ hashFiles('**/setup.py') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Checkout test blocks and plots
uses: actions/checkout@v2
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
ref: '0.27.0'
fetch-depth: 1
- name: Link home directory
run: |
cd $HOME
ln -s $GITHUB_WORKSPACE/.chia
echo "$HOME/.chia"
ls -al $HOME/.chia
- name: Install ubuntu dependencies
run: |
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python${{ matrix.python-version }}-venv python${{ matrix.python-version }}-distutils git -y
- name: Run install script
env:
INSTALL_PYTHON_VERSION: ${{ matrix.python-version }}
run: |
sh install.sh
- name: Install timelord
run: |
. ./activate
sh install-timelord.sh
./vdf_bench square_asm 400000
- name: Install developer requirements
run: |
. ./activate
venv/bin/python -m pip install pytest pytest-asyncio pytest-xdist
- name: Test pools code with pytest
run: |
. ./activate
./venv/bin/py.test tests/pools/test_pool_config.py tests/pools/test_pool_puzzles_lifecycle.py tests/pools/test_pool_rpc.py tests/pools/test_pool_wallet.py tests/pools/test_wallet_pool_store.py -s -v --durations 0


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory


@@ -65,7 +65,7 @@ jobs:
with:
repository: 'Chia-Network/test-cache'
path: '.chia'
-ref: '0.26.0'
+ref: '0.27.0'
fetch-depth: 1
- name: Link home directory
@@ -102,4 +102,4 @@ jobs:
- name: Test wallet code with pytest
run: |
. ./activate
-./venv/bin/py.test tests/wallet/test_backup.py tests/wallet/test_bech32m.py tests/wallet/test_chialisp.py tests/wallet/test_puzzle_store.py tests/wallet/test_singleton.py tests/wallet/test_taproot.py tests/wallet/test_wallet.py tests/wallet/test_wallet_store.py -s -v --durations 0
+./venv/bin/py.test tests/wallet/test_backup.py tests/wallet/test_bech32m.py tests/wallet/test_chialisp.py tests/wallet/test_puzzle_store.py tests/wallet/test_singleton.py tests/wallet/test_singleton_lifecycle.py tests/wallet/test_singleton_lifecycle_fast.py tests/wallet/test_taproot.py tests/wallet/test_wallet.py tests/wallet/test_wallet_interested_store.py tests/wallet/test_wallet_store.py -s -v --durations 0

.gitmodules vendored

@@ -1,7 +1,7 @@
[submodule "chia-blockchain-gui"]
path = chia-blockchain-gui
url = https://github.com/Chia-Network/chia-blockchain-gui.git
-branch = main
+branch = pools
[submodule "mozilla-ca"]
path = mozilla-ca
url = https://github.com/Chia-Network/mozilla-ca.git


@@ -80,6 +80,7 @@ git status
Write-Output " ---"
Write-Output "Prepare Electron packager"
Write-Output " ---"
+$Env:NODE_OPTIONS = "--max-old-space-size=3000"
npm install --save-dev electron-winstaller
npm install -g electron-packager
npm install

@@ -1 +1 @@
-Subproject commit 444c6966fe50183c8d72cbc972c5403db341739c
+Subproject commit aee851c839e1b0d4a8c8d8d2265cffce94ff2481

chia/clvm/singleton.py Normal file

@@ -0,0 +1,5 @@
from chia.wallet.puzzles.load_clvm import load_clvm
P2_SINGLETON_MOD = load_clvm("p2_singleton.clvm")
SINGLETON_TOP_LAYER_MOD = load_clvm("singleton_top_layer.clvm")
SINGLETON_LAUNCHER = load_clvm("singleton_launcher.clvm")


@@ -11,6 +11,7 @@ from chia.cmds.show import show_cmd
from chia.cmds.start import start_cmd
from chia.cmds.stop import stop_cmd
from chia.cmds.wallet import wallet_cmd
+from chia.cmds.plotnft import plotnft_cmd
from chia.util.default_root import DEFAULT_ROOT_PATH
CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
@@ -63,6 +64,7 @@ def run_daemon_cmd(ctx: click.Context) -> None:
cli.add_command(keys_cmd)
cli.add_command(plots_cmd)
cli.add_command(wallet_cmd)
+cli.add_command(plotnft_cmd)
cli.add_command(configure_cmd)
cli.add_command(init_cmd)
cli.add_command(show_cmd)


@@ -6,7 +6,6 @@ from chia.cmds.units import units
from chia.consensus.block_record import BlockRecord
from chia.rpc.farmer_rpc_client import FarmerRpcClient
from chia.rpc.full_node_rpc_client import FullNodeRpcClient
-from chia.rpc.harvester_rpc_client import HarvesterRpcClient
from chia.rpc.wallet_rpc_client import WalletRpcClient
from chia.util.config import load_config
from chia.util.default_root import DEFAULT_ROOT_PATH
@@ -17,25 +16,22 @@ from chia.util.misc import format_minutes
SECONDS_PER_BLOCK = (24 * 3600) / 4608
-async def get_plots(harvester_rpc_port: int) -> Optional[Dict[str, Any]]:
-plots = None
+async def get_plots(farmer_rpc_port: int) -> Optional[Dict[str, Any]]:
try:
config = load_config(DEFAULT_ROOT_PATH, "config.yaml")
self_hostname = config["self_hostname"]
-if harvester_rpc_port is None:
-harvester_rpc_port = config["harvester"]["rpc_port"]
-harvester_client = await HarvesterRpcClient.create(
-self_hostname, uint16(harvester_rpc_port), DEFAULT_ROOT_PATH, config
-)
-plots = await harvester_client.get_plots()
+if farmer_rpc_port is None:
+farmer_rpc_port = config["farmer"]["rpc_port"]
+farmer_client = await FarmerRpcClient.create(self_hostname, uint16(farmer_rpc_port), DEFAULT_ROOT_PATH, config)
+plots = await farmer_client.get_plots()
except Exception as e:
if isinstance(e, aiohttp.ClientConnectorError):
-print(f"Connection error. Check if harvester is running at {harvester_rpc_port}")
+print(f"Connection error. Check if farmer is running at {farmer_rpc_port}")
else:
print(f"Exception from 'harvester' {e}")
-harvester_client.close()
-await harvester_client.await_closed()
-return None
+farmer_client.close()
+await farmer_client.await_closed()
return plots
@@ -182,7 +178,7 @@ async def challenges(farmer_rpc_port: int, limit: int) -> None:
async def summary(rpc_port: int, wallet_rpc_port: int, harvester_rpc_port: int, farmer_rpc_port: int) -> None:
-plots = await get_plots(harvester_rpc_port)
+all_plots = await get_plots(farmer_rpc_port)
blockchain_state = await get_blockchain_state(rpc_port)
farmer_running = await is_farmer_running(farmer_rpc_port)
@@ -216,10 +212,19 @@ async def summary(rpc_port: int, wallet_rpc_port: int, harvester_rpc_port: int,
print(f"Last height farmed: {amounts['last_height_farmed']}")
total_plot_size = 0
-if plots is not None:
-total_plot_size = sum(map(lambda x: x["file_size"], plots["plots"]))
+total_plots = 0
+if all_plots is not None:
+for harvester_ip, plots in all_plots.items():
+if harvester_ip == "success":
+# This key is just "success": True
+continue
+total_plot_size_harvester = sum(map(lambda x: x["file_size"], plots["plots"]))
+total_plot_size += total_plot_size_harvester
+total_plots += len(plots["plots"])
+print(f"Harvester {harvester_ip}:")
+print(f"  {len(plots['plots'])} plots of size: {format_bytes(total_plot_size_harvester)}")
-print(f"Plot count: {len(plots['plots'])}")
+print(f"Plot count for all harvesters: {total_plots}")
print("Total size of plots: ", end="")
print(format_bytes(total_plot_size))
@@ -234,11 +239,11 @@ async def summary(rpc_port: int, wallet_rpc_port: int, harvester_rpc_port: int,
print("Estimated network space: Unknown")
minutes = -1
-if blockchain_state is not None and plots is not None:
+if blockchain_state is not None and all_plots is not None:
proportion = total_plot_size / blockchain_state["space"] if blockchain_state["space"] else -1
minutes = int((await get_average_block_time(rpc_port) / 60) / proportion) if proportion else -1
-if plots is not None and len(plots["plots"]) == 0:
+if all_plots is not None and total_plots == 0:
print("Expected time to win: Never (no plots)")
else:
print("Expected time to win: " + format_minutes(minutes))
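The reworked summary now walks a per-harvester mapping instead of a flat plot list. A minimal standalone sketch of that aggregation (`summarize_plots` is a hypothetical helper name; the response shape `{"<harvester_ip>": {"plots": [...]}, "success": True}` mirrors the hunk above):

```python
from typing import Dict, Tuple


def summarize_plots(all_plots: Dict) -> Tuple[int, int]:
    """Return (total plot count, total plot size) across all harvesters."""
    total_plots = 0
    total_plot_size = 0
    for harvester_ip, plots in all_plots.items():
        if harvester_ip == "success":
            # The farmer RPC response also carries a bare "success": True flag.
            continue
        total_plot_size += sum(p["file_size"] for p in plots["plots"])
        total_plots += len(plots["plots"])
    return total_plots, total_plot_size


# Toy response with two harvesters:
response = {
    "success": True,
    "192.168.1.10": {"plots": [{"file_size": 100}, {"file_size": 50}]},
    "192.168.1.11": {"plots": [{"file_size": 25}]},
}
print(summarize_plots(response))  # (3, 175)
```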


@@ -49,7 +49,7 @@ def add_private_key_seed(mnemonic: str):
passphrase = ""
sk = keychain.add_private_key(mnemonic, passphrase)
fingerprint = sk.get_g1().get_fingerprint()
-print(f"Added private key with public key fingerprint {fingerprint} and mnemonic")
+print(f"Added private key with public key fingerprint {fingerprint}")
except ValueError as e:
print(e)

chia/cmds/plotnft.py Normal file

@@ -0,0 +1,142 @@
import click
@click.group("plotnft", short_help="Manage your plot NFTs")
def plotnft_cmd() -> None:
pass
@plotnft_cmd.command("show", short_help="Show plotnft information")
@click.option(
"-wp",
"--wallet-rpc-port",
help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
type=int,
default=None,
)
@click.option("-i", "--id", help="ID of the wallet to use", type=int, default=None, show_default=True, required=False)
@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
def show_cmd(wallet_rpc_port: int, fingerprint: int, id: int) -> None:
import asyncio
from .wallet_funcs import execute_with_wallet
from .plotnft_funcs import show
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, {"id": id}, show))
@plotnft_cmd.command(
"get_login_link", short_help="Create a login link for a pool. To get the launcher id, use plotnft show."
)
@click.option("-l", "--launcher_id", help="Launcher ID of the plotnft", type=str, required=True)
def get_login_link_cmd(launcher_id: str) -> None:
import asyncio
from .plotnft_funcs import get_login_link
asyncio.run(get_login_link(launcher_id))
@plotnft_cmd.command("create", short_help="Create a plot NFT")
@click.option("-y", "--yes", help="No prompts", is_flag=True)
@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
@click.option("-u", "--pool_url", help="HTTPS host:port of the pool to join", type=str, required=False)
@click.option("-s", "--state", help="Initial state of Plot NFT: local or pool", type=str, required=True)
@click.option(
"-wp",
"--wallet-rpc-port",
help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
type=int,
default=None,
)
def create_cmd(wallet_rpc_port: int, fingerprint: int, pool_url: str, state: str, yes: bool) -> None:
import asyncio
from .wallet_funcs import execute_with_wallet
from .plotnft_funcs import create
if pool_url is not None and state.lower() == "local":
print(f" pool_url argument [{pool_url}] is not allowed when creating in 'local' state")
return
if pool_url in [None, ""] and state.lower() == "pool":
print(" pool_url argument (-u) is required for pool starting state")
return
valid_initial_states = {"pool": "FARMING_TO_POOL", "local": "SELF_POOLING"}
extra_params = {"pool_url": pool_url, "state": valid_initial_states[state], "yes": yes}
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, create))
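The create command maps the CLI `--state` flag to a pool wallet state and rejects inconsistent `pool_url` combinations. A standalone sketch of those checks (`resolve_initial_state` is a hypothetical name; the real code does this inline in `create_cmd`):

```python
from typing import Optional

VALID_INITIAL_STATES = {"pool": "FARMING_TO_POOL", "local": "SELF_POOLING"}


def resolve_initial_state(state: str, pool_url: Optional[str]) -> str:
    s = state.lower()
    if s not in VALID_INITIAL_STATES:
        raise ValueError(f"Unknown initial state: {state}")
    if pool_url is not None and s == "local":
        # Self-pooling plot NFTs have no pool, so a pool URL is contradictory.
        raise ValueError(f"pool_url argument [{pool_url}] is not allowed in 'local' state")
    if pool_url in (None, "") and s == "pool":
        raise ValueError("pool_url argument (-u) is required for pool starting state")
    return VALID_INITIAL_STATES[s]


print(resolve_initial_state("local", None))  # SELF_POOLING
print(resolve_initial_state("pool", "https://pool.example.com"))  # FARMING_TO_POOL
```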
@plotnft_cmd.command("join", short_help="Join a plot NFT to a Pool")
@click.option("-y", "--yes", help="No prompts", is_flag=True)
@click.option("-i", "--id", help="ID of the wallet to use", type=int, default=None, show_default=True, required=True)
@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
@click.option("-u", "--pool_url", help="HTTPS host:port of the pool to join", type=str, required=True)
@click.option(
"-wp",
"--wallet-rpc-port",
help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
type=int,
default=None,
)
def join_cmd(wallet_rpc_port: int, fingerprint: int, id: int, pool_url: str, yes: bool) -> None:
import asyncio
from .wallet_funcs import execute_with_wallet
from .plotnft_funcs import join_pool
extra_params = {"pool_url": pool_url, "id": id, "yes": yes}
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, join_pool))
@plotnft_cmd.command("leave", short_help="Make a plot NFT and return to self-farming")
@click.option("-y", "--yes", help="No prompts", is_flag=True)
@click.option("-i", "--id", help="ID of the wallet to use", type=int, default=None, show_default=True, required=True)
@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
@click.option(
"-wp",
"--wallet-rpc-port",
help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
type=int,
default=None,
)
def self_pool_cmd(wallet_rpc_port: int, fingerprint: int, id: int, yes: bool) -> None:
import asyncio
from .wallet_funcs import execute_with_wallet
from .plotnft_funcs import self_pool
extra_params = {"id": id, "yes": yes}
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, self_pool))
@plotnft_cmd.command("inspect", short_help="Get Detailed plotnft information as JSON")
@click.option("-i", "--id", help="ID of the wallet to use", type=int, default=None, show_default=True, required=True)
@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
@click.option(
"-wp",
"--wallet-rpc-port",
help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
type=int,
default=None,
)
def inspect(wallet_rpc_port: int, fingerprint: int, id: int) -> None:
import asyncio
from .wallet_funcs import execute_with_wallet
from .plotnft_funcs import inspect_cmd
extra_params = {"id": id}
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, inspect_cmd))
@plotnft_cmd.command("claim", short_help="Claim rewards from a plot NFT")
@click.option("-i", "--id", help="ID of the wallet to use", type=int, default=None, show_default=True, required=True)
@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
@click.option(
"-wp",
"--wallet-rpc-port",
help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
type=int,
default=None,
)
def claim(wallet_rpc_port: int, fingerprint: int, id: int) -> None:
import asyncio
from .wallet_funcs import execute_with_wallet
from .plotnft_funcs import claim_cmd
extra_params = {"id": id}
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, claim_cmd))

chia/cmds/plotnft_funcs.py Normal file

@@ -0,0 +1,327 @@
import aiohttp
import asyncio
import functools
import json
import time
from pprint import pprint
from typing import List, Dict, Optional, Callable
from chia.cmds.wallet_funcs import print_balance, wallet_coin_unit
from chia.pools.pool_wallet_info import PoolWalletInfo, PoolSingletonState
from chia.protocols.pool_protocol import POOL_PROTOCOL_VERSION
from chia.rpc.farmer_rpc_client import FarmerRpcClient
from chia.rpc.wallet_rpc_client import WalletRpcClient
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.bech32m import encode_puzzle_hash
from chia.util.byte_types import hexstr_to_bytes
from chia.util.config import load_config
from chia.util.default_root import DEFAULT_ROOT_PATH
from chia.util.ints import uint16, uint32
from chia.wallet.transaction_record import TransactionRecord
from chia.wallet.util.wallet_types import WalletType
async def create_pool_args(pool_url: str) -> Dict:
try:
async with aiohttp.ClientSession() as session:
async with session.get(f"{pool_url}/pool_info") as response:
if response.ok:
json_dict = json.loads(await response.text())
else:
raise ValueError(f"Response from {pool_url} not OK: {response.status}")
except Exception as e:
raise ValueError(f"Error connecting to pool {pool_url}: {e}")
if json_dict["relative_lock_height"] > 1000:
raise ValueError("Relative lock height too high for this pool, cannot join")
if json_dict["protocol_version"] != POOL_PROTOCOL_VERSION:
raise ValueError(f"Incorrect version: {json_dict['protocol_version']}, should be {POOL_PROTOCOL_VERSION}")
header_msg = f"\n---- Pool parameters fetched from {pool_url} ----"
print(header_msg)
pprint(json_dict)
print("-" * len(header_msg))
return json_dict
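Before joining, `create_pool_args` fetches `/pool_info` and applies two sanity checks: a cap on the pool's relative lock height and a protocol-version match. A standalone sketch of just those rules (the `POOL_PROTOCOL_VERSION` value here is a stand-in; the real constant lives in `chia.protocols.pool_protocol`):

```python
POOL_PROTOCOL_VERSION = 1  # assumed value for illustration only


def validate_pool_info(json_dict: dict) -> None:
    """Raise ValueError if the pool's /pool_info response is unacceptable."""
    if json_dict["relative_lock_height"] > 1000:
        # A huge lock height would trap the singleton in the pool for too long.
        raise ValueError("Relative lock height too high for this pool, cannot join")
    if json_dict["protocol_version"] != POOL_PROTOCOL_VERSION:
        raise ValueError(
            f"Incorrect version: {json_dict['protocol_version']}, should be {POOL_PROTOCOL_VERSION}"
        )


validate_pool_info({"relative_lock_height": 100, "protocol_version": 1})  # passes silently
try:
    validate_pool_info({"relative_lock_height": 5000, "protocol_version": 1})
except ValueError as e:
    print(e)  # Relative lock height too high for this pool, cannot join
```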
async def create(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
state = args["state"]
prompt = not args.get("yes", False)
# Could use initial_pool_state_from_dict to simplify
if state == "SELF_POOLING":
pool_url: Optional[str] = None
relative_lock_height = uint32(0)
target_puzzle_hash = None # wallet will fill this in
elif state == "FARMING_TO_POOL":
pool_url = str(args["pool_url"])
json_dict = await create_pool_args(pool_url)
relative_lock_height = json_dict["relative_lock_height"]
target_puzzle_hash = hexstr_to_bytes(json_dict["target_puzzle_hash"])
else:
raise ValueError("Plot NFT must be created in SELF_POOLING or FARMING_TO_POOL state.")
pool_msg = f" and join pool: {pool_url}" if pool_url else ""
print(f"Will create a plot NFT{pool_msg}.")
if prompt:
user_input: str = input("Confirm [n]/y: ")
else:
user_input = "yes"
if user_input.lower() == "y" or user_input.lower() == "yes":
try:
tx_record: TransactionRecord = await wallet_client.create_new_pool_wallet(
target_puzzle_hash,
pool_url,
relative_lock_height,
"localhost:5000",
"new",
state,
)
start = time.time()
while time.time() - start < 10:
await asyncio.sleep(0.1)
tx = await wallet_client.get_transaction(str(1), tx_record.name)
if len(tx.sent_to) > 0:
print(f"Transaction submitted to nodes: {tx.sent_to}")
print(f"Do chia wallet get_transaction -f {fingerprint} -tx 0x{tx_record.name} to get status")
return None
except Exception as e:
print(f"Error creating plot NFT: {e}")
return
print("Aborting.")
async def pprint_pool_wallet_state(
wallet_client: WalletRpcClient,
wallet_id: int,
pool_wallet_info: PoolWalletInfo,
address_prefix: str,
pool_state_dict: Dict,
unconfirmed_transactions: List[TransactionRecord],
):
if pool_wallet_info.current.state == PoolSingletonState.LEAVING_POOL and pool_wallet_info.target is None:
expected_leave_height = pool_wallet_info.singleton_block_height + pool_wallet_info.current.relative_lock_height
print(f"Current state: INVALID_STATE. Please leave/join again after block height {expected_leave_height}")
else:
print(f"Current state: {PoolSingletonState(pool_wallet_info.current.state).name}")
print(f"Current state from block height: {pool_wallet_info.singleton_block_height}")
print(f"Launcher ID: {pool_wallet_info.launcher_id}")
print(
"Target address (not for plotting): "
f"{encode_puzzle_hash(pool_wallet_info.current.target_puzzle_hash, address_prefix)}"
)
print(f"Owner public key: {pool_wallet_info.current.owner_pubkey}")
print(
f"P2 singleton address (pool contract address for plotting): "
f"{encode_puzzle_hash(pool_wallet_info.p2_singleton_puzzle_hash, address_prefix)}"
)
if pool_wallet_info.target is not None:
print(f"Target state: {PoolSingletonState(pool_wallet_info.target.state).name}")
print(f"Target pool URL: {pool_wallet_info.target.pool_url}")
if pool_wallet_info.current.state == PoolSingletonState.SELF_POOLING.value:
balances: Dict = await wallet_client.get_wallet_balance(str(wallet_id))
balance = balances["confirmed_wallet_balance"]
typ = WalletType(int(WalletType.POOLING_WALLET))
address_prefix, scale = wallet_coin_unit(typ, address_prefix)
print(f"Claimable balance: {print_balance(balance, scale, address_prefix)}")
if pool_wallet_info.current.state == PoolSingletonState.FARMING_TO_POOL:
print(f"Current pool URL: {pool_wallet_info.current.pool_url}")
if pool_wallet_info.launcher_id in pool_state_dict:
print(f"Current difficulty: {pool_state_dict[pool_wallet_info.launcher_id]['current_difficulty']}")
print(f"Points balance: {pool_state_dict[pool_wallet_info.launcher_id]['current_points']}")
print(f"Relative lock height: {pool_wallet_info.current.relative_lock_height} blocks")
payout_instructions: str = pool_state_dict[pool_wallet_info.launcher_id]["pool_config"]["payout_instructions"]
try:
payout_address = encode_puzzle_hash(bytes32.fromhex(payout_instructions), address_prefix)
print(f"Payout instructions (pool will pay to this address): {payout_address}")
except Exception:
print(f"Payout instructions (pool will pay you with this): {payout_instructions}")
if pool_wallet_info.current.state == PoolSingletonState.LEAVING_POOL:
expected_leave_height = pool_wallet_info.singleton_block_height + pool_wallet_info.current.relative_lock_height
if pool_wallet_info.target is not None:
print(f"Expected to leave after block height: {expected_leave_height}")
async def show(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
config = load_config(DEFAULT_ROOT_PATH, "config.yaml")
self_hostname = config["self_hostname"]
farmer_rpc_port = config["farmer"]["rpc_port"]
farmer_client = await FarmerRpcClient.create(self_hostname, uint16(farmer_rpc_port), DEFAULT_ROOT_PATH, config)
address_prefix = config["network_overrides"]["config"][config["selected_network"]]["address_prefix"]
summaries_response = await wallet_client.get_wallets()
wallet_id_passed_in = args.get("id", None)
try:
pool_state_list: List = (await farmer_client.get_pool_state())["pool_state"]
except Exception as e:
if isinstance(e, aiohttp.ClientConnectorError):
print(
f"Connection error. Check if farmer is running at {farmer_rpc_port}."
f" You can run the farmer by:\n chia start farmer-only"
)
else:
print(f"Exception from 'wallet' {e}")
farmer_client.close()
await farmer_client.await_closed()
return
pool_state_dict: Dict[bytes32, Dict] = {
hexstr_to_bytes(pool_state_item["pool_config"]["launcher_id"]): pool_state_item
for pool_state_item in pool_state_list
}
if wallet_id_passed_in is not None:
for summary in summaries_response:
typ = WalletType(int(summary["type"]))
if summary["id"] == wallet_id_passed_in and typ != WalletType.POOLING_WALLET:
print(f"Wallet with id: {wallet_id_passed_in} is not a pooling wallet. Please provide a different id.")
return
pool_wallet_info, unconfirmed_transactions = await wallet_client.pw_status(wallet_id_passed_in)
await pprint_pool_wallet_state(
wallet_client,
wallet_id_passed_in,
pool_wallet_info,
address_prefix,
pool_state_dict,
unconfirmed_transactions,
)
else:
print(f"Wallet height: {await wallet_client.get_height_info()}")
print(f"Sync status: {'Synced' if (await wallet_client.get_synced()) else 'Not synced'}")
for summary in summaries_response:
wallet_id = summary["id"]
typ = WalletType(int(summary["type"]))
if typ == WalletType.POOLING_WALLET:
print(f"Wallet id {wallet_id}: ")
pool_wallet_info, unconfirmed_transactions = await wallet_client.pw_status(wallet_id)
await pprint_pool_wallet_state(
wallet_client,
wallet_id,
pool_wallet_info,
address_prefix,
pool_state_dict,
unconfirmed_transactions,
)
print("")
farmer_client.close()
await farmer_client.await_closed()
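`show` first indexes the farmer's `pool_state` list by launcher ID so each pooling wallet can look up its own difficulty and points. A standalone sketch of that indexing with toy data (`bytes.fromhex` stands in for chia's `hexstr_to_bytes` helper):

```python
# Toy pool_state list shaped like the farmer RPC response used by `show`.
pool_state_list = [
    {"pool_config": {"launcher_id": "ab" * 32}, "current_points": 7},
    {"pool_config": {"launcher_id": "cd" * 32}, "current_points": 3},
]

# Key each entry by its 32-byte launcher id for O(1) lookup per wallet.
pool_state_dict = {
    bytes.fromhex(item["pool_config"]["launcher_id"]): item
    for item in pool_state_list
}

print(pool_state_dict[bytes.fromhex("ab" * 32)]["current_points"])  # 7
```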
async def get_login_link(launcher_id_str: str) -> None:
launcher_id: bytes32 = hexstr_to_bytes(launcher_id_str)
config = load_config(DEFAULT_ROOT_PATH, "config.yaml")
self_hostname = config["self_hostname"]
farmer_rpc_port = config["farmer"]["rpc_port"]
farmer_client = await FarmerRpcClient.create(self_hostname, uint16(farmer_rpc_port), DEFAULT_ROOT_PATH, config)
try:
login_link: Optional[str] = await farmer_client.get_pool_login_link(launcher_id)
if login_link is None:
print("Was not able to get login link.")
else:
print(login_link)
except Exception as e:
if isinstance(e, aiohttp.ClientConnectorError):
print(
f"Connection error. Check if farmer is running at {farmer_rpc_port}."
f" You can run the farmer by:\n chia start farmer-only"
)
else:
print(f"Exception from 'farmer' {e}")
finally:
farmer_client.close()
await farmer_client.await_closed()
async def submit_tx_with_confirmation(
message: str, prompt: bool, func: Callable, wallet_client: WalletRpcClient, fingerprint: int, wallet_id: int
):
print(message)
if prompt:
user_input: str = input("Confirm [n]/y: ")
else:
user_input = "yes"
if user_input.lower() == "y" or user_input.lower() == "yes":
try:
tx_record: TransactionRecord = await func()
start = time.time()
while time.time() - start < 10:
await asyncio.sleep(0.1)
tx = await wallet_client.get_transaction(str(1), tx_record.name)
if len(tx.sent_to) > 0:
print(f"Transaction submitted to nodes: {tx.sent_to}")
print(f"Do chia wallet get_transaction -f {fingerprint} -tx 0x{tx_record.name} to get status")
return None
except Exception as e:
print(f"Error performing operation on Plot NFT -f {fingerprint} wallet id: {wallet_id}: {e}")
return
print("Aborting.")
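After submitting, the helper above polls the wallet for up to ten seconds until the transaction has reached at least one node (`tx.sent_to` non-empty). A standalone sketch of that wait loop against a stub client (all names here are hypothetical, and the poll interval is shortened for illustration):

```python
import asyncio
import time


class StubWalletClient:
    """Pretends the transaction reaches a node on the third poll."""

    def __init__(self) -> None:
        self.calls = 0

    async def get_transaction(self, wallet_id: str, tx_id: str):
        self.calls += 1

        class Tx:
            sent_to = ["node1"] if self.calls >= 3 else []

        return Tx()


async def wait_until_sent(client, tx_id: str, timeout: float = 10.0, poll: float = 0.01):
    """Poll until tx.sent_to is non-empty or the timeout expires."""
    start = time.time()
    while time.time() - start < timeout:
        await asyncio.sleep(poll)
        tx = await client.get_transaction("1", tx_id)
        if len(tx.sent_to) > 0:
            return tx.sent_to
    return []


print(asyncio.run(wait_until_sent(StubWalletClient(), "txid")))  # ['node1']
```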
async def join_pool(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
pool_url = args["pool_url"]
wallet_id = args.get("id", None)
prompt = not args.get("yes", False)
try:
async with aiohttp.ClientSession() as session:
async with session.get(f"{pool_url}/pool_info") as response:
if response.ok:
json_dict = json.loads(await response.text())
else:
print(f"Response not OK: {response.status}")
return
except Exception as e:
print(f"Error connecting to pool {pool_url}: {e}")
return
if json_dict["relative_lock_height"] > 1000:
print("Relative lock height too high for this pool, cannot join")
return
if json_dict["protocol_version"] != POOL_PROTOCOL_VERSION:
print(f"Incorrect version: {json_dict['protocol_version']}, should be {POOL_PROTOCOL_VERSION}")
return
pprint(json_dict)
msg = f"\nWill join pool: {pool_url} with Plot NFT {fingerprint}."
func = functools.partial(
wallet_client.pw_join_pool,
wallet_id,
hexstr_to_bytes(json_dict["target_puzzle_hash"]),
pool_url,
json_dict["relative_lock_height"],
)
await submit_tx_with_confirmation(msg, prompt, func, wallet_client, fingerprint, wallet_id)
async def self_pool(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
wallet_id = args.get("id", None)
prompt = not args.get("yes", False)
msg = f"Will start self-farming with Plot NFT on wallet id {wallet_id} fingerprint {fingerprint}."
func = functools.partial(wallet_client.pw_self_pool, wallet_id)
await submit_tx_with_confirmation(msg, prompt, func, wallet_client, fingerprint, wallet_id)
async def inspect_cmd(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
wallet_id = args.get("id", None)
pool_wallet_info, unconfirmed_transactions = await wallet_client.pw_status(wallet_id)
print(
{
"pool_wallet_info": pool_wallet_info,
"unconfirmed_transactions": [
{"sent_to": tx.sent_to, "transaction_id": tx.name.hex()} for tx in unconfirmed_transactions
],
}
)
async def claim_cmd(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
wallet_id = args.get("id", None)
msg = f"\nWill claim rewards for wallet ID: {wallet_id}."
func = functools.partial(
wallet_client.pw_absorb_rewards,
wallet_id,
)
await submit_tx_with_confirmation(msg, False, func, wallet_client, fingerprint, wallet_id)
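The join/leave/claim commands all follow one pattern: bind the RPC call's arguments with `functools.partial` so `submit_tx_with_confirmation` can await a zero-argument `func()`. A minimal sketch of that design with a stub coroutine (names are illustrative):

```python
import asyncio
import functools


async def pw_absorb_rewards(wallet_id: int) -> str:
    # Stub for the wallet RPC call; returns a tag instead of a TransactionRecord.
    return f"absorbed:{wallet_id}"


# Bind the argument now; the confirmation helper calls func() with no args later.
func = functools.partial(pw_absorb_rewards, 2)
print(asyncio.run(func()))  # absorbed:2
```

Keeping the prompt/poll logic in one place and passing the bound call in means each new plot NFT subcommand only has to build a partial, not reimplement confirmation.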


@@ -50,18 +50,14 @@ async def show_async(
total_iters = peak.total_iters if peak is not None else 0
num_blocks: int = 10
-if sync_mode:
-sync_max_block = blockchain_state["sync"]["sync_tip_height"]
-sync_current_block = blockchain_state["sync"]["sync_progress_height"]
-print(
-"Current Blockchain Status: Full Node syncing to block",
-sync_max_block,
-"\nCurrently synced to block:",
-sync_current_block,
-)
+if synced:
+print("Current Blockchain Status: Full Node Synced")
+print("\nPeak: Hash:", peak.header_hash if peak is not None else "")
+elif peak is not None and sync_mode:
+sync_max_block = blockchain_state["sync"]["sync_tip_height"]
+sync_current_block = blockchain_state["sync"]["sync_progress_height"]
+print(f"Current Blockchain Status: Syncing {sync_current_block}/{sync_max_block}.")
+print("Peak: Hash:", peak.header_hash if peak is not None else "")
+elif peak is not None:
+print(f"Current Blockchain Status: Not Synced. Peak height: {peak.height}")
else:


@@ -120,3 +120,23 @@ def get_address_cmd(wallet_rpc_port: int, id, fingerprint: int) -> None:
from .wallet_funcs import execute_with_wallet, get_address
asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, get_address))
+@wallet_cmd.command(
+"delete_unconfirmed_transactions", short_help="Deletes all unconfirmed transactions for this wallet ID"
+)
+@click.option(
+"-wp",
+"--wallet-rpc-port",
+help="Set the port where the Wallet is hosting the RPC interface. See the rpc_port under wallet in config.yaml",
+type=int,
+default=None,
+)
+@click.option("-i", "--id", help="Id of the wallet to use", type=int, default=1, show_default=True, required=True)
+@click.option("-f", "--fingerprint", help="Set the fingerprint to specify which wallet to use", type=int)
+def delete_unconfirmed_transactions_cmd(wallet_rpc_port: int, id, fingerprint: int) -> None:
+extra_params = {"id": id}
+import asyncio
+from .wallet_funcs import execute_with_wallet, delete_unconfirmed_transactions
+asyncio.run(execute_with_wallet(wallet_rpc_port, fingerprint, extra_params, delete_unconfirmed_transactions))

View File

@@ -109,6 +109,27 @@ async def get_address(args: dict, wallet_client: WalletRpcClient, fingerprint: i
print(res)
async def delete_unconfirmed_transactions(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
wallet_id = args["id"]
await wallet_client.delete_unconfirmed_transactions(wallet_id)
print(f"Successfully deleted all unconfirmed transactions for wallet id {wallet_id} on key {fingerprint}")
def wallet_coin_unit(typ: WalletType, address_prefix: str) -> Tuple[str, int]:
if typ == WalletType.COLOURED_COIN:
return "", units["colouredcoin"]
if typ in [WalletType.STANDARD_WALLET, WalletType.POOLING_WALLET, WalletType.MULTI_SIG, WalletType.RATE_LIMITED]:
return address_prefix, units["chia"]
return "", units["mojo"]
def print_balance(amount: int, scale: int, address_prefix: str) -> str:
ret = f"{amount/scale} {address_prefix} "
if scale > 1:
ret += f"({amount} mojo)"
return ret
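The two balance helpers added above can be exercised in isolation. A minimal standalone sketch (the `10 ** 12` mojo-per-XCH scale is an assumption matching mainnet `units["chia"]`):

```python
MOJO_PER_XCH = 10 ** 12  # assumed mainnet scale, matching units["chia"]

def print_balance(amount: int, scale: int, address_prefix: str) -> str:
    # Same logic as the helper above: scaled amount, plus raw mojo when scale > 1
    ret = f"{amount / scale} {address_prefix} "
    if scale > 1:
        ret += f"({amount} mojo)"
    return ret

print(print_balance(1_500_000_000_000, MOJO_PER_XCH, "xch"))  # 1.5 xch (1500000000000 mojo)
```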
async def print_balances(args: dict, wallet_client: WalletRpcClient, fingerprint: int) -> None:
summaries_response = await wallet_client.get_wallets()
config = load_config(DEFAULT_ROOT_PATH, "config.yaml")
@@ -120,26 +141,14 @@ async def print_balances(args: dict, wallet_client: WalletRpcClient, fingerprint
for summary in summaries_response:
wallet_id = summary["id"]
balances = await wallet_client.get_wallet_balance(wallet_id)
typ = WalletType(int(summary["type"])).name
if typ != "STANDARD_WALLET":
print(f"Wallet ID {wallet_id} type {typ} {summary['name']}")
print(f" -Total Balance: " f"{balances['confirmed_wallet_balance']/units['colouredcoin']}")
print(f" -Pending Total Balance: {balances['unconfirmed_wallet_balance']/units['colouredcoin']}")
print(f" -Spendable Balance: {balances['spendable_balance']/units['colouredcoin']}")
else:
print(f"Wallet ID {wallet_id} type {typ}")
print(
f" -Total Balance: {balances['confirmed_wallet_balance']/units['chia']} {address_prefix} "
f"({balances['confirmed_wallet_balance']} mojo)"
)
print(
f" -Pending Total Balance: {balances['unconfirmed_wallet_balance']/units['chia']} {address_prefix} "
f"({balances['unconfirmed_wallet_balance']} mojo)"
)
print(
f" -Spendable: {balances['spendable_balance']/units['chia']} {address_prefix} "
f"({balances['spendable_balance']} mojo)"
)
typ = WalletType(int(summary["type"]))
address_prefix, scale = wallet_coin_unit(typ, address_prefix)
print(f"Wallet ID {wallet_id} type {typ.name} {summary['name']}")
print(f" -Total Balance: {print_balance(balances['confirmed_wallet_balance'], scale, address_prefix)}")
print(
f" -Pending Total Balance: {print_balance(balances['unconfirmed_wallet_balance'], scale, address_prefix)}"
)
print(f" -Spendable: {print_balance(balances['spendable_balance'], scale, address_prefix)}")
async def get_wallet(wallet_client: WalletRpcClient, fingerprint: int = None) -> Optional[Tuple[WalletRpcClient, int]]:
@@ -232,7 +241,10 @@ async def execute_with_wallet(wallet_rpc_port: int, fingerprint: int, extra_para
pass
except Exception as e:
if isinstance(e, aiohttp.ClientConnectorError):
print(f"Connection error. Check if wallet is running at {wallet_rpc_port}")
print(
f"Connection error. Check if the wallet is running at {wallet_rpc_port}. "
"You can run the wallet via:\n\tchia start wallet"
)
else:
print(f"Exception from 'wallet' {e}")
wallet_client.close()

View File

@@ -388,6 +388,7 @@ async def validate_block_body(
# This coin is not in the current heaviest chain, so it must be in the fork
if rem not in additions_since_fork:
# Check for spending a coin that does not exist in this fork
log.error(f"Err.UNKNOWN_UNSPENT: COIN ID: {rem} NPC RESULT: {npc_result}")
return Err.UNKNOWN_UNSPENT, None
new_coin, confirmed_height, confirmed_timestamp = additions_since_fork[rem]
new_coin_record: CoinRecord = CoinRecord(

View File

@@ -10,7 +10,7 @@ def create_puzzlehash_for_pk(pub_key: G1Element) -> bytes32:
return puzzle_for_pk(pub_key).get_tree_hash()
def pool_parent_id(block_height: uint32, genesis_challenge: bytes32) -> uint32:
def pool_parent_id(block_height: uint32, genesis_challenge: bytes32) -> bytes32:
return bytes32(genesis_challenge[:16] + block_height.to_bytes(16, "big"))
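The annotation fix above matches what the function actually returns: a 32-byte coin id built from the first half of the genesis challenge plus the block height. A standalone sketch without the chia type wrappers:

```python
def pool_parent_id(block_height: int, genesis_challenge: bytes) -> bytes:
    # First 16 bytes of the genesis challenge, then the height as 16 big-endian bytes
    return genesis_challenge[:16] + block_height.to_bytes(16, "big")

parent = pool_parent_id(1000, b"\xcc" * 32)
assert len(parent) == 32  # always a bytes32, never a uint32
```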

View File

@@ -56,6 +56,7 @@ class ConsensusConstants:
NETWORK_TYPE: int
MAX_GENERATOR_SIZE: uint32
MAX_GENERATOR_REF_LIST_SIZE: uint32
POOL_SUB_SLOT_ITERS: uint64
def replace(self, **changes) -> "ConsensusConstants":
return dataclasses.replace(self, **changes)

View File

@@ -54,6 +54,7 @@ testnet_kwargs = {
"NETWORK_TYPE": 0,
"MAX_GENERATOR_SIZE": 1000000,
"MAX_GENERATOR_REF_LIST_SIZE": 512, # Number of references allowed in the block generator ref list
"POOL_SUB_SLOT_ITERS": 37600000000, # iters limit * NUM_SPS
}
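The new `POOL_SUB_SLOT_ITERS` constant is the iteration budget the farmer divides by the signage-point count to gate pool partials. Assuming the standard 64 signage points per sub slot (`NUM_SPS_SUB_SLOT` elsewhere in this constants file — an assumption, not shown in this diff), the pool signage-point interval works out to:

```python
POOL_SUB_SLOT_ITERS = 37_600_000_000
NUM_SPS_SUB_SLOT = 64  # assumption: the standard signage-point count per sub slot

# calculate_sp_interval_iters is, in essence, sub_slot_iters // NUM_SPS_SUB_SLOT
pool_sp_interval_iters = POOL_SUB_SLOT_ITERS // NUM_SPS_SUB_SLOT
print(pool_sp_interval_iters)  # 587500000
```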

View File

@@ -1,28 +1,55 @@
import asyncio
import json
import logging
import time
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Tuple
import traceback
from blspy import G1Element
import aiohttp
from blspy import AugSchemeMPL, G1Element, G2Element, PrivateKey
import chia.server.ws_connection as ws # lgtm [py/import-and-import-from]
from chia.consensus.coinbase import create_puzzlehash_for_pk
from chia.consensus.constants import ConsensusConstants
from chia.pools.pool_config import PoolWalletConfig, load_pool_config
from chia.protocols import farmer_protocol, harvester_protocol
from chia.protocols.pool_protocol import (
ErrorResponse,
get_current_authentication_token,
GetFarmerResponse,
PoolErrorCode,
PostFarmerPayload,
PostFarmerRequest,
PutFarmerPayload,
PutFarmerRequest,
AuthenticationPayload,
)
from chia.protocols.protocol_message_types import ProtocolMessageTypes
from chia.server.outbound_message import NodeType, make_msg
from chia.server.ws_connection import WSChiaConnection
from chia.types.blockchain_format.proof_of_space import ProofOfSpace
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.bech32m import decode_puzzle_hash
from chia.util.config import load_config, save_config
from chia.util.ints import uint32, uint64
from chia.util.config import load_config, save_config, config_path_for_filename
from chia.util.hash import std_hash
from chia.util.ints import uint8, uint16, uint32, uint64
from chia.util.keychain import Keychain
from chia.wallet.derive_keys import master_sk_to_farmer_sk, master_sk_to_pool_sk, master_sk_to_wallet_sk
from chia.wallet.derive_keys import (
master_sk_to_farmer_sk,
master_sk_to_pool_sk,
master_sk_to_wallet_sk,
find_authentication_sk,
find_owner_sk,
)
from chia.wallet.puzzles.singleton_top_layer import SINGLETON_MOD
singleton_mod_hash = SINGLETON_MOD.get_tree_hash()
log = logging.getLogger(__name__)
UPDATE_POOL_INFO_INTERVAL: int = 3600
UPDATE_POOL_FARMER_INFO_INTERVAL: int = 300
"""
HARVESTER PROTOCOL (FARMER <-> HARVESTER)
@@ -57,15 +84,17 @@ class Farmer:
self.cache_add_time: Dict[bytes32, uint64] = {}
self.cache_clear_task: asyncio.Task
self.update_pool_state_task: asyncio.Task
self.constants = consensus_constants
self._shut_down = False
self.server: Any = None
self.keychain = keychain
self.state_changed_callback: Optional[Callable] = None
self.log = log
all_sks = self.keychain.get_all_private_keys()
self._private_keys = [master_sk_to_farmer_sk(sk) for sk, _ in all_sks] + [
master_sk_to_pool_sk(sk) for sk, _ in all_sks
self.all_root_sks: List[PrivateKey] = [sk for sk, _ in self.keychain.get_all_private_keys()]
self._private_keys = [master_sk_to_farmer_sk(sk) for sk in self.all_root_sks] + [
master_sk_to_pool_sk(sk) for sk in self.all_root_sks
]
if len(self.get_public_keys()) == 0:
@@ -78,7 +107,7 @@ class Farmer:
self.pool_public_keys = [G1Element.from_bytes(bytes.fromhex(pk)) for pk in self.config["pool_public_keys"]]
# This is the pool configuration, which should be moved out to the pool once it exists
# This is the self pooling configuration, which is only used for original self-pooled plots
self.pool_target_encoded = pool_config["xch_target_address"]
self.pool_target = decode_puzzle_hash(self.pool_target_encoded)
self.pool_sks_map: Dict = {}
@@ -91,7 +120,19 @@ class Farmer:
error_str = "No keys exist. Please run 'chia keys generate' or open the UI."
raise RuntimeError(error_str)
# The variables below are for use with an actual pool
# From p2_singleton_puzzle_hash to pool state dict
self.pool_state: Dict[bytes32, Dict] = {}
# From public key bytes to PrivateKey
self.authentication_keys: Dict[bytes, PrivateKey] = {}
# Last time we updated pool_state based on the config file
self.last_config_access_time: uint64 = uint64(0)
async def _start(self):
self.update_pool_state_task = asyncio.create_task(self._periodically_update_pool_state_task())
self.cache_clear_task = asyncio.create_task(self._periodically_clear_cache_and_refresh_task())
def _close(self):
@@ -99,6 +140,7 @@ class Farmer:
async def _await_closed(self):
await self.cache_clear_task
await self.update_pool_state_task
def _set_state_changed_callback(self, callback: Callable):
self.state_changed_callback = callback
@@ -121,10 +163,240 @@ class Farmer:
if self.state_changed_callback is not None:
self.state_changed_callback(change, data)
def handle_failed_pool_response(self, p2_singleton_puzzle_hash: bytes32, error_message: str):
self.log.error(error_message)
self.pool_state[p2_singleton_puzzle_hash]["pool_errors_24h"].append(
ErrorResponse(uint16(PoolErrorCode.REQUEST_FAILED.value), error_message).to_json_dict()
)
def on_disconnect(self, connection: ws.WSChiaConnection):
self.log.info(f"peer disconnected {connection.get_peer_info()}")
self.state_changed("close_connection", {})
async def _pool_get_pool_info(self, pool_config: PoolWalletConfig) -> Optional[Dict]:
try:
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get(f"{pool_config.pool_url}/pool_info") as resp:
if resp.ok:
response: Dict = json.loads(await resp.text())
self.log.info(f"GET /pool_info response: {response}")
return response
else:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash,
f"Error in GET /pool_info {pool_config.pool_url}, {resp.status}",
)
except Exception as e:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash, f"Exception in GET /pool_info {pool_config.pool_url}, {e}"
)
return None
async def _pool_get_farmer(
self, pool_config: PoolWalletConfig, authentication_token_timeout: uint8, authentication_sk: PrivateKey
) -> Optional[Dict]:
assert authentication_sk.get_g1() == pool_config.authentication_public_key
authentication_token = get_current_authentication_token(authentication_token_timeout)
message: bytes32 = std_hash(
AuthenticationPayload(
"get_farmer", pool_config.launcher_id, pool_config.target_puzzle_hash, authentication_token
)
)
signature: G2Element = AugSchemeMPL.sign(authentication_sk, message)
get_farmer_params = {
"launcher_id": pool_config.launcher_id.hex(),
"authentication_token": authentication_token,
"signature": bytes(signature).hex(),
}
try:
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get(f"{pool_config.pool_url}/farmer", params=get_farmer_params) as resp:
if resp.ok:
response: Dict = json.loads(await resp.text())
self.log.info(f"GET /farmer response: {response}")
if "error_code" in response:
self.pool_state[pool_config.p2_singleton_puzzle_hash]["pool_errors_24h"].append(response)
return response
else:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash,
f"Error in GET /farmer {pool_config.pool_url}, {resp.status}",
)
except Exception as e:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash, f"Exception in GET /farmer {pool_config.pool_url}, {e}"
)
return None
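The `authentication_token` signed in the GET /farmer request above is time-derived; my understanding (an assumption — `get_current_authentication_token` is imported, not defined, in this diff) is that it is minutes-since-epoch floor-divided by the pool's `authentication_token_timeout`, so a token is stable within one timeout window. A sketch under that assumption (the explicit `now_seconds` parameter is added here only for testability):

```python
def current_authentication_token(timeout_minutes: int, now_seconds: float) -> int:
    # Assumed semantics of get_current_authentication_token: the token rotates
    # once every `timeout_minutes` minutes of wall-clock time.
    return int(now_seconds / 60) // timeout_minutes

# One hour after the epoch with a 5-minute timeout: minute 60 // 5 == 12
assert current_authentication_token(5, 3600.0) == 12
```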
async def _pool_post_farmer(
self, pool_config: PoolWalletConfig, authentication_token_timeout: uint8, owner_sk: PrivateKey
) -> Optional[Dict]:
post_farmer_payload: PostFarmerPayload = PostFarmerPayload(
pool_config.launcher_id,
get_current_authentication_token(authentication_token_timeout),
pool_config.authentication_public_key,
pool_config.payout_instructions,
None,
)
assert owner_sk.get_g1() == pool_config.owner_public_key
signature: G2Element = AugSchemeMPL.sign(owner_sk, post_farmer_payload.get_hash())
post_farmer_request = PostFarmerRequest(post_farmer_payload, signature)
post_farmer_body = json.dumps(post_farmer_request.to_json_dict())
headers = {
"content-type": "application/json;",
}
try:
async with aiohttp.ClientSession() as session:
async with session.post(
f"{pool_config.pool_url}/farmer", data=post_farmer_body, headers=headers
) as resp:
if resp.ok:
response: Dict = json.loads(await resp.text())
self.log.info(f"POST /farmer response: {response}")
if "error_code" in response:
self.pool_state[pool_config.p2_singleton_puzzle_hash]["pool_errors_24h"].append(response)
return response
else:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash,
f"Error in POST /farmer {pool_config.pool_url}, {resp.status}",
)
except Exception as e:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash, f"Exception in POST /farmer {pool_config.pool_url}, {e}"
)
return None
async def _pool_put_farmer(
self, pool_config: PoolWalletConfig, authentication_token_timeout: uint8, owner_sk: PrivateKey
) -> Optional[Dict]:
put_farmer_payload: PutFarmerPayload = PutFarmerPayload(
pool_config.launcher_id,
get_current_authentication_token(authentication_token_timeout),
pool_config.authentication_public_key,
pool_config.payout_instructions,
None,
)
assert owner_sk.get_g1() == pool_config.owner_public_key
signature: G2Element = AugSchemeMPL.sign(owner_sk, put_farmer_payload.get_hash())
put_farmer_request = PutFarmerRequest(put_farmer_payload, signature)
put_farmer_body = json.dumps(put_farmer_request.to_json_dict())
try:
async with aiohttp.ClientSession() as session:
async with session.put(f"{pool_config.pool_url}/farmer", data=put_farmer_body) as resp:
if resp.ok:
response: Dict = json.loads(await resp.text())
self.log.info(f"PUT /farmer response: {response}")
if "error_code" in response:
self.pool_state[pool_config.p2_singleton_puzzle_hash]["pool_errors_24h"].append(response)
return response
else:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash,
f"Error in PUT /farmer {pool_config.pool_url}, {resp.status}",
)
except Exception as e:
self.handle_failed_pool_response(
pool_config.p2_singleton_puzzle_hash, f"Exception in PUT /farmer {pool_config.pool_url}, {e}"
)
return None
async def update_pool_state(self):
pool_config_list: List[PoolWalletConfig] = load_pool_config(self._root_path)
for pool_config in pool_config_list:
p2_singleton_puzzle_hash = pool_config.p2_singleton_puzzle_hash
try:
authentication_sk: Optional[PrivateKey] = await find_authentication_sk(
self.all_root_sks, pool_config.authentication_public_key
)
if authentication_sk is None:
self.log.error(f"Could not find authentication sk for pk: {pool_config.authentication_public_key}")
continue
if p2_singleton_puzzle_hash not in self.pool_state:
self.authentication_keys[bytes(pool_config.authentication_public_key)] = authentication_sk
self.pool_state[p2_singleton_puzzle_hash] = {
"points_found_since_start": 0,
"points_found_24h": [],
"points_acknowledged_since_start": 0,
"points_acknowledged_24h": [],
"next_farmer_update": 0,
"next_pool_info_update": 0,
"current_points": 0,
"current_difficulty": None,
"pool_errors_24h": [],
"authentication_token_timeout": None,
}
self.log.info(f"Added pool: {pool_config}")
pool_state = self.pool_state[p2_singleton_puzzle_hash]
pool_state["pool_config"] = pool_config
# Skip state update when self pooling
if pool_config.pool_url == "":
continue
# TODO: Improve error handling below, inform about unexpected failures
if time.time() >= pool_state["next_pool_info_update"]:
# Makes a GET request to the pool to get the updated information
pool_info = await self._pool_get_pool_info(pool_config)
if pool_info is not None and "error_code" not in pool_info:
pool_state["authentication_token_timeout"] = pool_info["authentication_token_timeout"]
pool_state["next_pool_info_update"] = time.time() + UPDATE_POOL_INFO_INTERVAL
# Only update the first time from GET /pool_info, gets updated from GET /farmer later
if pool_state["current_difficulty"] is None:
pool_state["current_difficulty"] = pool_info["minimum_difficulty"]
if time.time() >= pool_state["next_farmer_update"]:
authentication_token_timeout = pool_state["authentication_token_timeout"]
async def update_pool_farmer_info() -> Optional[dict]:
# Run a GET /farmer to see if the farmer is already known by the pool
response = await self._pool_get_farmer(
pool_config, authentication_token_timeout, authentication_sk
)
if response is not None and "error_code" not in response:
farmer_info: GetFarmerResponse = GetFarmerResponse.from_json_dict(response)
pool_state["current_difficulty"] = farmer_info.current_difficulty
pool_state["current_points"] = farmer_info.current_points
pool_state["next_farmer_update"] = time.time() + UPDATE_POOL_FARMER_INFO_INTERVAL
return response
if authentication_token_timeout is not None:
update_response = await update_pool_farmer_info()
is_error = update_response is not None and "error_code" in update_response
if is_error and update_response["error_code"] == PoolErrorCode.FARMER_NOT_KNOWN.value:
# Make the farmer known on the pool with a POST /farmer
owner_sk = await find_owner_sk(self.all_root_sks, pool_config.owner_public_key)
if owner_sk is None:
self.log.error(f"Could not find owner sk for pk: {pool_config.owner_public_key}")
continue
post_response = await self._pool_post_farmer(
pool_config, authentication_token_timeout, owner_sk
)
if post_response is not None and "error_code" not in post_response:
self.log.info(
f"Welcome message from {pool_config.pool_url}: "
f"{post_response['welcome_message']}"
)
# Now we should be able to update the local farmer info
update_response = await update_pool_farmer_info()
if update_response is not None and "error_code" in update_response:
self.log.error(
f"Failed to update farmer info after POST /farmer: "
f"{update_response['error_code']}, "
f"{update_response['error_message']}"
)
else:
self.log.warning(
f"No pool specific authentication_token_timeout has been set for {p2_singleton_puzzle_hash}"
f", check communication with the pool."
)
except Exception as e:
tb = traceback.format_exc()
self.log.error(f"Exception in update_pool_state for {pool_config.pool_url}, {e} {tb}")
def get_public_keys(self):
return [child_sk.get_g1() for child_sk in self._private_keys]
@@ -168,6 +440,85 @@ class Farmer:
config["pool"]["xch_target_address"] = pool_target_encoded
save_config(self._root_path, "config.yaml", config)
async def set_payout_instructions(self, launcher_id: bytes32, payout_instructions: str):
for p2_singleton_puzzle_hash, pool_state_dict in self.pool_state.items():
if launcher_id == pool_state_dict["pool_config"].launcher_id:
config = load_config(self._root_path, "config.yaml")
new_list = []
for list_element in config["pool"]["pool_list"]:
if bytes.fromhex(list_element["launcher_id"]) == bytes(launcher_id):
list_element["payout_instructions"] = payout_instructions
new_list.append(list_element)
config["pool"]["pool_list"] = new_list
save_config(self._root_path, "config.yaml", config)
await self.update_pool_state()
return
self.log.warning(f"Launcher id: {launcher_id} not found")
async def generate_login_link(self, launcher_id: bytes32) -> Optional[str]:
for pool_state in self.pool_state.values():
pool_config: PoolWalletConfig = pool_state["pool_config"]
if pool_config.launcher_id == launcher_id:
authentication_sk: Optional[PrivateKey] = await find_authentication_sk(
self.all_root_sks, pool_config.authentication_public_key
)
if authentication_sk is None:
self.log.error(f"Could not find authentication sk for pk: {pool_config.authentication_public_key}")
continue
assert authentication_sk.get_g1() == pool_config.authentication_public_key
authentication_token_timeout = pool_state["authentication_token_timeout"]
authentication_token = get_current_authentication_token(authentication_token_timeout)
message: bytes32 = std_hash(
AuthenticationPayload(
"get_login", pool_config.launcher_id, pool_config.target_puzzle_hash, authentication_token
)
)
signature: G2Element = AugSchemeMPL.sign(authentication_sk, message)
return (
pool_config.pool_url
+ f"/login?launcher_id={launcher_id.hex()}&authentication_token={authentication_token}"
f"&signature={bytes(signature).hex()}"
)
return None
async def get_plots(self) -> Dict:
rpc_response = {}
for connection in self.server.get_connections():
if connection.connection_type == NodeType.HARVESTER:
peer_host = connection.peer_host
peer_port = connection.peer_port
peer_full = f"{peer_host}:{peer_port}"
response = await connection.request_plots(harvester_protocol.RequestPlots(), timeout=5)
if response is None:
self.log.error(
"Harvester did not respond. You might need to update the harvester to the latest version"
)
continue
if not isinstance(response, harvester_protocol.RespondPlots):
self.log.error(f"Invalid response from harvester: {peer_host}:{peer_port}")
continue
rpc_response[peer_full] = response.to_json_dict()
return rpc_response
async def _periodically_update_pool_state_task(self):
time_slept: uint64 = uint64(0)
config_path: Path = config_path_for_filename(self._root_path, "config.yaml")
while not self._shut_down:
# Every time the config file changes, read it to check the pool state
stat_info = config_path.stat()
if stat_info.st_mtime > self.last_config_access_time:
self.last_config_access_time = stat_info.st_mtime
await self.update_pool_state()
time_slept = uint64(0)
elif time_slept > 60:
await self.update_pool_state()
time_slept = uint64(0)
time_slept += 1
await asyncio.sleep(1)
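The loop above re-reads the pool config whenever config.yaml's mtime advances, with a roughly once-per-minute fallback refresh. Its decision logic, distilled into a pure function (names are illustrative, not from the diff):

```python
def should_update_pool_state(mtime: float, last_mtime: float, slept_seconds: int, fallback: int = 60) -> bool:
    # Update immediately on a config-file change, or after `fallback` seconds regardless
    return mtime > last_mtime or slept_seconds > fallback

assert should_update_pool_state(10.0, 5.0, 0)      # config changed: update now
assert not should_update_pool_state(5.0, 5.0, 30)  # unchanged, fallback not reached
assert should_update_pool_state(5.0, 5.0, 61)      # periodic fallback refresh
```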
async def _periodically_clear_cache_and_refresh_task(self):
time_slept: uint64 = uint64(0)
refresh_slept = 0

View File

@@ -1,12 +1,21 @@
import json
import time
from typing import Callable, Optional
from typing import Callable, Optional, List, Any, Dict
from blspy import AugSchemeMPL, G2Element
import aiohttp
from blspy import AugSchemeMPL, G2Element, PrivateKey
import chia.server.ws_connection as ws
from chia.consensus.pot_iterations import calculate_iterations_quality, calculate_sp_interval_iters
from chia.farmer.farmer import Farmer
from chia.protocols import farmer_protocol, harvester_protocol
from chia.protocols.harvester_protocol import PoolDifficulty
from chia.protocols.pool_protocol import (
get_current_authentication_token,
PoolErrorCode,
PostPartialRequest,
PostPartialPayload,
)
from chia.protocols.protocol_message_types import ProtocolMessageTypes
from chia.server.outbound_message import NodeType, make_msg
from chia.types.blockchain_format.pool_target import PoolTarget
@@ -73,41 +82,171 @@ class FarmerAPI:
sp.difficulty,
new_proof_of_space.sp_hash,
)
# Double check that the iters are good
assert required_iters < calculate_sp_interval_iters(self.farmer.constants, sp.sub_slot_iters)
# Proceed at getting the signatures for this PoSpace
request = harvester_protocol.RequestSignatures(
new_proof_of_space.plot_identifier,
new_proof_of_space.challenge_hash,
new_proof_of_space.sp_hash,
[sp.challenge_chain_sp, sp.reward_chain_sp],
)
# If the iters are good enough to make a block, proceed with the block making flow
if required_iters < calculate_sp_interval_iters(self.farmer.constants, sp.sub_slot_iters):
# Proceed at getting the signatures for this PoSpace
request = harvester_protocol.RequestSignatures(
new_proof_of_space.plot_identifier,
new_proof_of_space.challenge_hash,
new_proof_of_space.sp_hash,
[sp.challenge_chain_sp, sp.reward_chain_sp],
)
if new_proof_of_space.sp_hash not in self.farmer.proofs_of_space:
self.farmer.proofs_of_space[new_proof_of_space.sp_hash] = [
(
new_proof_of_space.plot_identifier,
new_proof_of_space.proof,
)
]
else:
if new_proof_of_space.sp_hash not in self.farmer.proofs_of_space:
self.farmer.proofs_of_space[new_proof_of_space.sp_hash] = []
self.farmer.proofs_of_space[new_proof_of_space.sp_hash].append(
(
new_proof_of_space.plot_identifier,
new_proof_of_space.proof,
)
)
self.farmer.cache_add_time[new_proof_of_space.sp_hash] = uint64(int(time.time()))
self.farmer.quality_str_to_identifiers[computed_quality_string] = (
new_proof_of_space.plot_identifier,
new_proof_of_space.challenge_hash,
new_proof_of_space.sp_hash,
peer.peer_node_id,
)
self.farmer.cache_add_time[computed_quality_string] = uint64(int(time.time()))
self.farmer.cache_add_time[new_proof_of_space.sp_hash] = uint64(int(time.time()))
self.farmer.quality_str_to_identifiers[computed_quality_string] = (
new_proof_of_space.plot_identifier,
new_proof_of_space.challenge_hash,
new_proof_of_space.sp_hash,
peer.peer_node_id,
)
self.farmer.cache_add_time[computed_quality_string] = uint64(int(time.time()))
return make_msg(ProtocolMessageTypes.request_signatures, request)
await peer.send_message(make_msg(ProtocolMessageTypes.request_signatures, request))
p2_singleton_puzzle_hash = new_proof_of_space.proof.pool_contract_puzzle_hash
if p2_singleton_puzzle_hash is not None:
# Otherwise, send the proof of space to the pool
# When we win a block, we also send the partial to the pool
if p2_singleton_puzzle_hash not in self.farmer.pool_state:
self.farmer.log.info(f"Did not find pool info for {p2_singleton_puzzle_hash}")
return
pool_state_dict: Dict = self.farmer.pool_state[p2_singleton_puzzle_hash]
pool_url = pool_state_dict["pool_config"].pool_url
if pool_url == "":
return
if pool_state_dict["current_difficulty"] is None:
self.farmer.log.warning(
f"No pool specific difficulty has been set for {p2_singleton_puzzle_hash}, "
f"check communication with the pool, skipping this partial to {pool_url}."
)
return
required_iters = calculate_iterations_quality(
self.farmer.constants.DIFFICULTY_CONSTANT_FACTOR,
computed_quality_string,
new_proof_of_space.proof.size,
pool_state_dict["current_difficulty"],
new_proof_of_space.sp_hash,
)
if required_iters >= calculate_sp_interval_iters(
self.farmer.constants, self.farmer.constants.POOL_SUB_SLOT_ITERS
):
self.farmer.log.info(
f"Proof of space not good enough for pool {pool_url}: {pool_state_dict['current_difficulty']}"
)
return
authentication_token_timeout = pool_state_dict["authentication_token_timeout"]
if authentication_token_timeout is None:
self.farmer.log.warning(
f"No pool specific authentication_token_timeout has been set for {p2_singleton_puzzle_hash}"
f", check communication with the pool."
)
return
# Submit partial to pool; signage point index 0 marks an end-of-sub-slot (EOS) partial
is_eos = new_proof_of_space.signage_point_index == 0
payload = PostPartialPayload(
pool_state_dict["pool_config"].launcher_id,
get_current_authentication_token(authentication_token_timeout),
new_proof_of_space.proof,
new_proof_of_space.sp_hash,
is_eos,
peer.peer_node_id,
)
# The plot key is 2/2 so we need the harvester's half of the signature
m_to_sign = payload.get_hash()
request = harvester_protocol.RequestSignatures(
new_proof_of_space.plot_identifier,
new_proof_of_space.challenge_hash,
new_proof_of_space.sp_hash,
[m_to_sign],
)
response: Any = await peer.request_signatures(request)
if not isinstance(response, harvester_protocol.RespondSignatures):
self.farmer.log.error(f"Invalid response from harvester: {response}")
return
assert len(response.message_signatures) == 1
plot_signature: Optional[G2Element] = None
for sk in self.farmer.get_private_keys():
pk = sk.get_g1()
if pk == response.farmer_pk:
agg_pk = ProofOfSpace.generate_plot_public_key(response.local_pk, pk, True)
assert agg_pk == new_proof_of_space.proof.plot_public_key
sig_farmer = AugSchemeMPL.sign(sk, m_to_sign, agg_pk)
taproot_sk: PrivateKey = ProofOfSpace.generate_taproot_sk(response.local_pk, pk)
taproot_sig: G2Element = AugSchemeMPL.sign(taproot_sk, m_to_sign, agg_pk)
plot_signature = AugSchemeMPL.aggregate(
[sig_farmer, response.message_signatures[0][1], taproot_sig]
)
assert AugSchemeMPL.verify(agg_pk, m_to_sign, plot_signature)
authentication_pk = pool_state_dict["pool_config"].authentication_public_key
authentication_sk: Optional[PrivateKey] = self.farmer.authentication_keys.get(bytes(authentication_pk))
if authentication_sk is None:
self.farmer.log.error(f"No authentication sk for {authentication_pk}")
return
authentication_signature = AugSchemeMPL.sign(authentication_sk, m_to_sign)
assert plot_signature is not None
agg_sig: G2Element = AugSchemeMPL.aggregate([plot_signature, authentication_signature])
post_partial_request: PostPartialRequest = PostPartialRequest(payload, agg_sig)
post_partial_body = json.dumps(post_partial_request.to_json_dict())
self.farmer.log.info(
f"Submitting partial for {post_partial_request.payload.launcher_id.hex()} to {pool_url}"
)
pool_state_dict["points_found_since_start"] += pool_state_dict["current_difficulty"]
pool_state_dict["points_found_24h"].append((time.time(), pool_state_dict["current_difficulty"]))
headers = {
"content-type": "application/json;",
}
try:
async with aiohttp.ClientSession() as session:
async with session.post(f"{pool_url}/partial", data=post_partial_body, headers=headers) as resp:
if resp.ok:
pool_response: Dict = json.loads(await resp.text())
self.farmer.log.info(f"Pool response: {pool_response}")
if "error_code" in pool_response:
self.farmer.log.error(
f"Error in pooling: "
f"{pool_response['error_code'], pool_response['error_message']}"
)
pool_state_dict["pool_errors_24h"].append(pool_response)
if pool_response["error_code"] == PoolErrorCode.PROOF_NOT_GOOD_ENOUGH.value:
self.farmer.log.error(
"Partial not good enough, forcing pool farmer update to "
"get our current difficulty."
)
pool_state_dict["next_farmer_update"] = 0
await self.farmer.update_pool_state()
else:
new_difficulty = pool_response["new_difficulty"]
pool_state_dict["points_acknowledged_since_start"] += new_difficulty
pool_state_dict["points_acknowledged_24h"].append((time.time(), new_difficulty))
pool_state_dict["current_difficulty"] = new_difficulty
else:
self.farmer.log.error(f"Error sending partial to {pool_url}, {resp.status}")
except Exception as e:
self.farmer.log.error(f"Error connecting to pool: {e}")
return
return
@api_request
async def respond_signatures(self, response: harvester_protocol.RespondSignatures):
@@ -134,6 +273,7 @@ class FarmerAPI:
if plot_identifier == response.plot_identifier:
pospace = candidate_pospace
assert pospace is not None
include_taproot: bool = pospace.pool_contract_puzzle_hash is not None
computed_quality_string = pospace.verify_and_get_quality_string(
self.farmer.constants, response.challenge_hash, response.sp_hash
@@ -151,15 +291,26 @@ class FarmerAPI:
for sk in self.farmer.get_private_keys():
pk = sk.get_g1()
if pk == response.farmer_pk:
agg_pk = ProofOfSpace.generate_plot_public_key(response.local_pk, pk)
agg_pk = ProofOfSpace.generate_plot_public_key(response.local_pk, pk, include_taproot)
assert agg_pk == pospace.plot_public_key
if include_taproot:
taproot_sk: PrivateKey = ProofOfSpace.generate_taproot_sk(response.local_pk, pk)
taproot_share_cc_sp: G2Element = AugSchemeMPL.sign(taproot_sk, challenge_chain_sp, agg_pk)
taproot_share_rc_sp: G2Element = AugSchemeMPL.sign(taproot_sk, reward_chain_sp, agg_pk)
else:
taproot_share_cc_sp = G2Element()
taproot_share_rc_sp = G2Element()
farmer_share_cc_sp = AugSchemeMPL.sign(sk, challenge_chain_sp, agg_pk)
agg_sig_cc_sp = AugSchemeMPL.aggregate([challenge_chain_sp_harv_sig, farmer_share_cc_sp])
agg_sig_cc_sp = AugSchemeMPL.aggregate(
[challenge_chain_sp_harv_sig, farmer_share_cc_sp, taproot_share_cc_sp]
)
assert AugSchemeMPL.verify(agg_pk, challenge_chain_sp, agg_sig_cc_sp)
# This means it passes the sp filter
farmer_share_rc_sp = AugSchemeMPL.sign(sk, reward_chain_sp, agg_pk)
agg_sig_rc_sp = AugSchemeMPL.aggregate([reward_chain_sp_harv_sig, farmer_share_rc_sp])
agg_sig_rc_sp = AugSchemeMPL.aggregate(
[reward_chain_sp_harv_sig, farmer_share_rc_sp, taproot_share_rc_sp]
)
assert AugSchemeMPL.verify(agg_pk, reward_chain_sp, agg_sig_rc_sp)
if pospace.pool_public_key is not None:
@@ -211,13 +362,30 @@ class FarmerAPI:
) = response.message_signatures[1]
pk = sk.get_g1()
if pk == response.farmer_pk:
agg_pk = ProofOfSpace.generate_plot_public_key(response.local_pk, pk)
agg_pk = ProofOfSpace.generate_plot_public_key(response.local_pk, pk, include_taproot)
assert agg_pk == pospace.plot_public_key
if include_taproot:
taproot_sk = ProofOfSpace.generate_taproot_sk(response.local_pk, pk)
foliage_sig_taproot: G2Element = AugSchemeMPL.sign(taproot_sk, foliage_block_data_hash, agg_pk)
foliage_transaction_block_sig_taproot: G2Element = AugSchemeMPL.sign(
taproot_sk, foliage_transaction_block_hash, agg_pk
)
else:
foliage_sig_taproot = G2Element()
foliage_transaction_block_sig_taproot = G2Element()
foliage_sig_farmer = AugSchemeMPL.sign(sk, foliage_block_data_hash, agg_pk)
foliage_transaction_block_sig_farmer = AugSchemeMPL.sign(sk, foliage_transaction_block_hash, agg_pk)
foliage_agg_sig = AugSchemeMPL.aggregate(
[foliage_sig_harvester, foliage_sig_farmer, foliage_sig_taproot]
)
foliage_block_agg_sig = AugSchemeMPL.aggregate(
[
foliage_transaction_block_sig_harvester,
foliage_transaction_block_sig_farmer,
foliage_transaction_block_sig_taproot,
]
)
assert AugSchemeMPL.verify(agg_pk, foliage_block_data_hash, foliage_agg_sig)
assert AugSchemeMPL.verify(agg_pk, foliage_transaction_block_hash, foliage_block_agg_sig)
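The hunks above change every two-share aggregation (harvester + farmer) into a three-share one by always passing a taproot share, which is the identity signature `G2Element()` for non-taproot plots. A minimal conceptual sketch of why that is safe, modeling signature shares as integers and aggregation as addition (NOT real BLS; `aggregate` and `IDENTITY` are stand-ins for `AugSchemeMPL.aggregate` and `G2Element()`):

```python
def aggregate(shares):
    """Stand-in for AugSchemeMPL.aggregate: combine partial signatures."""
    total = 0
    for s in shares:
        total += s
    return total

IDENTITY = 0  # stand-in for G2Element(), the identity signature

harvester_share = 11
farmer_share = 22
taproot_share = IDENTITY  # non-taproot plots contribute the identity share

# Aggregating with the identity gives the same result as the old two-share
# aggregation, so the code can unconditionally pass three shares.
assert aggregate([harvester_share, farmer_share, taproot_share]) == aggregate(
    [harvester_share, farmer_share]
)
```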
@ -237,12 +405,33 @@ class FarmerAPI:
@api_request
async def new_signage_point(self, new_signage_point: farmer_protocol.NewSignagePoint):
pool_difficulties: List[PoolDifficulty] = []
for p2_singleton_puzzle_hash, pool_dict in self.farmer.pool_state.items():
if pool_dict["pool_config"].pool_url == "":
# Self pooling
continue
if pool_dict["current_difficulty"] is None:
self.farmer.log.warning(
f"No pool specific difficulty has been set for {p2_singleton_puzzle_hash}, "
f"check communication with the pool, skipping this signage point, pool: "
f"{pool_dict['pool_config'].pool_url} "
)
continue
pool_difficulties.append(
PoolDifficulty(
pool_dict["current_difficulty"],
self.farmer.constants.POOL_SUB_SLOT_ITERS,
p2_singleton_puzzle_hash,
)
)
message = harvester_protocol.NewSignagePointHarvester(
new_signage_point.challenge_hash,
new_signage_point.difficulty,
new_signage_point.sub_slot_iters,
new_signage_point.signage_point_index,
new_signage_point.challenge_chain_sp,
pool_difficulties,
)
msg = make_msg(ProtocolMessageTypes.new_signage_point_harvester, message)
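The loop above collects a pool-specific difficulty per p2-singleton puzzle hash, skipping self-pooling configs (empty pool URL) and pools that have not reported a difficulty yet. A hedged sketch of that selection, flattening `pool_dict["pool_config"].pool_url` to a plain dict key and modeling `PoolDifficulty` as a tuple (the constant value is illustrative only):

```python
POOL_SUB_SLOT_ITERS = 37_600_000_000  # assumed constant, for illustration only

def build_pool_difficulties(pool_state: dict) -> list:
    pool_difficulties = []
    for p2_singleton_puzzle_hash, pool_dict in pool_state.items():
        if pool_dict["pool_url"] == "":
            continue  # self pooling: no pool-specific difficulty applies
        if pool_dict["current_difficulty"] is None:
            continue  # pool has not communicated a difficulty yet; skip
        pool_difficulties.append(
            (pool_dict["current_difficulty"], POOL_SUB_SLOT_ITERS, p2_singleton_puzzle_hash)
        )
    return pool_difficulties

state = {
    b"ph1": {"pool_url": "", "current_difficulty": 1},                 # self pooling
    b"ph2": {"pool_url": "https://pool", "current_difficulty": None},  # unknown yet
    b"ph3": {"pool_url": "https://pool", "current_difficulty": 8},
}
assert build_pool_difficulties(state) == [(8, POOL_SUB_SLOT_ITERS, b"ph3")]
```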
@ -291,3 +480,7 @@ class FarmerAPI:
}
},
)
@api_request
async def respond_plots(self, _: harvester_protocol.RespondPlots):
self.farmer.log.warning("Respond plots came too late")


@ -101,6 +101,7 @@ class FullNode:
self.sync_store = None
self.signage_point_times = [time.time() for _ in range(self.constants.NUM_SPS_SUB_SLOT)]
self.full_node_store = FullNodeStore(self.constants)
self.uncompact_task = None
self.log = logging.getLogger(name if name else __name__)
@ -148,7 +149,6 @@ class FullNode:
assert len(pending_tx) == 0 # no pending transactions when starting up
peak: Optional[BlockRecord] = self.blockchain.get_peak()
if peak is not None:
full_peak = await self.blockchain.get_full_peak()
await self.peak_post_processing(full_peak, peak, max(peak.height - 1, 0), None)


@ -23,6 +23,7 @@ from chia.types.full_block import FullBlock
from chia.types.generator_types import CompressorArg
from chia.types.unfinished_block import UnfinishedBlock
from chia.util.ints import uint8, uint32, uint64, uint128
from chia.util.lru_cache import LRUCache
log = logging.getLogger(__name__)
@ -61,6 +62,10 @@ class FullNodeStore:
# This stores the time that each key was added to the future cache, so we can clear old keys
future_cache_key_times: Dict[bytes32, int]
# These recent caches are for pooling support
recent_signage_points: LRUCache
recent_eos: LRUCache
# Partial hashes of unfinished blocks we are requesting
requesting_unfinished_blocks: Set[bytes32]
@ -80,6 +85,8 @@ class FullNodeStore:
self.future_eos_cache = {}
self.future_sp_cache = {}
self.future_ip_cache = {}
self.recent_signage_points = LRUCache(500)
self.recent_eos = LRUCache(50)
self.requesting_unfinished_blocks = set()
self.previous_generator = None
self.future_cache_key_times = {}
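The new `recent_signage_points` and `recent_eos` caches are bounded `LRUCache` instances (500 and 50 entries) keyed by hash, holding `(object, time)` pairs. A minimal sketch of the assumed LRU semantics the store relies on (fixed capacity, least-recently-used entry evicted on overflow), built on `OrderedDict`:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

recent_eos = LRUCache(2)
recent_eos.put(b"a", 1)
recent_eos.put(b"b", 2)
recent_eos.put(b"c", 3)  # capacity 2: evicts b"a"
assert recent_eos.get(b"a") is None
assert recent_eos.get(b"c") == 3
```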
@ -426,6 +433,9 @@ class FullNodeStore:
self.finished_sub_slots.append((eos, [None] * self.constants.NUM_SPS_SUB_SLOT, total_iters))
new_cc_hash = eos.challenge_chain.get_hash()
self.recent_eos.put(new_cc_hash, (eos, time.time()))
new_ips: List[timelord_protocol.NewInfusionPointVDF] = []
for ip in self.future_ip_cache.get(eos.reward_chain.get_hash(), []):
new_ips.append(ip)
@ -566,6 +576,7 @@ class FullNodeStore:
return False
sp_arr[index] = signage_point
self.recent_signage_points.put(signage_point.cc_vdf.output.get_hash(), (signage_point, time.time()))
return True
self.add_to_future_sp(signage_point, index)
return False
@ -728,6 +739,10 @@ class FullNodeStore:
self.future_sp_cache.pop(peak.reward_infusion_new_challenge, [])
self.future_ip_cache.pop(peak.reward_infusion_new_challenge, [])
for eos_op, _, _ in self.finished_sub_slots:
if eos_op is not None:
self.recent_eos.put(eos_op.challenge_chain.get_hash(), (eos_op, time.time()))
return new_eos, new_sps, new_ips
def get_finished_sub_slots(


@ -21,7 +21,7 @@ log = logging.getLogger(__name__)
def create_block_generator(
generator: SerializedProgram, block_heights_list: List[uint32], generator_block_cache: GeneratorBlockCacheInterface
) -> Optional[BlockGenerator]:
""" `create_block_generator` will returns None if it fails to look up any referenced block """
"""`create_block_generator` will returns None if it fails to look up any referenced block"""
generator_arg_list: List[GeneratorArg] = []
for i in block_heights_list:
previous_generator = generator_block_cache.get_generator_for_block_height(i)


@ -1,3 +1,4 @@
import logging
import time
from typing import Tuple, Dict, List, Optional, Set
from clvm import SExp
@ -27,17 +28,22 @@ def mempool_assert_announcement(condition: ConditionWithArgs, announcements: Set
Check if an announcement is included in the list of announcements
"""
announcement_hash = bytes32(condition.vars[0])
if announcement_hash not in announcements:
return Err.ASSERT_ANNOUNCE_CONSUMED_FAILED
return None
log = logging.getLogger(__name__)
def mempool_assert_my_coin_id(condition: ConditionWithArgs, unspent: CoinRecord) -> Optional[Err]:
"""
Checks if CoinID matches the id from the condition
"""
if unspent.coin.name() != condition.vars[0]:
log.warning(f"My name: {unspent.coin.name()} got: {condition.vars[0].hex()}")
return Err.ASSERT_MY_COIN_ID_FAILED
return None


@ -300,6 +300,11 @@ class MempoolManager:
for name in removal_names:
removal_record = await self.coin_store.get_coin_record(name)
if removal_record is None and name not in additions_dict:
log.error(
"MempoolInclusionStatus.FAILED, Err.UNKNOWN_UNSPENT:\n"
f"COIN ID: {name}\nNPC RESULT: {npc_result}\nSPEND: {new_spend}"
)
new_spend.debug()
return None, MempoolInclusionStatus.FAILED, Err.UNKNOWN_UNSPENT
elif name in additions_dict:
removal_coin = additions_dict[name]


@ -84,7 +84,8 @@ class Harvester:
{
"filename": str(path),
"size": prover.get_size(),
"plot-seed": prover.get_id(),
"plot-seed": prover.get_id(), # Deprecated
"plot_id": prover.get_id(),
"pool_public_key": plot_info.pool_public_key,
"pool_contract_puzzle_hash": plot_info.pool_contract_puzzle_hash,
"plot_public_key": plot_info.plot_public_key,


@ -3,13 +3,14 @@ import time
from pathlib import Path
from typing import Callable, List, Tuple
from blspy import AugSchemeMPL, G2Element, G1Element
from chia.consensus.pot_iterations import calculate_iterations_quality, calculate_sp_interval_iters
from chia.harvester.harvester import Harvester
from chia.plotting.plot_tools import PlotInfo, parse_plot_info
from chia.protocols import harvester_protocol
from chia.protocols.farmer_protocol import FarmingInfo
from chia.protocols.harvester_protocol import Plot
from chia.protocols.protocol_message_types import ProtocolMessageTypes
from chia.server.outbound_message import make_msg
from chia.server.ws_connection import WSChiaConnection
@ -98,18 +99,27 @@ class HarvesterAPI:
responses: List[Tuple[bytes32, ProofOfSpace]] = []
if quality_strings is not None:
difficulty = new_challenge.difficulty
sub_slot_iters = new_challenge.sub_slot_iters
if plot_info.pool_contract_puzzle_hash is not None:
# If we are pooling, override the difficulty and sub slot iters with the pool threshold info.
# This will mean more proofs actually get found, but they are only submitted to the pool,
# not the blockchain
for pool_difficulty in new_challenge.pool_difficulties:
if pool_difficulty.pool_contract_puzzle_hash == plot_info.pool_contract_puzzle_hash:
difficulty = pool_difficulty.difficulty
sub_slot_iters = pool_difficulty.sub_slot_iters
# Found proofs of space (on average 1 is expected per plot)
for index, quality_str in enumerate(quality_strings):
required_iters: uint64 = calculate_iterations_quality(
self.harvester.constants.DIFFICULTY_CONSTANT_FACTOR,
quality_str,
plot_info.prover.get_size(),
difficulty,
new_challenge.sp_hash,
)
sp_interval_iters = calculate_sp_interval_iters(self.harvester.constants, sub_slot_iters)
if required_iters < sp_interval_iters:
# Found a very good proof of space! will fetch the whole proof from disk,
# then send to farmer
@ -130,8 +140,9 @@ class HarvesterAPI:
local_master_sk,
) = parse_plot_info(plot_info.prover.get_memo())
local_sk = master_sk_to_local_sk(local_master_sk)
include_taproot = plot_info.pool_contract_puzzle_hash is not None
plot_public_key = ProofOfSpace.generate_plot_public_key(
local_sk.get_g1(), farmer_public_key, include_taproot
)
responses.append(
(
@ -251,7 +262,13 @@ class HarvesterAPI:
) = parse_plot_info(plot_info.prover.get_memo())
local_sk = master_sk_to_local_sk(local_master_sk)
if isinstance(pool_public_key_or_puzzle_hash, G1Element):
include_taproot = False
else:
assert isinstance(pool_public_key_or_puzzle_hash, bytes32)
include_taproot = True
agg_pk = ProofOfSpace.generate_plot_public_key(local_sk.get_g1(), farmer_public_key, include_taproot)
# This is only a partial signature. When combined with the farmer's half, it will
# form a complete PrependSignature.
@ -270,3 +287,24 @@ class HarvesterAPI:
)
return make_msg(ProtocolMessageTypes.respond_signatures, response)
@api_request
async def request_plots(self, _: harvester_protocol.RequestPlots):
plots_response = []
plots, failed_to_open_filenames, no_key_filenames = self.harvester.get_plots()
for plot in plots:
plots_response.append(
Plot(
plot["filename"],
plot["size"],
plot["plot_id"],
plot["pool_public_key"],
plot["pool_contract_puzzle_hash"],
plot["plot_public_key"],
plot["file_size"],
plot["time_modified"],
)
)
response = harvester_protocol.RespondPlots(plots_response, failed_to_open_filenames, no_key_filenames)
return make_msg(ProtocolMessageTypes.respond_plots, response)


@ -113,7 +113,11 @@ def create_plots(args, root_path, use_datetime=True, test_private_keys: Optional
sk = AugSchemeMPL.key_gen(token_bytes(32))
# The plot public key is the combination of the harvester and farmer keys
# New plots will also include a taproot of the keys, for extensibility
include_taproot: bool = pool_contract_puzzle_hash is not None
plot_public_key = ProofOfSpace.generate_plot_public_key(
master_sk_to_local_sk(sk).get_g1(), farmer_public_key, include_taproot
)
# The plot id is based on the harvester, farmer, and pool keys
if pool_public_key is not None:


@ -234,7 +234,10 @@ def load_plots(
stat_info = filename.stat()
local_sk = master_sk_to_local_sk(local_master_sk)
plot_public_key: G1Element = ProofOfSpace.generate_plot_public_key(
local_sk.get_g1(), farmer_public_key, pool_contract_puzzle_hash is not None
)
with plot_ids_lock:
if prover.get_id() in plot_ids:

chia/pools/__init__.py (new, empty file)

chia/pools/pool_config.py (new file, 66 lines)

@ -0,0 +1,66 @@
import logging
from dataclasses import dataclass
from pathlib import Path
from typing import List
from blspy import G1Element
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.byte_types import hexstr_to_bytes
from chia.util.config import load_config, save_config
from chia.util.streamable import Streamable, streamable
"""
Config example
This is what goes into the user's config file, to communicate between the wallet and the farmer processes.
pool_list:
launcher_id: ae4ef3b9bfe68949691281a015a9c16630fc8f66d48c19ca548fb80768791afa
authentication_public_key: 970e181ae45435ae696508a78012dc80548c334cf29676ea6ade7049eb9d2b9579cc30cb44c3fd68d35a250cfbc69e29
owner_public_key: 84c3fcf9d5581c1ddc702cb0f3b4a06043303b334dd993ab42b2c320ebfa98e5ce558448615b3f69638ba92cf7f43da5
payout_instructions: c2b08e41d766da4116e388357ed957d04ad754623a915f3fd65188a8746cf3e8
pool_url: localhost
p2_singleton_puzzle_hash: 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
target_puzzle_hash: 344587cf06a39db471d2cc027504e8688a0a67cce961253500c956c73603fd58
""" # noqa
log = logging.getLogger(__name__)
@dataclass(frozen=True)
@streamable
class PoolWalletConfig(Streamable):
launcher_id: bytes32
pool_url: str
payout_instructions: str
target_puzzle_hash: bytes32
p2_singleton_puzzle_hash: bytes32
owner_public_key: G1Element
authentication_public_key: G1Element
def load_pool_config(root_path: Path) -> List[PoolWalletConfig]:
config = load_config(root_path, "config.yaml")
ret_list: List[PoolWalletConfig] = []
if "pool_list" in config["pool"]:
for pool_config_dict in config["pool"]["pool_list"]:
try:
pool_config = PoolWalletConfig(
hexstr_to_bytes(pool_config_dict["launcher_id"]),
pool_config_dict["pool_url"],
pool_config_dict["payout_instructions"],
hexstr_to_bytes(pool_config_dict["target_puzzle_hash"]),
hexstr_to_bytes(pool_config_dict["p2_singleton_puzzle_hash"]),
G1Element.from_bytes(hexstr_to_bytes(pool_config_dict["owner_public_key"])),
G1Element.from_bytes(hexstr_to_bytes(pool_config_dict["authentication_public_key"])),
)
ret_list.append(pool_config)
except Exception as e:
log.error(f"Exception loading config: {pool_config_dict} {e}")
return ret_list
async def update_pool_config(root_path: Path, pool_config_list: List[PoolWalletConfig]):
full_config = load_config(root_path, "config.yaml")
full_config["pool"]["pool_list"] = [c.to_json_dict() for c in pool_config_list]
save_config(root_path, "config.yaml", full_config)

chia/pools/pool_puzzles.py (new file, 430 lines)

@ -0,0 +1,430 @@
import logging
from typing import Tuple, List, Optional
from blspy import G1Element
from clvm.casts import int_from_bytes, int_to_bytes
from chia.clvm.singleton import SINGLETON_LAUNCHER
from chia.consensus.block_rewards import calculate_pool_reward
from chia.consensus.coinbase import pool_parent_id
from chia.pools.pool_wallet_info import PoolState, LEAVING_POOL, SELF_POOLING
from chia.types.blockchain_format.coin import Coin
from chia.types.blockchain_format.program import Program, SerializedProgram
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.types.coin_solution import CoinSolution
from chia.wallet.puzzles.load_clvm import load_clvm
from chia.wallet.puzzles.singleton_top_layer import puzzle_for_singleton
from chia.util.ints import uint32, uint64
log = logging.getLogger(__name__)
# "Full" is the outer singleton, with the inner puzzle filled in
SINGLETON_MOD = load_clvm("singleton_top_layer.clvm")
POOL_WAITING_ROOM_MOD = load_clvm("pool_waitingroom_innerpuz.clvm")
POOL_MEMBER_MOD = load_clvm("pool_member_innerpuz.clvm")
P2_SINGLETON_MOD = load_clvm("p2_singleton_or_delayed_puzhash.clvm")
POOL_OUTER_MOD = SINGLETON_MOD
POOL_MEMBER_HASH = POOL_MEMBER_MOD.get_tree_hash()
POOL_WAITING_ROOM_HASH = POOL_WAITING_ROOM_MOD.get_tree_hash()
P2_SINGLETON_HASH = P2_SINGLETON_MOD.get_tree_hash()
POOL_OUTER_MOD_HASH = POOL_OUTER_MOD.get_tree_hash()
SINGLETON_LAUNCHER_HASH = SINGLETON_LAUNCHER.get_tree_hash()
SINGLETON_MOD_HASH = POOL_OUTER_MOD_HASH
SINGLETON_MOD_HASH_HASH = Program.to(SINGLETON_MOD_HASH).get_tree_hash()
def create_waiting_room_inner_puzzle(
target_puzzle_hash: bytes32,
relative_lock_height: uint32,
owner_pubkey: G1Element,
launcher_id: bytes32,
genesis_challenge: bytes32,
delay_time: uint64,
delay_ph: bytes32,
) -> Program:
pool_reward_prefix = bytes32(genesis_challenge[:16] + b"\x00" * 16)
p2_singleton_puzzle_hash: bytes32 = launcher_id_to_p2_puzzle_hash(launcher_id, delay_time, delay_ph)
return POOL_WAITING_ROOM_MOD.curry(
target_puzzle_hash, p2_singleton_puzzle_hash, bytes(owner_pubkey), pool_reward_prefix, relative_lock_height
)
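The `pool_reward_prefix` curried in above is the first 16 bytes of the genesis challenge padded with 16 zero bytes; the puzzle uses it to check that a claimed reward coin's parent really is a pool reward on this network. A self-contained sketch of just that computation (the sample challenge is a placeholder):

```python
def pool_reward_prefix(genesis_challenge: bytes) -> bytes:
    # bytes32(genesis_challenge[:16] + b"\x00" * 16) from the code above
    assert len(genesis_challenge) == 32
    return genesis_challenge[:16] + b"\x00" * 16

genesis = bytes(range(32))  # placeholder challenge, for illustration only
prefix = pool_reward_prefix(genesis)
assert len(prefix) == 32
assert prefix[:16] == genesis[:16]
assert prefix[16:] == b"\x00" * 16
```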
def create_pooling_inner_puzzle(
target_puzzle_hash: bytes,
pool_waiting_room_inner_hash: bytes32,
owner_pubkey: G1Element,
launcher_id: bytes32,
genesis_challenge: bytes32,
delay_time: uint64,
delay_ph: bytes32,
) -> Program:
pool_reward_prefix = bytes32(genesis_challenge[:16] + b"\x00" * 16)
p2_singleton_puzzle_hash: bytes32 = launcher_id_to_p2_puzzle_hash(launcher_id, delay_time, delay_ph)
return POOL_MEMBER_MOD.curry(
target_puzzle_hash,
p2_singleton_puzzle_hash,
bytes(owner_pubkey),
pool_reward_prefix,
pool_waiting_room_inner_hash,
)
def create_full_puzzle(inner_puzzle: Program, launcher_id: bytes32) -> Program:
return puzzle_for_singleton(launcher_id, inner_puzzle)
def create_p2_singleton_puzzle(
singleton_mod_hash: bytes,
launcher_id: bytes32,
seconds_delay: uint64,
delayed_puzzle_hash: bytes32,
) -> Program:
# curry params are SINGLETON_MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH SECONDS_DELAY DELAYED_PUZZLE_HASH
return P2_SINGLETON_MOD.curry(
singleton_mod_hash, launcher_id, SINGLETON_LAUNCHER_HASH, seconds_delay, delayed_puzzle_hash
)
def launcher_id_to_p2_puzzle_hash(launcher_id: bytes32, seconds_delay: uint64, delayed_puzzle_hash: bytes32) -> bytes32:
return create_p2_singleton_puzzle(
SINGLETON_MOD_HASH, launcher_id, int_to_bytes(seconds_delay), delayed_puzzle_hash
).get_tree_hash()
def get_delayed_puz_info_from_launcher_spend(coinsol: CoinSolution) -> Tuple[uint64, bytes32]:
extra_data = Program.from_bytes(bytes(coinsol.solution)).rest().rest().first()
# Extra data is (pool_state delayed_puz_info)
# Delayed puz info is (seconds delayed_puzzle_hash)
seconds: Optional[uint64] = None
delayed_puzzle_hash: Optional[bytes32] = None
for key, value in extra_data.as_python():
if key == b"t":
seconds = int_from_bytes(value)
if key == b"h":
delayed_puzzle_hash = bytes32(value)
assert seconds is not None
assert delayed_puzzle_hash is not None
return seconds, delayed_puzzle_hash
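The launcher solution's extra data is a key/value list in which `b"t"` maps to the delay time and `b"h"` to the delayed puzzle hash. A sketch of the parse above, modeling the CLVM structure as a Python list of byte pairs rather than a `Program`:

```python
def parse_delay_info(extra_data: list) -> tuple:
    seconds = None
    delayed_puzzle_hash = None
    for key, value in extra_data:
        if key == b"t":
            seconds = int.from_bytes(value, "big")  # stand-in for int_from_bytes
        if key == b"h":
            delayed_puzzle_hash = value
    # Both entries must be present, as in get_delayed_puz_info_from_launcher_spend
    assert seconds is not None
    assert delayed_puzzle_hash is not None
    return seconds, delayed_puzzle_hash

extra = [(b"t", (604800).to_bytes(4, "big")), (b"h", b"\x11" * 32)]
assert parse_delay_info(extra) == (604800, b"\x11" * 32)
```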
######################################
def get_template_singleton_inner_puzzle(inner_puzzle: Program):
r = inner_puzzle.uncurry()
if r is None:
return False
uncurried_inner_puzzle, args = r
return uncurried_inner_puzzle
def get_seconds_and_delayed_puzhash_from_p2_singleton_puzzle(puzzle: Program) -> Tuple[uint64, bytes32]:
r = puzzle.uncurry()
if r is None:
return False
inner_f, args = r
singleton_mod_hash, launcher_id, launcher_puzzle_hash, seconds_delay, delayed_puzzle_hash = list(args.as_iter())
seconds_delay = uint64(seconds_delay.as_int())
return seconds_delay, delayed_puzzle_hash.as_atom()
# Verify that a puzzle is a Pool Wallet Singleton
def is_pool_singleton_inner_puzzle(inner_puzzle: Program) -> bool:
inner_f = get_template_singleton_inner_puzzle(inner_puzzle)
return inner_f in [POOL_WAITING_ROOM_MOD, POOL_MEMBER_MOD]
def is_pool_waitingroom_inner_puzzle(inner_puzzle: Program) -> bool:
inner_f = get_template_singleton_inner_puzzle(inner_puzzle)
return inner_f in [POOL_WAITING_ROOM_MOD]
def is_pool_member_inner_puzzle(inner_puzzle: Program) -> bool:
inner_f = get_template_singleton_inner_puzzle(inner_puzzle)
return inner_f in [POOL_MEMBER_MOD]
# This spend will use the escape-type spend path for whichever state you are currently in
# If you are currently a waiting inner puzzle, then it will look at your target_state to determine the next
# inner puzzle hash to go to. The member inner puzzle is already committed to its next puzzle hash.
def create_travel_spend(
last_coin_solution: CoinSolution,
launcher_coin: Coin,
current: PoolState,
target: PoolState,
genesis_challenge: bytes32,
delay_time: uint64,
delay_ph: bytes32,
) -> Tuple[CoinSolution, Program]:
inner_puzzle: Program = pool_state_to_inner_puzzle(
current,
launcher_coin.name(),
genesis_challenge,
delay_time,
delay_ph,
)
if is_pool_member_inner_puzzle(inner_puzzle):
# inner sol is key_value_list ()
# key_value_list is:
# "ps" -> poolstate as bytes
inner_sol: Program = Program.to([[("p", bytes(target))], 0])
elif is_pool_waitingroom_inner_puzzle(inner_puzzle):
# inner sol is (spend_type, key_value_list, pool_reward_height)
destination_inner: Program = pool_state_to_inner_puzzle(
target, launcher_coin.name(), genesis_challenge, delay_time, delay_ph
)
log.warning(
f"create_travel_spend: waitingroom: target PoolState bytes:\n{bytes(target).hex()}\n"
f"{target}"
f"hash:{Program.to(bytes(target)).get_tree_hash()}"
)
# key_value_list is:
# "ps" -> poolstate as bytes
inner_sol = Program.to([1, [("p", bytes(target))], destination_inner.get_tree_hash()]) # current or target
else:
raise ValueError
current_singleton: Optional[Coin] = get_most_recent_singleton_coin_from_coin_solution(last_coin_solution)
assert current_singleton is not None
if current_singleton.parent_coin_info == launcher_coin.name():
parent_info_list = Program.to([launcher_coin.parent_coin_info, launcher_coin.amount])
else:
p = Program.from_bytes(bytes(last_coin_solution.puzzle_reveal))
last_coin_solution_inner_puzzle: Optional[Program] = get_inner_puzzle_from_puzzle(p)
assert last_coin_solution_inner_puzzle is not None
parent_info_list = Program.to(
[
last_coin_solution.coin.parent_coin_info,
last_coin_solution_inner_puzzle.get_tree_hash(),
last_coin_solution.coin.amount,
]
)
full_solution: Program = Program.to([parent_info_list, current_singleton.amount, inner_sol])
full_puzzle: Program = create_full_puzzle(inner_puzzle, launcher_coin.name())
return (
CoinSolution(
current_singleton,
SerializedProgram.from_program(full_puzzle),
SerializedProgram.from_program(full_solution),
),
inner_puzzle,
)
def create_absorb_spend(
last_coin_solution: CoinSolution,
current_state: PoolState,
launcher_coin: Coin,
height: uint32,
genesis_challenge: bytes32,
delay_time: uint64,
delay_ph: bytes32,
) -> List[CoinSolution]:
inner_puzzle: Program = pool_state_to_inner_puzzle(
current_state, launcher_coin.name(), genesis_challenge, delay_time, delay_ph
)
reward_amount: uint64 = calculate_pool_reward(height)
if is_pool_member_inner_puzzle(inner_puzzle):
# inner sol is (spend_type, pool_reward_amount, pool_reward_height, extra_data)
inner_sol: Program = Program.to([reward_amount, height])
elif is_pool_waitingroom_inner_puzzle(inner_puzzle):
# inner sol is (spend_type, destination_puzhash, pool_reward_amount, pool_reward_height, extra_data)
inner_sol = Program.to([0, reward_amount, height])
else:
raise ValueError
# full sol = (parent_info, my_amount, inner_solution)
coin: Optional[Coin] = get_most_recent_singleton_coin_from_coin_solution(last_coin_solution)
assert coin is not None
if coin.parent_coin_info == launcher_coin.name():
parent_info: Program = Program.to([launcher_coin.parent_coin_info, launcher_coin.amount])
else:
p = Program.from_bytes(bytes(last_coin_solution.puzzle_reveal))
last_coin_solution_inner_puzzle: Optional[Program] = get_inner_puzzle_from_puzzle(p)
assert last_coin_solution_inner_puzzle is not None
parent_info = Program.to(
[
last_coin_solution.coin.parent_coin_info,
last_coin_solution_inner_puzzle.get_tree_hash(),
last_coin_solution.coin.amount,
]
)
full_solution: SerializedProgram = SerializedProgram.from_program(
Program.to([parent_info, last_coin_solution.coin.amount, inner_sol])
)
full_puzzle: SerializedProgram = SerializedProgram.from_program(
create_full_puzzle(inner_puzzle, launcher_coin.name())
)
assert coin.puzzle_hash == full_puzzle.get_tree_hash()
reward_parent: bytes32 = pool_parent_id(height, genesis_challenge)
p2_singleton_puzzle: SerializedProgram = SerializedProgram.from_program(
create_p2_singleton_puzzle(SINGLETON_MOD_HASH, launcher_coin.name(), delay_time, delay_ph)
)
reward_coin: Coin = Coin(reward_parent, p2_singleton_puzzle.get_tree_hash(), reward_amount)
p2_singleton_solution: SerializedProgram = SerializedProgram.from_program(
Program.to([inner_puzzle.get_tree_hash(), reward_coin.name()])
)
assert p2_singleton_puzzle.get_tree_hash() == reward_coin.puzzle_hash
assert full_puzzle.get_tree_hash() == coin.puzzle_hash
assert get_inner_puzzle_from_puzzle(Program.from_bytes(bytes(full_puzzle))) is not None
coin_solutions = [
CoinSolution(coin, full_puzzle, full_solution),
CoinSolution(reward_coin, p2_singleton_puzzle, p2_singleton_solution),
]
return coin_solutions
def get_most_recent_singleton_coin_from_coin_solution(coin_sol: CoinSolution) -> Optional[Coin]:
additions: List[Coin] = coin_sol.additions()
for coin in additions:
if coin.amount % 2 == 1:
return coin
return None
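Singleton coins always carry an odd mojo amount, which is how the function above picks the new singleton out of a spend's additions. A tiny sketch with `(name, amount)` tuples standing in for `Coin` objects:

```python
def most_recent_singleton(additions: list):
    # Return the first addition with an odd amount, as in
    # get_most_recent_singleton_coin_from_coin_solution above.
    for name, amount in additions:
        if amount % 2 == 1:
            return name
    return None

additions = [(b"change", 1000), (b"singleton", 1), (b"reward", 1_750_000_000_000)]
assert most_recent_singleton(additions) == b"singleton"
assert most_recent_singleton([(b"even", 2)]) is None
```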
def get_pubkey_from_member_inner_puzzle(inner_puzzle: Program) -> G1Element:
args = uncurry_pool_member_inner_puzzle(inner_puzzle)
if args is not None:
(
_inner_f,
_target_puzzle_hash,
_p2_singleton_hash,
pubkey_program,
_pool_reward_prefix,
_escape_puzzlehash,
) = args
else:
raise ValueError("Unable to extract pubkey")
pubkey = G1Element.from_bytes(pubkey_program.as_atom())
return pubkey
def uncurry_pool_member_inner_puzzle(inner_puzzle: Program): # -> Optional[Tuple[Program, Program, Program]]:
"""
Take a puzzle and return `None` if it's not a "pool member" inner puzzle, or
a triple of `mod_hash, relative_lock_height, pubkey` if it is.
"""
if not is_pool_member_inner_puzzle(inner_puzzle):
raise ValueError("Attempting to unpack a non-waitingroom inner puzzle")
r = inner_puzzle.uncurry()
if r is None:
raise ValueError("Failed to unpack inner puzzle")
inner_f, args = r
# p2_singleton_hash is the tree hash of the unique, curried P2_SINGLETON_MOD. See `create_p2_singleton_puzzle`
# escape_puzzlehash is of the unique, curried POOL_WAITING_ROOM_MOD. See `create_waiting_room_inner_puzzle`
target_puzzle_hash, p2_singleton_hash, owner_pubkey, pool_reward_prefix, escape_puzzlehash = tuple(args.as_iter())
return inner_f, target_puzzle_hash, p2_singleton_hash, owner_pubkey, pool_reward_prefix, escape_puzzlehash
def uncurry_pool_waitingroom_inner_puzzle(inner_puzzle: Program) -> Tuple[Program, Program, Program, Program]:
"""
Take a puzzle and return `None` if it's not a "pool member" inner puzzle, or
a triple of `mod_hash, relative_lock_height, pubkey` if it is.
"""
if not is_pool_waitingroom_inner_puzzle(inner_puzzle):
raise ValueError("Attempting to unpack a non-waitingroom inner puzzle")
r = inner_puzzle.uncurry()
if r is None:
raise ValueError("Failed to unpack inner puzzle")
inner_f, args = r
v = args.as_iter()
target_puzzle_hash, p2_singleton_hash, owner_pubkey, genesis_challenge, relative_lock_height = tuple(v)
return target_puzzle_hash, relative_lock_height, owner_pubkey, p2_singleton_hash
def get_inner_puzzle_from_puzzle(full_puzzle: Program) -> Optional[Program]:
p = Program.from_bytes(bytes(full_puzzle))
r = p.uncurry()
if r is None:
return None
_, args = r
_, inner_puzzle = list(args.as_iter())
if not is_pool_singleton_inner_puzzle(inner_puzzle):
return None
return inner_puzzle
def pool_state_from_extra_data(extra_data: Program) -> Optional[PoolState]:
state_bytes: Optional[bytes] = None
try:
for key, value in extra_data.as_python():
if key == b"p":
state_bytes = value
break
if state_bytes is None:
return None
return PoolState.from_bytes(state_bytes)
except TypeError as e:
log.error(f"Unexpected return from PoolWallet Smart Contract code {e}")
return None
def solution_to_extra_data(full_spend: CoinSolution) -> Optional[PoolState]:
full_solution_ser: SerializedProgram = full_spend.solution
full_solution: Program = Program.from_bytes(bytes(full_solution_ser))
if full_spend.coin.puzzle_hash == SINGLETON_LAUNCHER_HASH:
# Launcher spend
extra_data: Program = full_solution.rest().rest().first()
return pool_state_from_extra_data(extra_data)
# Not launcher spend
inner_solution: Program = full_solution.rest().rest().first()
# Spend which is not absorb, and is not the launcher
num_args = len(inner_solution.as_python())
assert num_args in (2, 3)
if num_args == 2:
# pool member
if inner_solution.rest().first().as_int() != 0:
return None
# This is referred to as p1 in the chialisp code
# spend_type is absorbing money if p1 is a cons box, spend_type is escape if p1 is an atom
# TODO: The comment above, and in the CLVM, seems wrong
extra_data = inner_solution.first()
if isinstance(extra_data.as_python(), bytes):
# Absorbing
return None
return pool_state_from_extra_data(extra_data)
else:
# pool waitingroom
if inner_solution.first().as_int() == 0:
return None
extra_data = inner_solution.rest().first()
return pool_state_from_extra_data(extra_data)
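The branching in `solution_to_extra_data` distinguishes a 2-argument inner solution (pool member) from a 3-argument one (waiting room), and filters out absorb spends, which carry no pool state. A sketch of that classification with solutions modeled as Python lists (bytes atoms for CLVM atoms, nested lists for cons boxes):

```python
def extract_state_args(inner_solution: list):
    num_args = len(inner_solution)
    assert num_args in (2, 3)
    if num_args == 2:  # pool member inner solution
        if inner_solution[1] != 0:
            return None  # not a travel spend
        extra_data = inner_solution[0]
        if isinstance(extra_data, bytes):
            return None  # absorbing: p1 is an atom, not a key/value list
        return extra_data
    # waiting room inner solution: (spend_type, ...)
    if inner_solution[0] == 0:
        return None  # absorb spend
    return inner_solution[1]

member_travel = [[(b"p", b"\x01")], 0]
assert extract_state_args(member_travel) == [(b"p", b"\x01")]
assert extract_state_args([b"absorb-atom", 0]) is None          # member absorb
assert extract_state_args([0, b"amount", b"height"]) is None    # waitingroom absorb
assert extract_state_args([1, [(b"p", b"\x02")], b"dest"]) == [(b"p", b"\x02")]
```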
def pool_state_to_inner_puzzle(
pool_state: PoolState, launcher_id: bytes32, genesis_challenge: bytes32, delay_time: uint64, delay_ph: bytes32
) -> Program:
escaping_inner_puzzle: Program = create_waiting_room_inner_puzzle(
pool_state.target_puzzle_hash,
pool_state.relative_lock_height,
pool_state.owner_pubkey,
launcher_id,
genesis_challenge,
delay_time,
delay_ph,
)
if pool_state.state in [LEAVING_POOL, SELF_POOLING]:
return escaping_inner_puzzle
else:
return create_pooling_inner_puzzle(
pool_state.target_puzzle_hash,
escaping_inner_puzzle.get_tree_hash(),
pool_state.owner_pubkey,
launcher_id,
genesis_challenge,
delay_time,
delay_ph,
)
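`pool_state_to_inner_puzzle` above maps LEAVING_POOL and SELF_POOLING to the waiting-room ("escaping") inner puzzle, and any other state (e.g. FARMING_TO_POOL) to the pool member puzzle, which itself commits to the waiting-room puzzle hash as its escape path. A sketch of just the state mapping (the numeric enum values here are stand-ins, not taken from the source):

```python
# Assumed stand-ins for the PoolSingletonState enum values
SELF_POOLING, FARMING_TO_POOL, LEAVING_POOL = 1, 3, 4

def inner_puzzle_kind(state: int) -> str:
    # Mirrors the branch in pool_state_to_inner_puzzle
    if state in (LEAVING_POOL, SELF_POOLING):
        return "waiting_room"
    return "member"

assert inner_puzzle_kind(SELF_POOLING) == "waiting_room"
assert inner_puzzle_kind(LEAVING_POOL) == "waiting_room"
assert inner_puzzle_kind(FARMING_TO_POOL) == "member"
```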

chia/pools/pool_wallet.py (new file, 869 lines)

@ -0,0 +1,869 @@
import logging
import time
from typing import Any, Optional, Set, Tuple, List, Dict
from blspy import PrivateKey, G2Element, G1Element
from chia.consensus.block_record import BlockRecord
from chia.pools.pool_config import PoolWalletConfig, load_pool_config, update_pool_config
from chia.pools.pool_wallet_info import (
PoolWalletInfo,
PoolSingletonState,
PoolState,
FARMING_TO_POOL,
SELF_POOLING,
LEAVING_POOL,
create_pool_state,
)
from chia.protocols.pool_protocol import POOL_PROTOCOL_VERSION
from chia.types.announcement import Announcement
from chia.types.blockchain_format.coin import Coin
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.types.blockchain_format.program import Program, SerializedProgram
from chia.types.coin_record import CoinRecord
from chia.types.coin_solution import CoinSolution
from chia.types.spend_bundle import SpendBundle
from chia.pools.pool_puzzles import (
create_waiting_room_inner_puzzle,
create_full_puzzle,
SINGLETON_LAUNCHER,
create_pooling_inner_puzzle,
solution_to_extra_data,
pool_state_to_inner_puzzle,
get_most_recent_singleton_coin_from_coin_solution,
launcher_id_to_p2_puzzle_hash,
create_travel_spend,
uncurry_pool_member_inner_puzzle,
create_absorb_spend,
is_pool_member_inner_puzzle,
is_pool_waitingroom_inner_puzzle,
uncurry_pool_waitingroom_inner_puzzle,
get_delayed_puz_info_from_launcher_spend,
)
from chia.util.ints import uint8, uint32, uint64
from chia.wallet.derive_keys import (
master_sk_to_pooling_authentication_sk,
find_owner_sk,
)
from chia.wallet.sign_coin_solutions import sign_coin_solutions
from chia.wallet.transaction_record import TransactionRecord
from chia.wallet.util.wallet_types import WalletType
from chia.wallet.wallet import Wallet
from chia.wallet.wallet_info import WalletInfo
from chia.wallet.util.transaction_type import TransactionType
class PoolWallet:
MINIMUM_INITIAL_BALANCE = 1
MINIMUM_RELATIVE_LOCK_HEIGHT = 5
MAXIMUM_RELATIVE_LOCK_HEIGHT = 1000
wallet_state_manager: Any
log: logging.Logger
wallet_info: WalletInfo
target_state: Optional[PoolState]
standard_wallet: Wallet
wallet_id: int
singleton_list: List[Coin]
"""
From the user's perspective, this is not a wallet at all, but a way to control
whether their pooling-enabled plots are being self-farmed, or farmed by a pool,
and by which pool. Self-pooling and joint pooling rewards are swept into the
users' regular wallet.
If this wallet is in SELF_POOLING state, the coin ID associated with the current
pool wallet contains the rewards gained while self-farming, so care must be taken
to disallow joining a new pool while we still have money on the pooling singleton UTXO.
Pools can be joined anonymously, without an account or prior signup.
The ability to change the farm-to target prevents abuse from pools
by giving the user the ability to quickly change pools, or self-farm.
The pool is also protected, by not allowing members to cheat by quickly leaving a pool,
and claiming a block that was pledged to the pool.
The pooling protocol and smart coin prevents a user from quickly leaving a pool
by enforcing a wait time when leaving the pool. A minimum number of blocks must pass
after the user declares that they are leaving the pool, and before they can start to
self-claim rewards again.
Control of switching states is granted to the owner public key.
We reveal the inner_puzzle to the pool during setup of the pooling protocol.
The pool can prove to itself that the inner puzzle pays to the pooling address,
and it can follow state changes in the pooling puzzle by tracing destruction and
creation of coins associate with this pooling singleton (the singleton controlling
this pool group).
The user trusts the pool to send mining rewards to the <XXX address XXX>
TODO: We should mark which address is receiving funds for our current state.
If the pool misbehaves, it is the user's responsibility to leave the pool
It is the Pool's responsibility to claim the rewards sent to the pool_puzzlehash.
The timeout for leaving the pool is expressed in number of blocks from the time
the user expresses their intent to leave.
"""
@classmethod
def type(cls) -> uint8:
return uint8(WalletType.POOLING_WALLET)
def id(self):
return self.wallet_info.id
@classmethod
def _verify_self_pooled(cls, state) -> Optional[str]:
err = ""
if state.pool_url != "":
err += " Unneeded pool_url for self-pooling"
if state.relative_lock_height != 0:
err += " Incorrect relative_lock_height for self-pooling"
return None if err == "" else err
@classmethod
def _verify_pooling_state(cls, state) -> Optional[str]:
err = ""
if state.relative_lock_height < cls.MINIMUM_RELATIVE_LOCK_HEIGHT:
err += (
f" Pool relative_lock_height ({state.relative_lock_height})"
f" is less than recommended minimum ({cls.MINIMUM_RELATIVE_LOCK_HEIGHT})"
)
elif state.relative_lock_height > cls.MAXIMUM_RELATIVE_LOCK_HEIGHT:
err += (
f" Pool relative_lock_height ({state.relative_lock_height})"
f" is greater than recommended maximum ({cls.MAXIMUM_RELATIVE_LOCK_HEIGHT})"
)
if state.pool_url in [None, ""]:
err += " Empty pool url in pooling state"
return None if err == "" else err
@classmethod
def _verify_pool_state(cls, state: PoolState) -> Optional[str]:
if state.target_puzzle_hash is None:
return "Invalid puzzle_hash"
if state.version > POOL_PROTOCOL_VERSION:
return (
f"Detected pool protocol version {state.version}, which is "
f"newer than this wallet's version ({POOL_PROTOCOL_VERSION}). Please upgrade "
f"to use this pooling wallet"
)
if state.state == PoolSingletonState.SELF_POOLING:
return cls._verify_self_pooled(state)
elif state.state == PoolSingletonState.FARMING_TO_POOL or state.state == PoolSingletonState.LEAVING_POOL:
return cls._verify_pooling_state(state)
else:
return "Internal Error"
@classmethod
def _verify_initial_target_state(cls, initial_target_state):
err = cls._verify_pool_state(initial_target_state)
if err:
raise ValueError(f"Invalid internal Pool State: {err}: {initial_target_state}")
async def get_spend_history(self) -> List[Tuple[uint32, CoinSolution]]:
return self.wallet_state_manager.pool_store.get_spends_for_wallet(self.wallet_id)
async def get_current_state(self) -> PoolWalletInfo:
history: List[Tuple[uint32, CoinSolution]] = await self.get_spend_history()
all_spends: List[CoinSolution] = [cs for _, cs in history]
# We must have at least the launcher spend
assert len(all_spends) >= 1
launcher_coin: Coin = all_spends[0].coin
delayed_seconds, delayed_puzhash = get_delayed_puz_info_from_launcher_spend(all_spends[0])
tip_singleton_coin: Optional[Coin] = get_most_recent_singleton_coin_from_coin_solution(all_spends[-1])
launcher_id: bytes32 = launcher_coin.name()
p2_singleton_puzzle_hash = launcher_id_to_p2_puzzle_hash(launcher_id, delayed_seconds, delayed_puzhash)
assert tip_singleton_coin is not None
curr_spend_i = len(all_spends) - 1
extra_data: Optional[PoolState] = None
last_singleton_spend_height = uint32(0)
while extra_data is None:
full_spend: CoinSolution = all_spends[curr_spend_i]
extra_data = solution_to_extra_data(full_spend)
last_singleton_spend_height = uint32(history[curr_spend_i][0])
curr_spend_i -= 1
assert extra_data is not None
current_inner = pool_state_to_inner_puzzle(
extra_data,
launcher_coin.name(),
self.wallet_state_manager.constants.GENESIS_CHALLENGE,
delayed_seconds,
delayed_puzhash,
)
return PoolWalletInfo(
extra_data,
self.target_state,
launcher_coin,
launcher_id,
p2_singleton_puzzle_hash,
current_inner,
tip_singleton_coin.name(),
last_singleton_spend_height,
)
async def get_unconfirmed_transactions(self) -> List[TransactionRecord]:
return await self.wallet_state_manager.tx_store.get_unconfirmed_for_wallet(self.wallet_id)
async def get_tip(self) -> Tuple[uint32, CoinSolution]:
return self.wallet_state_manager.pool_store.get_spends_for_wallet(self.wallet_id)[-1]
async def update_pool_config(self, make_new_authentication_key: bool):
current_state: PoolWalletInfo = await self.get_current_state()
pool_config_list: List[PoolWalletConfig] = load_pool_config(self.wallet_state_manager.root_path)
pool_config_dict: Dict[bytes32, PoolWalletConfig] = {c.launcher_id: c for c in pool_config_list}
existing_config: Optional[PoolWalletConfig] = pool_config_dict.get(current_state.launcher_id, None)
if make_new_authentication_key or existing_config is None:
new_auth_sk: PrivateKey = master_sk_to_pooling_authentication_sk(
self.wallet_state_manager.private_key, uint32(self.wallet_id), uint32(0)
)
auth_pk: G1Element = new_auth_sk.get_g1()
payout_instructions: str = (await self.standard_wallet.get_new_puzzlehash(in_transaction=True)).hex()
else:
auth_pk = existing_config.authentication_public_key
payout_instructions = existing_config.payout_instructions
new_config: PoolWalletConfig = PoolWalletConfig(
current_state.launcher_id,
current_state.current.pool_url if current_state.current.pool_url else "",
payout_instructions,
current_state.current.target_puzzle_hash,
current_state.p2_singleton_puzzle_hash,
current_state.current.owner_pubkey,
auth_pk,
)
pool_config_dict[new_config.launcher_id] = new_config
await update_pool_config(self.wallet_state_manager.root_path, list(pool_config_dict.values()))
@staticmethod
def get_next_interesting_coin_ids(spend: CoinSolution) -> List[bytes32]:
# CoinSolution of one of the coins that we cared about. This coin was spent in a block, but might be in a reorg
# If we return a value, it is a coin ID that we are also interested in (to support two transitions per block)
coin: Optional[Coin] = get_most_recent_singleton_coin_from_coin_solution(spend)
if coin is not None:
return [coin.name()]
return []
async def apply_state_transitions(self, block_spends: List[CoinSolution], block_height: uint32):
"""
Updates the Pool state (including DB) with new singleton spends. The block spends can contain many spends
that we are not interested in, and can contain many ephemeral spends. They must all be in the same block.
The DB must be committed after calling this method. All validation should be done here.
"""
coin_name_to_spend: Dict[bytes32, CoinSolution] = {cs.coin.name(): cs for cs in block_spends}
tip: Tuple[uint32, CoinSolution] = await self.get_tip()
tip_height = tip[0]
tip_spend = tip[1]
assert block_height >= tip_height # We should not have a spend with a lesser block height
while True:
tip_coin: Optional[Coin] = get_most_recent_singleton_coin_from_coin_solution(tip_spend)
assert tip_coin is not None
spent_coin_name: bytes32 = tip_coin.name()
if spent_coin_name not in coin_name_to_spend:
break
spend: CoinSolution = coin_name_to_spend[spent_coin_name]
await self.wallet_state_manager.pool_store.add_spend(self.wallet_id, spend, block_height)
tip_spend = (await self.get_tip())[1]
self.log.info(f"New PoolWallet singleton tip spend: {tip_spend}")
coin_name_to_spend.pop(spent_coin_name)
# If we have reached the target state, reset it to None: scan the spend history backwards for the latest revealed state
for _, added_spend in reversed(self.wallet_state_manager.pool_store.get_spends_for_wallet(self.wallet_id)):
latest_state: Optional[PoolState] = solution_to_extra_data(added_spend)
if latest_state is not None:
if self.target_state == latest_state:
self.target_state = None
break
await self.update_pool_config(False)
async def rewind(self, block_height: int) -> bool:
"""
Rolls back all transactions after block_height, and if creation was after block_height, deletes the wallet.
Returns True if the wallet should be removed.
"""
try:
history: List[Tuple[uint32, CoinSolution]] = self.wallet_state_manager.pool_store.get_spends_for_wallet(
self.wallet_id
).copy()
prev_state: PoolWalletInfo = await self.get_current_state()
await self.wallet_state_manager.pool_store.rollback(block_height, self.wallet_id)
if len(history) > 0 and history[0][0] > block_height:
# If we have no entries in the DB, we have no singleton, so we should not have a wallet either
# The PoolWallet object becomes invalid after this.
await self.wallet_state_manager.interested_store.remove_interested_puzzle_hash(
prev_state.p2_singleton_puzzle_hash, in_transaction=True
)
return True
else:
if await self.get_current_state() != prev_state:
await self.update_pool_config(False)
return False
except Exception as e:
self.log.error(f"Exception rewinding: {e}")
return False
@staticmethod
async def create(
wallet_state_manager: Any,
wallet: Wallet,
launcher_coin_id: bytes32,
block_spends: List[CoinSolution],
block_height: uint32,
in_transaction: bool,
name: str = None,
):
"""
This creates a new PoolWallet with only one spend: the launcher spend. The DB MUST be committed after calling
this method.
"""
self = PoolWallet()
self.wallet_state_manager = wallet_state_manager
self.wallet_info = await wallet_state_manager.user_store.create_wallet(
"Pool wallet", WalletType.POOLING_WALLET.value, "", in_transaction=in_transaction
)
self.wallet_id = self.wallet_info.id
self.standard_wallet = wallet
self.target_state = None
self.log = logging.getLogger(name if name else __name__)
launcher_spend: Optional[CoinSolution] = None
for spend in block_spends:
if spend.coin.name() == launcher_coin_id:
launcher_spend = spend
assert launcher_spend is not None
await self.wallet_state_manager.pool_store.add_spend(self.wallet_id, launcher_spend, block_height)
await self.update_pool_config(True)
p2_puzzle_hash: bytes32 = (await self.get_current_state()).p2_singleton_puzzle_hash
await self.wallet_state_manager.interested_store.add_interested_puzzle_hash(
p2_puzzle_hash, self.wallet_id, True
)
await self.wallet_state_manager.add_new_wallet(self, self.wallet_info.id, create_puzzle_hashes=False)
self.wallet_state_manager.set_new_peak_callback(self.wallet_id, self.new_peak)
return self
@staticmethod
async def create_from_db(
wallet_state_manager: Any,
wallet: Wallet,
wallet_info: WalletInfo,
name: str = None,
):
"""
This creates a PoolWallet from DB. However, all data is already handled by WalletPoolStore, so we don't need
to do anything here.
"""
self = PoolWallet()
self.wallet_state_manager = wallet_state_manager
self.wallet_id = wallet_info.id
self.standard_wallet = wallet
self.wallet_info = wallet_info
self.target_state = None
self.log = logging.getLogger(name if name else __name__)
self.wallet_state_manager.set_new_peak_callback(self.wallet_id, self.new_peak)
return self
@staticmethod
async def create_new_pool_wallet_transaction(
wallet_state_manager: Any,
main_wallet: Wallet,
initial_target_state: PoolState,
fee: uint64 = uint64(0),
p2_singleton_delay_time: Optional[uint64] = None,
p2_singleton_delayed_ph: Optional[bytes32] = None,
) -> Tuple[TransactionRecord, bytes32, bytes32]:
"""
A "plot NFT", or pool wallet, represents the idea of a set of plots that all pay to
the same pooling puzzle. This puzzle is a `chia singleton` that is
parameterized with a public key controlled by the user's wallet
(a `smart coin`). It contains an inner puzzle that can switch between
paying block rewards to a pool, or to a user's own wallet.
Call under the wallet state manager lock
"""
amount = 1
standard_wallet = main_wallet
if p2_singleton_delayed_ph is None:
p2_singleton_delayed_ph = await main_wallet.get_new_puzzlehash()
if p2_singleton_delay_time is None:
p2_singleton_delay_time = uint64(604800)
unspent_records = await wallet_state_manager.coin_store.get_unspent_coins_for_wallet(standard_wallet.wallet_id)
balance = await standard_wallet.get_confirmed_balance(unspent_records)
if balance < PoolWallet.MINIMUM_INITIAL_BALANCE:
raise ValueError("Not enough balance in main wallet to create a managed plotting pool.")
if balance < fee:
raise ValueError("Not enough balance in main wallet to create a managed plotting pool with fee {fee}.")
# Verify Parameters - raise if invalid
PoolWallet._verify_initial_target_state(initial_target_state)
spend_bundle, singleton_puzzle_hash, launcher_coin_id = await PoolWallet.generate_launcher_spend(
standard_wallet,
uint64(1),
initial_target_state,
wallet_state_manager.constants.GENESIS_CHALLENGE,
p2_singleton_delay_time,
p2_singleton_delayed_ph,
)
if spend_bundle is None:
raise ValueError("failed to generate ID for wallet")
standard_wallet_record = TransactionRecord(
confirmed_at_height=uint32(0),
created_at_time=uint64(int(time.time())),
to_puzzle_hash=singleton_puzzle_hash,
amount=uint64(amount),
fee_amount=uint64(0),
confirmed=False,
sent=uint32(0),
spend_bundle=spend_bundle,
additions=spend_bundle.additions(),
removals=spend_bundle.removals(),
wallet_id=wallet_state_manager.main_wallet.id(),
sent_to=[],
trade_id=None,
type=uint32(TransactionType.OUTGOING_TX.value),
name=spend_bundle.name(),
)
await standard_wallet.push_transaction(standard_wallet_record)
p2_singleton_puzzle_hash: bytes32 = launcher_id_to_p2_puzzle_hash(
launcher_coin_id, p2_singleton_delay_time, p2_singleton_delayed_ph
)
return standard_wallet_record, p2_singleton_puzzle_hash, launcher_coin_id
async def sign(self, coin_solution: CoinSolution) -> SpendBundle:
async def pk_to_sk(pk: G1Element) -> PrivateKey:
owner_sk: Optional[PrivateKey] = await find_owner_sk([self.wallet_state_manager.private_key], pk)
assert owner_sk is not None
return owner_sk
return await sign_coin_solutions(
[coin_solution],
pk_to_sk,
self.wallet_state_manager.constants.AGG_SIG_ME_ADDITIONAL_DATA,
self.wallet_state_manager.constants.MAX_BLOCK_COST_CLVM,
)
async def generate_travel_transaction(self) -> TransactionRecord:
# target_state is contained within pool_wallet_state
pool_wallet_info: PoolWalletInfo = await self.get_current_state()
spend_history = await self.get_spend_history()
last_coin_solution: CoinSolution = spend_history[-1][1]
delayed_seconds, delayed_puzhash = get_delayed_puz_info_from_launcher_spend(spend_history[0][1])
assert pool_wallet_info.target is not None
next_state = pool_wallet_info.target
if pool_wallet_info.current.state in [FARMING_TO_POOL]:
next_state = create_pool_state(
LEAVING_POOL,
pool_wallet_info.current.target_puzzle_hash,
pool_wallet_info.current.owner_pubkey,
pool_wallet_info.current.pool_url,
pool_wallet_info.current.relative_lock_height,
)
new_inner_puzzle = pool_state_to_inner_puzzle(
next_state,
pool_wallet_info.launcher_coin.name(),
self.wallet_state_manager.constants.GENESIS_CHALLENGE,
delayed_seconds,
delayed_puzhash,
)
new_full_puzzle: SerializedProgram = SerializedProgram.from_program(
create_full_puzzle(new_inner_puzzle, pool_wallet_info.launcher_coin.name())
)
outgoing_coin_solution, inner_puzzle = create_travel_spend(
last_coin_solution,
pool_wallet_info.launcher_coin,
pool_wallet_info.current,
next_state,
self.wallet_state_manager.constants.GENESIS_CHALLENGE,
delayed_seconds,
delayed_puzhash,
)
tip = (await self.get_tip())[1]
tip_coin = tip.coin
singleton = tip.additions()[0]
singleton_id = singleton.name()
assert outgoing_coin_solution.coin.parent_coin_info == tip_coin.name()
assert outgoing_coin_solution.coin.name() == singleton_id
assert new_inner_puzzle != inner_puzzle
if is_pool_member_inner_puzzle(inner_puzzle):
(
inner_f,
target_puzzle_hash,
p2_singleton_hash,
pubkey_as_program,
pool_reward_prefix,
escape_puzzle_hash,
) = uncurry_pool_member_inner_puzzle(inner_puzzle)
pk_bytes: bytes = bytes(pubkey_as_program.as_atom())
assert len(pk_bytes) == 48
owner_pubkey = G1Element.from_bytes(pk_bytes)
assert owner_pubkey == pool_wallet_info.current.owner_pubkey
elif is_pool_waitingroom_inner_puzzle(inner_puzzle):
(
target_puzzle_hash, # payout_puzzle_hash
relative_lock_height,
owner_pubkey,
p2_singleton_hash,
) = uncurry_pool_waitingroom_inner_puzzle(inner_puzzle)
pk_bytes = bytes(owner_pubkey.as_atom())
assert len(pk_bytes) == 48
assert owner_pubkey == pool_wallet_info.current.owner_pubkey
else:
raise RuntimeError("Invalid state")
signed_spend_bundle = await self.sign(outgoing_coin_solution)
assert signed_spend_bundle is not None
assert signed_spend_bundle.removals()[0].puzzle_hash == singleton.puzzle_hash
assert signed_spend_bundle.removals()[0].name() == singleton.name()
tx_record = TransactionRecord(
confirmed_at_height=uint32(0),
created_at_time=uint64(int(time.time())),
to_puzzle_hash=new_full_puzzle.get_tree_hash(),
amount=uint64(1),
fee_amount=uint64(0),
confirmed=False,
sent=uint32(0),
spend_bundle=signed_spend_bundle,
additions=signed_spend_bundle.additions(),
removals=signed_spend_bundle.removals(),
wallet_id=self.id(),
sent_to=[],
trade_id=None,
type=uint32(TransactionType.OUTGOING_TX.value),
name=signed_spend_bundle.name(),
)
return tx_record
@staticmethod
async def generate_launcher_spend(
standard_wallet: Wallet,
amount: uint64,
initial_target_state: PoolState,
genesis_challenge: bytes32,
delay_time: uint64,
delay_ph: bytes32,
) -> Tuple[SpendBundle, bytes32, bytes32]:
"""
Creates the initial singleton, which includes spending an origin coin, the launcher, and creating a singleton
with the "pooling" inner state, which can be either self pooling or using a pool
"""
coins: Set[Coin] = await standard_wallet.select_coins(amount)
if coins is None:
raise ValueError("Not enough coins to create pool wallet")
assert len(coins) == 1
launcher_parent: Coin = coins.copy().pop()
genesis_launcher_puz: Program = SINGLETON_LAUNCHER
launcher_coin: Coin = Coin(launcher_parent.name(), genesis_launcher_puz.get_tree_hash(), amount)
escaping_inner_puzzle: Program = create_waiting_room_inner_puzzle(
initial_target_state.target_puzzle_hash,
initial_target_state.relative_lock_height,
initial_target_state.owner_pubkey,
launcher_coin.name(),
genesis_challenge,
delay_time,
delay_ph,
)
escaping_inner_puzzle_hash = escaping_inner_puzzle.get_tree_hash()
self_pooling_inner_puzzle: Program = create_pooling_inner_puzzle(
initial_target_state.target_puzzle_hash,
escaping_inner_puzzle_hash,
initial_target_state.owner_pubkey,
launcher_coin.name(),
genesis_challenge,
delay_time,
delay_ph,
)
if initial_target_state.state == SELF_POOLING:
# Self-pooling starts in the waiting room ("escaping") puzzle, whose
# relative_lock_height is 0 here, so the user can join a pool at any time
puzzle = escaping_inner_puzzle
elif initial_target_state.state == FARMING_TO_POOL:
puzzle = self_pooling_inner_puzzle
else:
raise ValueError("Invalid initial state")
full_pooling_puzzle: Program = create_full_puzzle(puzzle, launcher_id=launcher_coin.name())
puzzle_hash: bytes32 = full_pooling_puzzle.get_tree_hash()
extra_data_bytes = Program.to([("p", bytes(initial_target_state)), ("t", delay_time), ("h", delay_ph)])
announcement_set: Set[bytes32] = set()
announcement_message = Program.to([puzzle_hash, amount, extra_data_bytes]).get_tree_hash()
announcement_set.add(Announcement(launcher_coin.name(), announcement_message).name())
create_launcher_tx_record: Optional[TransactionRecord] = await standard_wallet.generate_signed_transaction(
amount,
genesis_launcher_puz.get_tree_hash(),
uint64(0),
None,
coins,
None,
False,
announcement_set,
)
assert create_launcher_tx_record is not None and create_launcher_tx_record.spend_bundle is not None
genesis_launcher_solution: Program = Program.to([puzzle_hash, amount, extra_data_bytes])
launcher_cs: CoinSolution = CoinSolution(
launcher_coin,
SerializedProgram.from_program(genesis_launcher_puz),
SerializedProgram.from_program(genesis_launcher_solution),
)
launcher_sb: SpendBundle = SpendBundle([launcher_cs], G2Element())
# Current inner will be updated when state is verified on the blockchain
full_spend: SpendBundle = SpendBundle.aggregate([create_launcher_tx_record.spend_bundle, launcher_sb])
return full_spend, puzzle_hash, launcher_coin.name()
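The launcher announcement above binds the launcher's solution to the standard wallet's spend. A hedged, self-contained sketch of the hashing involved (mirroring chia's `Announcement.name()`, which is sha256 of the origin coin ID concatenated with the message; the placeholder values are illustrative only):

```python
# Sketch of the announcement binding (assumed shape, not the chia API):
# the standard wallet asserts an announcement whose name commits to the
# launcher coin and to the launcher solution's tree hash.
import hashlib

def announcement_name(origin_coin_id: bytes, message: bytes) -> bytes:
    # Mirrors Announcement.name(): sha256(origin_coin_id || message)
    return hashlib.sha256(origin_coin_id + message).digest()

launcher_id = bytes(32)  # placeholder coin ID
# Stand-in for Program.to([puzzle_hash, amount, extra_data]).get_tree_hash()
message = hashlib.sha256(b"[puzzle_hash, amount, extra_data]").digest()
assert len(announcement_name(launcher_id, message)) == 32
```

Because the announcement name depends on both the launcher coin and the message, the launcher cannot be spent with a different solution than the one the wallet signed for.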
async def join_pool(self, target_state: PoolState):
if target_state.state != FARMING_TO_POOL:
raise ValueError(f"join_pool must be called with target_state={FARMING_TO_POOL} (FARMING_TO_POOL)")
if self.target_state is not None:
raise ValueError(f"Cannot join a pool while waiting for target state: {self.target_state}")
if await self.have_unconfirmed_transaction():
raise ValueError(
"Cannot join a pool due to unconfirmed transaction. If this is stuck, delete the unconfirmed transaction."
)
current_state: PoolWalletInfo = await self.get_current_state()
if current_state.current == target_state:
self.target_state = None
self.log.info("Asked to change to current state. Target = {target_state}")
return
if self.target_state is not None:
raise ValueError(
f"Cannot change to state {target_state} when already having target state: {self.target_state}"
)
PoolWallet._verify_initial_target_state(target_state)
if current_state.current.state == LEAVING_POOL:
history: List[Tuple[uint32, CoinSolution]] = await self.get_spend_history()
last_height: uint32 = history[-1][0]
if self.wallet_state_manager.get_peak().height <= last_height + current_state.current.relative_lock_height:
raise ValueError(
f"Cannot join a pool until height {last_height + current_state.current.relative_lock_height}"
)
self.target_state = target_state
tx_record: TransactionRecord = await self.generate_travel_transaction()
await self.wallet_state_manager.add_pending_transaction(tx_record)
return tx_record
async def self_pool(self):
if await self.have_unconfirmed_transaction():
raise ValueError(
"Cannot self pool due to unconfirmed transaction. If this is stuck, delete the unconfirmed transaction."
)
pool_wallet_info: PoolWalletInfo = await self.get_current_state()
if pool_wallet_info.current.state == SELF_POOLING:
raise ValueError("Attempted to self pool when already self pooling")
if self.target_state is not None:
raise ValueError(f"Cannot self pool when already having target state: {self.target_state}")
# Note the implications of getting owner_puzzlehash from our local wallet right now
# vs. having pre-arranged the target self-pooling address
owner_puzzlehash = await self.standard_wallet.get_new_puzzlehash()
owner_pubkey = pool_wallet_info.current.owner_pubkey
current_state: PoolWalletInfo = await self.get_current_state()
if current_state.current.state == LEAVING_POOL:
history: List[Tuple[uint32, CoinSolution]] = await self.get_spend_history()
last_height: uint32 = history[-1][0]
if self.wallet_state_manager.get_peak().height <= last_height + current_state.current.relative_lock_height:
raise ValueError(
f"Cannot self pool until height {last_height + current_state.current.relative_lock_height}"
)
self.target_state = create_pool_state(
SELF_POOLING, owner_puzzlehash, owner_pubkey, pool_url=None, relative_lock_height=uint32(0)
)
tx_record = await self.generate_travel_transaction()
await self.wallet_state_manager.add_pending_transaction(tx_record)
return tx_record
async def claim_pool_rewards(self, fee: uint64) -> TransactionRecord:
# Search for p2_puzzle_hash coins, and spend them with the singleton
if await self.have_unconfirmed_transaction():
raise ValueError(
"Cannot claim due to unconfirmed transaction. If this is stuck, delete the unconfirmed transaction."
)
unspent_coin_records: List[CoinRecord] = list(
await self.wallet_state_manager.coin_store.get_unspent_coins_for_wallet(self.wallet_id)
)
if len(unspent_coin_records) == 0:
raise ValueError("Nothing to claim")
farming_rewards: List[TransactionRecord] = await self.wallet_state_manager.tx_store.get_farming_rewards()
coin_to_height_farmed: Dict[Coin, uint32] = {}
for tx_record in farming_rewards:
height_farmed: Optional[uint32] = tx_record.height_farmed(
self.wallet_state_manager.constants.GENESIS_CHALLENGE
)
assert height_farmed is not None
coin_to_height_farmed[tx_record.additions[0]] = height_farmed
history: List[Tuple[uint32, CoinSolution]] = await self.get_spend_history()
assert len(history) > 0
delayed_seconds, delayed_puzhash = get_delayed_puz_info_from_launcher_spend(history[0][1])
current_state: PoolWalletInfo = await self.get_current_state()
last_solution: CoinSolution = history[-1][1]
all_spends: List[CoinSolution] = []
total_amount = 0
for coin_record in unspent_coin_records:
if len(all_spends) >= 100:
# Limit the total number of spends, so it fits into the block
break
absorb_spend: List[CoinSolution] = create_absorb_spend(
last_solution,
current_state.current,
current_state.launcher_coin,
coin_to_height_farmed[coin_record.coin],
self.wallet_state_manager.constants.GENESIS_CHALLENGE,
delayed_seconds,
delayed_puzhash,
)
last_solution = absorb_spend[0]
all_spends += absorb_spend
total_amount += coin_record.coin.amount
self.log.info(
f"Farmer coin: {coin_record.coin} {coin_record.coin.name()} {coin_to_height_farmed[coin_record.coin]}"
)
# No signatures are required to absorb
spend_bundle: SpendBundle = SpendBundle(all_spends, G2Element())
absorb_transaction: TransactionRecord = TransactionRecord(
confirmed_at_height=uint32(0),
created_at_time=uint64(int(time.time())),
to_puzzle_hash=current_state.current.target_puzzle_hash,
amount=uint64(total_amount),
fee_amount=uint64(0),
confirmed=False,
sent=uint32(0),
spend_bundle=spend_bundle,
additions=spend_bundle.additions(),
removals=spend_bundle.removals(),
wallet_id=uint32(self.wallet_id),
sent_to=[],
trade_id=None,
type=uint32(TransactionType.OUTGOING_TX.value),
name=spend_bundle.name(),
)
await self.wallet_state_manager.add_pending_transaction(absorb_transaction)
return absorb_transaction
async def new_peak(self, peak: BlockRecord) -> None:
# This gets called from the WalletStateManager whenever there is a new peak
pool_wallet_info: PoolWalletInfo = await self.get_current_state()
tip_height, tip_spend = await self.get_tip()
if self.target_state is None:
return
if self.target_state == pool_wallet_info.current:
self.target_state = None
raise ValueError("Internal error")
if (
self.target_state.state in [FARMING_TO_POOL, SELF_POOLING]
and pool_wallet_info.current.state == LEAVING_POOL
):
leave_height = tip_height + pool_wallet_info.current.relative_lock_height
curr: BlockRecord = peak
while not curr.is_transaction_block:
curr = self.wallet_state_manager.blockchain.block_record(curr.prev_hash)
self.log.info(f"Last transaction block height: {curr.height} OK to leave at height {leave_height}")
# Add some buffer (+2) to reduce chances of a reorg
if curr.height > leave_height + 2:
unconfirmed: List[
TransactionRecord
] = await self.wallet_state_manager.tx_store.get_unconfirmed_for_wallet(self.wallet_id)
next_tip: Optional[Coin] = get_most_recent_singleton_coin_from_coin_solution(tip_spend)
assert next_tip is not None
if any([rem.name() == next_tip.name() for tx_rec in unconfirmed for rem in tx_rec.removals]):
self.log.info("Already submitted second transaction, will not resubmit.")
return
self.log.info(f"Attempting to leave from\n{pool_wallet_info.current}\nto\n{self.target_state}")
assert self.target_state.version == POOL_PROTOCOL_VERSION
assert pool_wallet_info.current.state == LEAVING_POOL
assert self.target_state.target_puzzle_hash is not None
if self.target_state.state == SELF_POOLING:
assert self.target_state.relative_lock_height == 0
assert self.target_state.pool_url is None
elif self.target_state.state == FARMING_TO_POOL:
assert self.target_state.relative_lock_height >= self.MINIMUM_RELATIVE_LOCK_HEIGHT
assert self.target_state.pool_url is not None
tx_record = await self.generate_travel_transaction()
await self.wallet_state_manager.add_pending_transaction(tx_record)
async def have_unconfirmed_transaction(self) -> bool:
unconfirmed: List[TransactionRecord] = await self.wallet_state_manager.tx_store.get_unconfirmed_for_wallet(
self.wallet_id
)
return len(unconfirmed) > 0
async def get_confirmed_balance(self, record_list=None) -> uint64:
if (await self.get_current_state()).current.state == SELF_POOLING:
return await self.wallet_state_manager.get_confirmed_balance_for_wallet(self.wallet_id, record_list)
else:
return uint64(0)
async def get_unconfirmed_balance(self, record_list=None) -> uint64:
return await self.get_confirmed_balance(record_list)
async def get_spendable_balance(self, record_list=None) -> uint64:
return await self.get_confirmed_balance(record_list)
async def get_pending_change_balance(self) -> uint64:
return uint64(0)
async def get_max_send_amount(self, record_list=None) -> uint64:
return uint64(0)
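The backwards scan that `get_current_state` performs over the singleton's spend history can be modeled with a simplified sketch. The types below are hypothetical stand-ins (not the chia API): `extra_data` plays the role of `solution_to_extra_data()`'s result, which only some spends (launcher and travel spends) reveal.

```python
# Hypothetical, simplified model of the state-recovery scan in
# PoolWallet.get_current_state(): walk the spend history backwards until
# a spend that reveals a serialized PoolState is found.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FakeSpend:
    height: int
    extra_data: Optional[str]  # stand-in for a serialized PoolState

def latest_state(history: List[FakeSpend]) -> Tuple[str, int]:
    for spend in reversed(history):
        if spend.extra_data is not None:
            return spend.extra_data, spend.height
    raise ValueError("history must contain at least the launcher spend")

history = [
    FakeSpend(10, "SELF_POOLING"),     # the launcher spend always carries state
    FakeSpend(20, "FARMING_TO_POOL"),  # a travel spend changes state
    FakeSpend(25, None),               # absorb spends reveal no state
]
assert latest_state(history) == ("FARMING_TO_POOL", 20)
```

This is why the launcher spend alone is enough to reconstruct a fresh wallet: the scan is guaranteed to terminate at a state-revealing spend.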


@ -0,0 +1,115 @@
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional, Dict
from blspy import G1Element
from chia.protocols.pool_protocol import POOL_PROTOCOL_VERSION
from chia.types.blockchain_format.coin import Coin
from chia.types.blockchain_format.program import Program
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.byte_types import hexstr_to_bytes
from chia.util.ints import uint32, uint8
from chia.util.streamable import streamable, Streamable
class PoolSingletonState(IntEnum):
"""
From the user's point of view, a pool group can be in these states:
`SELF_POOLING`: The singleton exists on the blockchain, and we are farming
block rewards to a wallet address controlled by the user
`LEAVING_POOL`: The singleton exists, and we have entered the "escaping" state, which
means we are waiting for a number of blocks = `relative_lock_height` to pass, so we can leave.
`FARMING_TO_POOL`: The singleton exists, and it is assigned to a pool.
"""
SELF_POOLING = 1
LEAVING_POOL = 2
FARMING_TO_POOL = 3
SELF_POOLING = PoolSingletonState.SELF_POOLING
LEAVING_POOL = PoolSingletonState.LEAVING_POOL
FARMING_TO_POOL = PoolSingletonState.FARMING_TO_POOL
@dataclass(frozen=True)
@streamable
class PoolState(Streamable):
"""
`PoolState` is a type that is serialized to the blockchain to track the state of the user's pool singleton
`target_puzzle_hash` is either the pool address, or the self-pooling address that pool rewards will be paid to.
`target_puzzle_hash` is NOT the p2_singleton puzzle that block rewards are sent to.
The `p2_singleton` address is the initial address, and the `target_puzzle_hash` is the final destination.
`relative_lock_height` is zero when in SELF_POOLING state
"""
version: uint8
state: uint8 # PoolSingletonState
# `target_puzzle_hash`: A puzzle_hash we pay to
# When self-farming, this is a main wallet address
# When farming-to-pool, the pool sends this to the farmer during pool protocol setup
target_puzzle_hash: bytes32 # TODO: rename target_puzzle_hash -> pay_to_address
# owner_pubkey is set by the wallet, once
owner_pubkey: G1Element
pool_url: Optional[str]
relative_lock_height: uint32
def initial_pool_state_from_dict(state_dict: Dict, owner_pubkey: G1Element, owner_puzzle_hash: bytes32) -> PoolState:
state_str = state_dict["state"]
singleton_state: PoolSingletonState = PoolSingletonState[state_str]
if singleton_state == SELF_POOLING:
target_puzzle_hash = owner_puzzle_hash
pool_url: str = ""
relative_lock_height = uint32(0)
elif singleton_state == FARMING_TO_POOL:
target_puzzle_hash = bytes32(hexstr_to_bytes(state_dict["target_puzzle_hash"]))
pool_url = state_dict["pool_url"]
relative_lock_height = uint32(state_dict["relative_lock_height"])
else:
raise ValueError("Initial state must be SELF_POOLING or FARMING_TO_POOL")
# TODO: change create_pool_state to return error messages, as well
assert relative_lock_height is not None
return create_pool_state(singleton_state, target_puzzle_hash, owner_pubkey, pool_url, relative_lock_height)
def create_pool_state(
state: PoolSingletonState,
target_puzzle_hash: bytes32,
owner_pubkey: G1Element,
pool_url: Optional[str],
relative_lock_height: uint32,
) -> PoolState:
if state not in set(s.value for s in PoolSingletonState):
raise AssertionError(f"state {state} is not a valid PoolSingletonState")
ps = PoolState(
POOL_PROTOCOL_VERSION, uint8(state), target_puzzle_hash, owner_pubkey, pool_url, relative_lock_height
)
# TODO Move verify here
return ps
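The membership check in `create_pool_state` relies on `IntEnum` members comparing equal to their underlying integers, so a member can be tested against the set of raw values. A minimal standalone sketch of that validation (the `is_valid_state` helper is illustrative, not part of the PR):

```python
from enum import IntEnum

class PoolSingletonState(IntEnum):
    SELF_POOLING = 1
    LEAVING_POOL = 2
    FARMING_TO_POOL = 3

def is_valid_state(state: int) -> bool:
    # IntEnum members compare equal to their integer values, so both a raw
    # int and an enum member can be checked against the set of valid values.
    return state in {s.value for s in PoolSingletonState}

print(is_valid_state(PoolSingletonState.FARMING_TO_POOL))  # True
print(is_valid_state(7))  # False
```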
@dataclass(frozen=True)
@streamable
class PoolWalletInfo(Streamable):
"""
Internal Pool Wallet state, not destined for the blockchain. This can be completely derived from
the singleton's list of CoinSolutions, or from the information in the WalletPoolStore.
"""
current: PoolState
target: Optional[PoolState]
launcher_coin: Coin
launcher_id: bytes32
p2_singleton_puzzle_hash: bytes32
current_inner: Program # Inner puzzle in current singleton, not revealed yet
tip_singleton_coin_id: bytes32
singleton_block_height: uint32 # Block height that current PoolState is from

View File

@@ -1,5 +1,5 @@
from dataclasses import dataclass
from typing import List, Tuple
from typing import List, Tuple, Optional
from blspy import G1Element, G2Element
@@ -14,6 +14,14 @@ Note: When changing this file, also change protocol_message_types.py, and the pr
"""
@dataclass(frozen=True)
@streamable
class PoolDifficulty(Streamable):
difficulty: uint64
sub_slot_iters: uint64
pool_contract_puzzle_hash: bytes32
@dataclass(frozen=True)
@streamable
class HarvesterHandshake(Streamable):
@@ -29,6 +37,7 @@ class NewSignagePointHarvester(Streamable):
sub_slot_iters: uint64
signage_point_index: uint8
sp_hash: bytes32
pool_difficulties: List[PoolDifficulty]
@dataclass(frozen=True)
@@ -59,3 +68,30 @@ class RespondSignatures(Streamable):
local_pk: G1Element
farmer_pk: G1Element
message_signatures: List[Tuple[bytes32, G2Element]]
@dataclass(frozen=True)
@streamable
class Plot(Streamable):
filename: str
size: uint8
plot_id: bytes32
pool_public_key: Optional[G1Element]
pool_contract_puzzle_hash: Optional[bytes32]
plot_public_key: G1Element
file_size: uint64
time_modified: uint64
@dataclass(frozen=True)
@streamable
class RequestPlots(Streamable):
pass
@dataclass(frozen=True)
@streamable
class RespondPlots(Streamable):
plots: List[Plot]
failed_to_open_filenames: List[str]
no_key_filenames: List[str]

View File

@@ -1,50 +1,175 @@
from dataclasses import dataclass
from typing import List, Optional
from enum import Enum
import time
from typing import Optional
from blspy import G1Element, G2Element
from chia.types.blockchain_format.proof_of_space import ProofOfSpace
from chia.util.ints import uint32, uint64
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.ints import uint8, uint16, uint32, uint64
from chia.util.streamable import Streamable, streamable
"""
Protocol between farmer and pool.
Note: When changing this file, also change protocol_message_types.py, and the protocol version in shared_protocol.py
"""
POOL_PROTOCOL_VERSION = uint8(1)
class PoolErrorCode(Enum):
REVERTED_SIGNAGE_POINT = 1
TOO_LATE = 2
NOT_FOUND = 3
INVALID_PROOF = 4
PROOF_NOT_GOOD_ENOUGH = 5
INVALID_DIFFICULTY = 6
INVALID_SIGNATURE = 7
SERVER_EXCEPTION = 8
INVALID_P2_SINGLETON_PUZZLE_HASH = 9
FARMER_NOT_KNOWN = 10
FARMER_ALREADY_KNOWN = 11
INVALID_AUTHENTICATION_TOKEN = 12
INVALID_PAYOUT_INSTRUCTIONS = 13
INVALID_SINGLETON = 14
DELAY_TIME_TOO_SHORT = 15
REQUEST_FAILED = 16
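Since the `ErrorResponse` message defined below carries only a bare `uint16` error code, a client typically maps it back to a `PoolErrorCode` name for display. A sketch with a truncated copy of the enum and a hypothetical `describe_error` helper:

```python
from enum import Enum

class PoolErrorCode(Enum):
    # Truncated copy of the protocol enum, for illustration only.
    REVERTED_SIGNAGE_POINT = 1
    TOO_LATE = 2
    NOT_FOUND = 3

def describe_error(error_code: int) -> str:
    # Enum lookup by value; unknown codes fall through to a placeholder
    # rather than raising, since a newer pool may send codes we don't know.
    try:
        return PoolErrorCode(error_code).name
    except ValueError:
        return f"UNKNOWN({error_code})"

print(describe_error(2))   # TOO_LATE
print(describe_error(99))  # UNKNOWN(99)
```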
# Used to verify GET /farmer and GET /login
@dataclass(frozen=True)
@streamable
class AuthenticationPayload(Streamable):
method_name: str
launcher_id: bytes32
target_puzzle_hash: bytes32
authentication_token: uint64
# GET /pool_info
@dataclass(frozen=True)
@streamable
class GetPoolInfoResponse(Streamable):
name: str
logo_url: str
minimum_difficulty: uint64
relative_lock_height: uint32
protocol_version: uint8
fee: str
description: str
target_puzzle_hash: bytes32
authentication_token_timeout: uint8
# POST /partial
@dataclass(frozen=True)
@streamable
class SignedCoinbase(Streamable):
pass
# coinbase_signature: PrependSignature
@dataclass(frozen=True)
@streamable
class RequestData(Streamable):
min_height: Optional[uint32]
farmer_id: Optional[str]
@dataclass(frozen=True)
@streamable
class RespondData(Streamable):
posting_url: str
# pool_public_key: PublicKey
partials_threshold: uint64
coinbase_info: List[SignedCoinbase]
@dataclass(frozen=True)
@streamable
class Partial(Streamable):
# challenge: Challenge
class PostPartialPayload(Streamable):
launcher_id: bytes32
authentication_token: uint64
proof_of_space: ProofOfSpace
farmer_target: str
# Signature of the challenge + farmer target hash
# signature: PrependSignature
sp_hash: bytes32
end_of_sub_slot: bool
harvester_id: bytes32
@dataclass(frozen=True)
@streamable
class PartialAck(Streamable):
pass
class PostPartialRequest(Streamable):
payload: PostPartialPayload
aggregate_signature: G2Element
# Response in success case
@dataclass(frozen=True)
@streamable
class PostPartialResponse(Streamable):
new_difficulty: uint64
# GET /farmer
# Response in success case
@dataclass(frozen=True)
@streamable
class GetFarmerResponse(Streamable):
authentication_public_key: G1Element
payout_instructions: str
current_difficulty: uint64
current_points: uint64
# POST /farmer
@dataclass(frozen=True)
@streamable
class PostFarmerPayload(Streamable):
launcher_id: bytes32
authentication_token: uint64
authentication_public_key: G1Element
payout_instructions: str
suggested_difficulty: Optional[uint64]
@dataclass(frozen=True)
@streamable
class PostFarmerRequest(Streamable):
payload: PostFarmerPayload
signature: G2Element
# Response in success case
@dataclass(frozen=True)
@streamable
class PostFarmerResponse(Streamable):
welcome_message: str
# PUT /farmer
@dataclass(frozen=True)
@streamable
class PutFarmerPayload(Streamable):
launcher_id: bytes32
authentication_token: uint64
authentication_public_key: Optional[G1Element]
payout_instructions: Optional[str]
suggested_difficulty: Optional[uint64]
@dataclass(frozen=True)
@streamable
class PutFarmerRequest(Streamable):
payload: PutFarmerPayload
signature: G2Element
# Response in success case
@dataclass(frozen=True)
@streamable
class PutFarmerResponse(Streamable):
authentication_public_key: Optional[bool]
payout_instructions: Optional[bool]
suggested_difficulty: Optional[bool]
# Misc
# Response in error case for all endpoints of the pool protocol
@dataclass(frozen=True)
@streamable
class ErrorResponse(Streamable):
error_code: uint16
error_message: Optional[str]
# Get the current authentication token according to "Farmer authentication" in SPECIFICATION.md
def get_current_authentication_token(timeout: uint8) -> uint64:
return uint64(int(int(time.time() / 60) / timeout))
# Validate a given authentication token against our local time
def validate_authentication_token(token: uint64, timeout: uint8) -> bool:
return abs(token - get_current_authentication_token(timeout)) <= timeout
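The two helpers above reduce to integer window arithmetic: a token is the number of `timeout`-minute windows since the epoch, and validation accepts any token within `timeout` windows of local time. A standalone sketch of the same logic (uint wrappers dropped for brevity):

```python
import time

def get_current_authentication_token(timeout: int) -> int:
    # Token = number of `timeout`-minute windows elapsed since the epoch.
    return int(int(time.time() / 60) / timeout)

def validate_authentication_token(token: int, timeout: int) -> bool:
    # Accept tokens within `timeout` windows of our local clock, which
    # tolerates clock skew between farmer and pool.
    return abs(token - get_current_authentication_token(timeout)) <= timeout

tok = get_current_authentication_token(5)
print(validate_authentication_token(tok, 5))        # True
print(validate_authentication_token(tok + 100, 5))  # False
```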

View File

@@ -7,7 +7,7 @@ class ProtocolMessageTypes(Enum):
# Harvester protocol (harvester <-> farmer)
harvester_handshake = 3
new_signage_point_harvester = 4
# new_signage_point_harvester = 4  (changed to 66 in the new protocol)
new_proof_of_space = 5
request_signatures = 6
respond_signatures = 7
@@ -81,3 +81,8 @@ class ProtocolMessageTypes(Enum):
# Simulator protocol
farm_new_block = 65
# New harvester protocol
new_signage_point_harvester = 66
request_plots = 67
respond_plots = 68

View File

@@ -1,6 +1,7 @@
from typing import Callable, Dict, List
from typing import Callable, Dict, List, Optional
from chia.farmer.farmer import Farmer
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.byte_types import hexstr_to_bytes
from chia.util.ws_message import WsRpcMessage, create_payload_dict
@@ -16,6 +17,10 @@ class FarmerRpcApi:
"/get_signage_points": self.get_signage_points,
"/get_reward_targets": self.get_reward_targets,
"/set_reward_targets": self.set_reward_targets,
"/get_pool_state": self.get_pool_state,
"/set_payout_instructions": self.set_payout_instructions,
"/get_plots": self.get_plots,
"/get_pool_login_link": self.get_pool_login_link,
}
async def _state_changed(self, change: str, change_data: Dict) -> List[WsRpcMessage]:
@@ -93,3 +98,26 @@ class FarmerRpcApi:
self.service.set_reward_targets(farmer_target, pool_target)
return {}
async def get_pool_state(self, _: Dict) -> Dict:
pools_list = []
for p2_singleton_puzzle_hash, pool_dict in self.service.pool_state.items():
pool_state = pool_dict.copy()
pool_state["p2_singleton_puzzle_hash"] = p2_singleton_puzzle_hash.hex()
pools_list.append(pool_state)
return {"pool_state": pools_list}
async def set_payout_instructions(self, request: Dict) -> Dict:
launcher_id: bytes32 = hexstr_to_bytes(request["launcher_id"])
await self.service.set_payout_instructions(launcher_id, request["payout_instructions"])
return {}
async def get_plots(self, _: Dict):
return await self.service.get_plots()
async def get_pool_login_link(self, request: Dict) -> Dict:
launcher_id: bytes32 = bytes32(hexstr_to_bytes(request["launcher_id"]))
login_link: Optional[str] = await self.service.generate_login_link(launcher_id)
if login_link is None:
raise ValueError(f"Failed to generate login link for {launcher_id.hex()}")
return {"login_link": login_link}

View File

@@ -1,4 +1,4 @@
from typing import Dict, List, Optional
from typing import Dict, List, Optional, Any
from chia.rpc.rpc_client import RpcClient
from chia.types.blockchain_format.sized_bytes import bytes32
@@ -41,3 +41,19 @@ class FarmerRpcClient(RpcClient):
if pool_target is not None:
request["pool_target"] = pool_target
return await self.fetch("set_reward_targets", request)
async def get_pool_state(self) -> Dict:
return await self.fetch("get_pool_state", {})
async def set_payout_instructions(self, launcher_id: bytes32, payout_instructions: str) -> Dict:
request = {"launcher_id": launcher_id.hex(), "payout_instructions": payout_instructions}
return await self.fetch("set_payout_instructions", request)
async def get_plots(self) -> Dict[str, Any]:
return await self.fetch("get_plots", {})
async def get_pool_login_link(self, launcher_id: bytes32) -> Optional[str]:
try:
return (await self.fetch("get_pool_login_link", {"launcher_id": launcher_id.hex()}))["login_link"]
except ValueError:
return None

View File

@@ -3,9 +3,13 @@ from typing import Any, Callable, Dict, List, Optional
from chia.consensus.block_record import BlockRecord
from chia.consensus.pos_quality import UI_ACTUAL_SPACE_CONSTANT_FACTOR
from chia.full_node.full_node import FullNode
from chia.full_node.mempool_check_conditions import get_puzzle_and_solution_for_coin
from chia.types.blockchain_format.program import Program, SerializedProgram
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.types.coin_record import CoinRecord
from chia.types.coin_solution import CoinSolution
from chia.types.full_block import FullBlock
from chia.types.generator_types import BlockGenerator
from chia.types.mempool_inclusion_status import MempoolInclusionStatus
from chia.types.spend_bundle import SpendBundle
from chia.types.unfinished_header_block import UnfinishedHeaderBlock
@@ -34,11 +38,13 @@ class FullNodeRpcApi:
"/get_additions_and_removals": self.get_additions_and_removals,
"/get_initial_freeze_period": self.get_initial_freeze_period,
"/get_network_info": self.get_network_info,
"/get_recent_signage_point_or_eos": self.get_recent_signage_point_or_eos,
# Coins
"/get_coin_records_by_puzzle_hash": self.get_coin_records_by_puzzle_hash,
"/get_coin_records_by_puzzle_hashes": self.get_coin_records_by_puzzle_hashes,
"/get_coin_record_by_name": self.get_coin_record_by_name,
"/push_tx": self.push_tx,
"/get_puzzle_and_solution": self.get_puzzle_and_solution,
# Mempool
"/get_all_mempool_tx_ids": self.get_all_mempool_tx_ids,
"/get_all_mempool_items": self.get_all_mempool_items,
@@ -161,6 +167,95 @@ class FullNodeRpcApi:
address_prefix = self.service.config["network_overrides"]["config"][network_name]["address_prefix"]
return {"network_name": network_name, "network_prefix": address_prefix}
async def get_recent_signage_point_or_eos(self, request: Dict):
if "sp_hash" not in request:
challenge_hash: bytes32 = hexstr_to_bytes(request["challenge_hash"])
# This is the case of getting an end of slot
eos_tuple = self.service.full_node_store.recent_eos.get(challenge_hash)
if not eos_tuple:
raise ValueError(f"Did not find eos {challenge_hash.hex()} in cache")
eos, time_received = eos_tuple
# If it's still in the full node store, it's not reverted
if self.service.full_node_store.get_sub_slot(eos.challenge_chain.get_hash()):
return {"eos": eos, "time_received": time_received, "reverted": False}
# Otherwise we can backtrack from peak to find it in the blockchain
curr: Optional[BlockRecord] = self.service.blockchain.get_peak()
if curr is None:
raise ValueError("No blocks in the chain")
number_of_slots_searched = 0
while number_of_slots_searched < 10:
if curr.first_in_sub_slot:
assert curr.finished_challenge_slot_hashes is not None
if curr.finished_challenge_slot_hashes[-1] == eos.challenge_chain.get_hash():
# Found this slot in the blockchain
return {"eos": eos, "time_received": time_received, "reverted": False}
number_of_slots_searched += len(curr.finished_challenge_slot_hashes)
curr = self.service.blockchain.try_block_record(curr.prev_hash)
if curr is None:
# Got to the beginning of the blockchain without finding the slot
return {"eos": eos, "time_received": time_received, "reverted": True}
# Backtracked through 10 slots but still did not find it
return {"eos": eos, "time_received": time_received, "reverted": True}
# Now we handle the case of getting a signage point
sp_hash: bytes32 = hexstr_to_bytes(request["sp_hash"])
sp_tuple = self.service.full_node_store.recent_signage_points.get(sp_hash)
if sp_tuple is None:
raise ValueError(f"Did not find sp {sp_hash.hex()} in cache")
sp, time_received = sp_tuple
# If it's still in the full node store, it's not reverted
if self.service.full_node_store.get_signage_point(sp_hash):
return {"signage_point": sp, "time_received": time_received, "reverted": False}
# Otherwise we can backtrack from peak to find it in the blockchain
rc_challenge: bytes32 = sp.rc_vdf.challenge
next_b: Optional[BlockRecord] = None
curr_b_optional: Optional[BlockRecord] = self.service.blockchain.get_peak()
assert curr_b_optional is not None
curr_b: BlockRecord = curr_b_optional
for _ in range(200):
sp_total_iters = sp.cc_vdf.number_of_iterations + curr_b.ip_sub_slot_total_iters(self.service.constants)
if curr_b.reward_infusion_new_challenge == rc_challenge:
if next_b is None:
return {"signage_point": sp, "time_received": time_received, "reverted": False}
next_b_total_iters = next_b.ip_sub_slot_total_iters(self.service.constants) + next_b.ip_iters(
self.service.constants
)
return {
"signage_point": sp,
"time_received": time_received,
"reverted": sp_total_iters > next_b_total_iters,
}
if curr_b.finished_reward_slot_hashes is not None:
assert curr_b.finished_challenge_slot_hashes is not None
for eos_rc in curr_b.finished_challenge_slot_hashes:
if eos_rc == rc_challenge:
if next_b is None:
return {"signage_point": sp, "time_received": time_received, "reverted": False}
next_b_total_iters = next_b.ip_sub_slot_total_iters(self.service.constants) + next_b.ip_iters(
self.service.constants
)
return {
"signage_point": sp,
"time_received": time_received,
"reverted": sp_total_iters > next_b_total_iters,
}
next_b = curr_b
curr_b_optional = self.service.blockchain.try_block_record(curr_b.prev_hash)
if curr_b_optional is None:
break
curr_b = curr_b_optional
return {"signage_point": sp, "time_received": time_received, "reverted": True}
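The end-of-slot branch above decides "reverted or not" by walking backwards from the peak and scanning each block's finished challenge slot hashes until the slot is found or the search bound is hit. A simplified, self-contained sketch of that backtracking (the `BlockRecord` stand-in and `slot_reverted` helper are illustrative, not chia's real types):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BlockRecord:
    # Minimal stand-in for chia's BlockRecord: a back-pointer plus the
    # challenge slot hashes finished just before this block (if any).
    prev: Optional["BlockRecord"]
    finished_challenge_slot_hashes: Optional[List[str]]

def slot_reverted(peak: Optional[BlockRecord], challenge: str, max_slots: int = 10) -> bool:
    # Walk back from the peak; if the challenge hash appears in a finished
    # slot within `max_slots` slots, the slot is part of the canonical chain.
    searched = 0
    curr = peak
    while curr is not None and searched < max_slots:
        if curr.finished_challenge_slot_hashes is not None:
            if challenge in curr.finished_challenge_slot_hashes:
                return False  # found on-chain: not reverted
            searched += len(curr.finished_challenge_slot_hashes)
        curr = curr.prev
    return True  # not found within the search bound: treat as reverted

genesis = BlockRecord(None, ["slot_a"])
peak = BlockRecord(genesis, ["slot_b"])
print(slot_reverted(peak, "slot_a"))  # False
print(slot_reverted(peak, "slot_x"))  # True
```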
async def get_block(self, request: Dict) -> Optional[Dict]:
if "header_hash" not in request:
raise ValueError("No header_hash in request")
@@ -393,6 +488,31 @@ class FullNodeRpcApi:
"status": status.name,
}
async def get_puzzle_and_solution(self, request: Dict) -> Optional[Dict]:
coin_name: bytes32 = hexstr_to_bytes(request["coin_id"])
height = request["height"]
coin_record = await self.service.coin_store.get_coin_record(coin_name)
if coin_record is None or not coin_record.spent or coin_record.spent_block_index != height:
raise ValueError(f"Invalid height {height}. coin record {coin_record}")
header_hash = self.service.blockchain.height_to_hash(height)
block: Optional[FullBlock] = await self.service.block_store.get_full_block(header_hash)
if block is None or block.transactions_generator is None:
raise ValueError("Invalid block or block generator")
block_generator: Optional[BlockGenerator] = await self.service.blockchain.get_block_generator(block)
assert block_generator is not None
error, puzzle, solution = get_puzzle_and_solution_for_coin(
block_generator, coin_name, self.service.constants.MAX_BLOCK_COST_CLVM
)
if error is not None:
raise ValueError(f"Error: {error}")
puzzle_ser: SerializedProgram = SerializedProgram.from_program(Program.to(puzzle))
solution_ser: SerializedProgram = SerializedProgram.from_program(Program.to(solution))
return {"coin_solution": CoinSolution(coin_record.coin, puzzle_ser, solution_ser)}
async def get_additions_and_removals(self, request: Dict) -> Optional[Dict]:
if "header_hash" not in request:
raise ValueError("No header_hash in request")

View File

@@ -1,9 +1,12 @@
from typing import Dict, List, Optional, Tuple
from typing import Dict, List, Optional, Tuple, Any
from chia.consensus.block_record import BlockRecord
from chia.full_node.signage_point import SignagePoint
from chia.rpc.rpc_client import RpcClient
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.types.coin_record import CoinRecord
from chia.types.coin_solution import CoinSolution
from chia.types.end_of_slot_bundle import EndOfSubSlotBundle
from chia.types.full_block import FullBlock
from chia.types.spend_bundle import SpendBundle
from chia.types.unfinished_header_block import UnfinishedHeaderBlock
@@ -72,6 +75,13 @@ class FullNodeRpcClient(RpcClient):
return None
return network_space_bytes_estimate["space"]
async def get_coin_record_by_name(self, coin_id: bytes32) -> Optional[CoinRecord]:
try:
response = await self.fetch("get_coin_record_by_name", {"name": coin_id.hex()})
except Exception:
return None
return CoinRecord.from_json_dict(response["coin_record"])
async def get_coin_records_by_puzzle_hash(
self,
puzzle_hash: bytes32,
@@ -133,6 +143,13 @@
async def push_tx(self, spend_bundle: SpendBundle):
return await self.fetch("push_tx", {"spend_bundle": spend_bundle.to_json_dict()})
async def get_puzzle_and_solution(self, coin_id: bytes32, height: uint32) -> Optional[CoinRecord]:
try:
response = await self.fetch("get_puzzle_and_solution", {"coin_id": coin_id.hex(), "height": height})
return CoinSolution.from_json_dict(response["coin_solution"])
except Exception:
return None
async def get_all_mempool_tx_ids(self) -> List[bytes32]:
response = await self.fetch("get_all_mempool_tx_ids", {})
return [bytes32(hexstr_to_bytes(tx_id_hex)) for tx_id_hex in response["tx_ids"]]
@@ -150,3 +167,26 @@ class FullNodeRpcClient(RpcClient):
return response["mempool_item"]
except Exception:
return None
async def get_recent_signage_point_or_eos(
self, sp_hash: Optional[bytes32], challenge_hash: Optional[bytes32]
) -> Optional[Any]:
try:
if sp_hash is not None:
assert challenge_hash is None
response = await self.fetch("get_recent_signage_point_or_eos", {"sp_hash": sp_hash.hex()})
return {
"signage_point": SignagePoint.from_json_dict(response["signage_point"]),
"time_received": response["time_received"],
"reverted": response["reverted"],
}
else:
assert challenge_hash is not None
response = await self.fetch("get_recent_signage_point_or_eos", {"challenge_hash": challenge_hash.hex()})
return {
"eos": EndOfSubSlotBundle.from_json_dict(response["eos"]),
"time_received": response["time_received"],
"reverted": response["reverted"],
}
except Exception:
return None

View File

@@ -9,6 +9,8 @@ from blspy import PrivateKey, G1Element
from chia.cmds.init_funcs import check_keys
from chia.consensus.block_rewards import calculate_base_farmer_reward
from chia.pools.pool_wallet import PoolWallet
from chia.pools.pool_wallet_info import create_pool_state, FARMING_TO_POOL, PoolWalletInfo, PoolState
from chia.protocols.protocol_message_types import ProtocolMessageTypes
from chia.server.outbound_message import NodeType, make_msg
from chia.simulator.simulator_protocol import FarmNewBlockProtocol
@@ -21,6 +23,7 @@ from chia.util.keychain import bytes_to_mnemonic, generate_mnemonic
from chia.util.path import path_from_root
from chia.util.ws_message import WsRpcMessage, create_payload_dict
from chia.wallet.cc_wallet.cc_wallet import CCWallet
from chia.wallet.derive_keys import master_sk_to_singleton_owner_sk
from chia.wallet.rl_wallet.rl_wallet import RLWallet
from chia.wallet.derive_keys import master_sk_to_farmer_sk, master_sk_to_pool_sk
from chia.wallet.did_wallet.did_wallet import DIDWallet
@@ -70,10 +73,12 @@ class WalletRpcApi:
"/get_transactions": self.get_transactions,
"/get_next_address": self.get_next_address,
"/send_transaction": self.send_transaction,
"/send_transaction_multi": self.send_transaction_multi,
"/create_backup": self.create_backup,
"/get_transaction_count": self.get_transaction_count,
"/get_farmed_amount": self.get_farmed_amount,
"/create_signed_transaction": self.create_signed_transaction,
"/delete_unconfirmed_transactions": self.delete_unconfirmed_transactions,
# Coloured coins and trading
"/cc_set_name": self.cc_set_name,
"/cc_get_name": self.cc_get_name,
@@ -99,6 +104,11 @@ class WalletRpcApi:
"/rl_set_user_info": self.rl_set_user_info,
"/send_clawback_transaction:": self.send_clawback_transaction,
"/add_rate_limited_funds:": self.add_rate_limited_funds,
# Pool Wallet
"/pw_join_pool": self.pw_join_pool,
"/pw_self_pool": self.pw_self_pool,
"/pw_absorb_rewards": self.pw_absorb_rewards,
"/pw_status": self.pw_status,
}
async def _state_changed(self, *args) -> List[WsRpcMessage]:
@@ -332,10 +342,13 @@ class WalletRpcApi:
async def create_new_wallet(self, request: Dict):
assert self.service.wallet_state_manager is not None
wallet_state_manager = self.service.wallet_state_manager
main_wallet = wallet_state_manager.main_wallet
host = request["host"]
if "fee" in request:
fee: uint64 = request["fee"]
else:
fee = uint64(0)
if request["wallet_type"] == "cc_wallet":
if request["mode"] == "new":
async with self.service.wallet_state_manager.lock:
@@ -447,6 +460,50 @@ class WalletRpcApi:
"backup_dids": did_wallet.did_info.backup_ids,
"num_verifications_required": did_wallet.did_info.num_of_backup_ids_needed,
}
elif request["wallet_type"] == "pool_wallet":
if request["mode"] == "new":
owner_puzzle_hash: bytes32 = await self.service.wallet_state_manager.main_wallet.get_puzzle_hash(True)
from chia.pools.pool_wallet_info import initial_pool_state_from_dict
async with self.service.wallet_state_manager.lock:
last_wallet: Optional[
WalletInfo
] = await self.service.wallet_state_manager.user_store.get_last_wallet()
assert last_wallet is not None
next_id = last_wallet.id + 1
owner_sk: PrivateKey = master_sk_to_singleton_owner_sk(
self.service.wallet_state_manager.private_key, uint32(next_id)
)
owner_pk: G1Element = owner_sk.get_g1()
initial_target_state = initial_pool_state_from_dict(
request["initial_target_state"], owner_pk, owner_puzzle_hash
)
assert initial_target_state is not None
try:
delayed_address = None
if "p2_singleton_delayed_ph" in request:
delayed_address = hexstr_to_bytes(request["p2_singleton_delayed_ph"])
tr, p2_singleton_puzzle_hash, launcher_id = await PoolWallet.create_new_pool_wallet_transaction(
wallet_state_manager,
main_wallet,
initial_target_state,
fee,
request.get("p2_singleton_delay_time", None),
delayed_address,
)
except Exception as e:
raise ValueError(str(e))
return {
"transaction": tr,
"launcher_id": launcher_id.hex(),
"p2_singleton_puzzle_hash": p2_singleton_puzzle_hash.hex(),
}
elif request["mode"] == "recovery":
raise ValueError("Need upgraded singleton for on-chain recovery")
else: # undefined did_type
pass
@@ -591,6 +648,46 @@ class WalletRpcApi:
"transaction_id": tx.name,
}
async def send_transaction_multi(self, request):
assert self.service.wallet_state_manager is not None
if await self.service.wallet_state_manager.synced() is False:
raise ValueError("Wallet needs to be fully synced before sending transactions")
if int(time.time()) < self.service.constants.INITIAL_FREEZE_END_TIMESTAMP:
end_date = datetime.fromtimestamp(float(self.service.constants.INITIAL_FREEZE_END_TIMESTAMP))
raise ValueError(f"No transactions before: {end_date}")
wallet_id = uint32(request["wallet_id"])
wallet = self.service.wallet_state_manager.wallets[wallet_id]
async with self.service.wallet_state_manager.lock:
transaction: TransactionRecord = (await self.create_signed_transaction(request, hold_lock=False))[
"signed_tx"
]
await wallet.push_transaction(transaction)
# Transaction may not have been included in the mempool yet. Use get_transaction to check.
return {
"transaction": transaction,
"transaction_id": transaction.name,
}
async def delete_unconfirmed_transactions(self, request):
wallet_id = uint32(request["wallet_id"])
if wallet_id not in self.service.wallet_state_manager.wallets:
raise ValueError(f"Wallet id {wallet_id} does not exist")
async with self.service.wallet_state_manager.lock:
async with self.service.wallet_state_manager.tx_store.db_wrapper.lock:
await self.service.wallet_state_manager.tx_store.db_wrapper.begin_transaction()
await self.service.wallet_state_manager.tx_store.delete_unconfirmed_transactions(wallet_id)
if self.service.wallet_state_manager.wallets[wallet_id].type() == WalletType.POOLING_WALLET.value:
self.service.wallet_state_manager.wallets[wallet_id].target_state = None
await self.service.wallet_state_manager.tx_store.db_wrapper.commit_transaction()
# Update the cache
await self.service.wallet_state_manager.tx_store.rebuild_tx_cache()
return {}
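The explicit begin/commit wrapping in `delete_unconfirmed_transactions` makes the delete and the pool wallet state reset land atomically. A miniature of the same pattern with plain sqlite3 (table name, schema, and data are hypothetical):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# BEGIN/COMMIT below are the only transaction boundaries in play.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tx (wallet_id INTEGER, confirmed INTEGER)")
conn.executemany("INSERT INTO tx VALUES (?, ?)", [(1, 0), (1, 1), (2, 0)])

try:
    conn.execute("BEGIN")
    # All deletes for the wallet happen inside one explicit transaction,
    # so a crash mid-way cannot leave a partially wiped store.
    conn.execute("DELETE FROM tx WHERE wallet_id = ? AND confirmed = 0", (1,))
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")
    raise

remaining = conn.execute("SELECT COUNT(*) FROM tx").fetchone()[0]
print(remaining)  # 2
```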
async def get_transaction_count(self, request):
wallet_id = int(request["wallet_id"])
count = await self.service.wallet_state_manager.tx_store.get_transaction_count_for_wallet(wallet_id)
@@ -948,14 +1045,19 @@ class WalletRpcApi:
fee_amount = 0
last_height_farmed = 0
for record in tx_records:
height = record.height_farmed(self.service.constants.GENESIS_CHALLENGE)
if height > last_height_farmed:
last_height_farmed = height
if record.wallet_id not in self.service.wallet_state_manager.wallets:
continue
if record.type == TransactionType.COINBASE_REWARD:
if self.service.wallet_state_manager.wallets[record.wallet_id].type() == WalletType.POOLING_WALLET:
# Don't add pool rewards for pool wallets.
continue
pool_reward_amount += record.amount
height = record.height_farmed(self.service.constants.GENESIS_CHALLENGE)
if record.type == TransactionType.FEE_REWARD:
fee_amount += record.amount - calculate_base_farmer_reward(height)
farmer_reward_amount += calculate_base_farmer_reward(height)
if height > last_height_farmed:
last_height_farmed = height
amount += record.amount
assert amount == pool_reward_amount + farmer_reward_amount + fee_amount
@@ -967,7 +1069,7 @@ class WalletRpcApi:
"last_height_farmed": last_height_farmed,
}
async def create_signed_transaction(self, request):
async def create_signed_transaction(self, request, hold_lock=True):
if "additions" not in request or len(request["additions"]) < 1:
raise ValueError("Specify additions list")
@@ -996,8 +1098,73 @@
if "coins" in request and len(request["coins"]) > 0:
coins = set([Coin.from_json_dict(coin_json) for coin_json in request["coins"]])
async with self.service.wallet_state_manager.lock:
if hold_lock:
async with self.service.wallet_state_manager.lock:
signed_tx = await self.service.wallet_state_manager.main_wallet.generate_signed_transaction(
amount_0, puzzle_hash_0, fee, coins=coins, ignore_max_send_amount=True, primaries=additional_outputs
)
else:
signed_tx = await self.service.wallet_state_manager.main_wallet.generate_signed_transaction(
amount_0, puzzle_hash_0, fee, coins=coins, ignore_max_send_amount=True, primaries=additional_outputs
)
return {"signed_tx": signed_tx}
##########################################################################################
# Pool Wallet
##########################################################################################
async def pw_join_pool(self, request):
wallet_id = uint32(request["wallet_id"])
wallet: PoolWallet = self.service.wallet_state_manager.wallets[wallet_id]
pool_wallet_info: PoolWalletInfo = await wallet.get_current_state()
owner_pubkey = pool_wallet_info.current.owner_pubkey
target_puzzlehash = None
if "target_puzzlehash" in request:
target_puzzlehash = bytes32(hexstr_to_bytes(request["target_puzzlehash"]))
new_target_state: PoolState = create_pool_state(
FARMING_TO_POOL,
target_puzzlehash,
owner_pubkey,
request["pool_url"],
uint32(request["relative_lock_height"]),
)
async with self.service.wallet_state_manager.lock:
tx: TransactionRecord = await wallet.join_pool(new_target_state)
return {"transaction": tx}
async def pw_self_pool(self, request):
# Leaving a pool requires two state transitions.
# First we transition to PoolSingletonState.LEAVING_POOL
# Then we transition to FARMING_TO_POOL or SELF_POOLING
wallet_id = uint32(request["wallet_id"])
wallet: PoolWallet = self.service.wallet_state_manager.wallets[wallet_id]
async with self.service.wallet_state_manager.lock:
tx: TransactionRecord = await wallet.self_pool()
return {"transaction": tx}
async def pw_absorb_rewards(self, request):
"""Perform a sweep of the p2_singleton rewards controlled by the pool wallet singleton"""
if await self.service.wallet_state_manager.synced() is False:
raise ValueError("Wallet needs to be fully synced before collecting rewards")
wallet_id = uint32(request["wallet_id"])
wallet: PoolWallet = self.service.wallet_state_manager.wallets[wallet_id]
fee = uint64(request["fee"])
async with self.service.wallet_state_manager.lock:
transaction: TransactionRecord = await wallet.claim_pool_rewards(fee)
state: PoolWalletInfo = await wallet.get_current_state()
return {"state": state.to_json_dict(), "transaction": transaction}
async def pw_status(self, request):
"""Return the complete state of the Pool wallet with id `request["wallet_id"]`"""
wallet_id = uint32(request["wallet_id"])
wallet: PoolWallet = self.service.wallet_state_manager.wallets[wallet_id]
if wallet.type() != WalletType.POOLING_WALLET.value:
raise ValueError(f"wallet_id {wallet_id} is not a pooling wallet")
state: PoolWalletInfo = await wallet.get_current_state()
unconfirmed_transactions: List[TransactionRecord] = await wallet.get_unconfirmed_transactions()
return {
"state": state.to_json_dict(),
"unconfirmed_transactions": unconfirmed_transactions,
}

View File

@@ -1,6 +1,7 @@
from pathlib import Path
from typing import Dict, List
from typing import Dict, List, Optional, Any, Tuple
from chia.pools.pool_wallet_info import PoolWalletInfo
from chia.rpc.rpc_client import RpcClient
from chia.types.blockchain_format.coin import Coin
from chia.types.blockchain_format.sized_bytes import bytes32
@@ -14,7 +15,7 @@ class WalletRpcClient(RpcClient):
Client to Chia RPC, connects to a local wallet. Uses HTTP/JSON, and converts back from
JSON into native python objects before returning. All api calls use POST requests.
Note that this is not the same as the peer protocol, or wallet protocol (which run Chia's
protocol on top of TCP), it's a separate protocol on top of HTTP thats provides easy access
protocol on top of TCP), it's a separate protocol on top of HTTP that provides easy access
to the full node.
"""
@ -127,6 +128,30 @@ class WalletRpcClient(RpcClient):
)
return TransactionRecord.from_json_dict(res["transaction"])
async def send_transaction_multi(
self, wallet_id: str, additions: List[Dict], coins: List[Coin] = None, fee: uint64 = uint64(0)
) -> TransactionRecord:
# Converts bytes to hex for puzzle hashes
additions_hex = [{"amount": ad["amount"], "puzzle_hash": ad["puzzle_hash"].hex()} for ad in additions]
if coins is not None and len(coins) > 0:
coins_json = [c.to_json_dict() for c in coins]
response: Dict = await self.fetch(
"send_transaction_multi",
{"wallet_id": wallet_id, "additions": additions_hex, "coins": coins_json, "fee": fee},
)
else:
response = await self.fetch(
"send_transaction_multi", {"wallet_id": wallet_id, "additions": additions_hex, "fee": fee}
)
return TransactionRecord.from_json_dict(response["transaction"])
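The client methods above hex-encode each addition's `puzzle_hash` before the JSON round-trip. A minimal stand-alone sketch of that conversion, using plain dicts instead of the Chia record types:

```python
from typing import Dict, List

def additions_to_hex(additions: List[Dict]) -> List[Dict]:
    # Hex-encode each puzzle_hash so the payload is JSON-serializable;
    # integer amounts pass through unchanged.
    return [
        {"amount": ad["amount"], "puzzle_hash": ad["puzzle_hash"].hex()}
        for ad in additions
    ]

payload = additions_to_hex([{"amount": 1000, "puzzle_hash": bytes(32)}])
```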
async def delete_unconfirmed_transactions(self, wallet_id: str) -> None:
await self.fetch(
"delete_unconfirmed_transactions",
{"wallet_id": wallet_id},
)
return None
async def create_backup(self, file_path: Path) -> None:
return await self.fetch("create_backup", {"file_path": str(file_path.resolve())})
@ -135,16 +160,72 @@ class WalletRpcClient(RpcClient):
async def create_signed_transaction(
self, additions: List[Dict], coins: List[Coin] = None, fee: uint64 = uint64(0)
) -> Dict:
) -> TransactionRecord:
# Converts bytes to hex for puzzle hashes
additions_hex = [{"amount": ad["amount"], "puzzle_hash": ad["puzzle_hash"].hex()} for ad in additions]
if coins is not None and len(coins) > 0:
coins_json = [c.to_json_dict() for c in coins]
return await self.fetch(
response: Dict = await self.fetch(
"create_signed_transaction", {"additions": additions_hex, "coins": coins_json, "fee": fee}
)
else:
return await self.fetch("create_signed_transaction", {"additions": additions_hex, "fee": fee})
response = await self.fetch("create_signed_transaction", {"additions": additions_hex, "fee": fee})
return TransactionRecord.from_json_dict(response["signed_tx"])
async def create_new_pool_wallet(
self,
target_puzzlehash: Optional[bytes32],
pool_url: Optional[str],
relative_lock_height: uint32,
backup_host: str,
mode: str,
state: str,
p2_singleton_delay_time: Optional[uint64] = None,
p2_singleton_delayed_ph: Optional[bytes32] = None,
) -> TransactionRecord:
# TODO: add APIs for coloured coins and RL wallet
request: Dict[str, Any] = {
"wallet_type": "pool_wallet",
"mode": mode,
"host": backup_host,
"initial_target_state": {
"target_puzzle_hash": target_puzzlehash.hex() if target_puzzlehash else None,
"relative_lock_height": relative_lock_height,
"pool_url": pool_url,
"state": state,
},
}
if p2_singleton_delay_time is not None:
request["p2_singleton_delay_time"] = p2_singleton_delay_time
if p2_singleton_delayed_ph is not None:
request["p2_singleton_delayed_ph"] = p2_singleton_delayed_ph.hex()
res = await self.fetch("create_new_wallet", request)
return TransactionRecord.from_json_dict(res["transaction"])
async def pw_self_pool(self, wallet_id: str) -> TransactionRecord:
return TransactionRecord.from_json_dict(
(await self.fetch("pw_self_pool", {"wallet_id": wallet_id}))["transaction"]
)
async def pw_join_pool(
self, wallet_id: str, target_puzzlehash: bytes32, pool_url: str, relative_lock_height: uint32
) -> TransactionRecord:
request = {
"wallet_id": int(wallet_id),
"target_puzzlehash": target_puzzlehash.hex(),
"relative_lock_height": relative_lock_height,
"pool_url": pool_url,
}
return TransactionRecord.from_json_dict((await self.fetch("pw_join_pool", request))["transaction"])
async def pw_absorb_rewards(self, wallet_id: str, fee: uint64 = uint64(0)) -> TransactionRecord:
return TransactionRecord.from_json_dict(
(await self.fetch("pw_absorb_rewards", {"wallet_id": wallet_id, "fee": fee}))["transaction"]
)
async def pw_status(self, wallet_id: str) -> Tuple[PoolWalletInfo, List[TransactionRecord]]:
json_dict = await self.fetch("pw_status", {"wallet_id": wallet_id})
return (
PoolWalletInfo.from_json_dict(json_dict["state"]),
[TransactionRecord.from_json_dict(tr) for tr in json_dict["unconfirmed_transactions"]],
)

View File

@ -79,7 +79,7 @@ class FullNodeDiscovery:
self.cleanup_task: Optional[asyncio.Task] = None
self.initial_wait: int = 0
try:
self.resolver = dns.asyncresolver.Resolver()
self.resolver: Optional[dns.asyncresolver.Resolver] = dns.asyncresolver.Resolver()
except Exception:
self.resolver = None
self.log.exception("Error initializing asyncresolver")

View File

@ -95,6 +95,8 @@ rate_limits_other = {
ProtocolMessageTypes.request_peers_introducer: RLSettings(100, 100),
ProtocolMessageTypes.respond_peers_introducer: RLSettings(100, 1024 * 1024),
ProtocolMessageTypes.farm_new_block: RLSettings(200, 200),
ProtocolMessageTypes.request_plots: RLSettings(10, 10 * 1024 * 1024),
ProtocolMessageTypes.respond_plots: RLSettings(10, 10 * 1024 * 1024),
}

View File

@ -15,7 +15,7 @@ class Coin(Streamable):
This structure is used in the body for the reward and fees genesis coins.
"""
parent_coin_info: bytes32
parent_coin_info: bytes32 # down with this sort of thing.
puzzle_hash: bytes32
amount: uint64

View File

@ -1,5 +1,5 @@
import io
from typing import List, Optional, Set, Tuple
from typing import List, Set, Tuple
from clvm import KEYWORD_FROM_ATOM, KEYWORD_TO_ATOM, SExp
from clvm import run_program as default_run_program
@ -54,6 +54,9 @@ class Program(SExp):
assert f.read() == b""
return result
def to_serialized_program(self) -> "SerializedProgram":
return SerializedProgram.from_bytes(bytes(self))
def __bytes__(self) -> bytes:
f = io.BytesIO()
self.stream(f) # type: ignore # noqa
@ -82,8 +85,11 @@ class Program(SExp):
cost, r = curry(self, list(args))
return Program.to(r)
def uncurry(self) -> Optional[Tuple["Program", "Program"]]:
return uncurry(self)
def uncurry(self) -> Tuple["Program", "Program"]:
r = uncurry(self)
if r is None:
return self, self.to(0)
return r
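This change makes `uncurry` total: a program that is not a curried pair no longer yields `None` but is returned as-is with a nil argument list, so callers can always unpack a `(mod, args)` tuple. A toy model of that contract, with tuples standing in for CLVM programs (names here are illustrative, not Chia APIs):

```python
from typing import Optional, Tuple

# A "curried program" is modeled as ("curry", mod, args); anything else is bare.
def uncurry_partial(prog) -> Optional[Tuple[object, object]]:
    if isinstance(prog, tuple) and len(prog) == 3 and prog[0] == "curry":
        return prog[1], prog[2]
    return None  # old behavior: every caller had to handle None

def uncurry_total(prog) -> Tuple[object, object]:
    r = uncurry_partial(prog)
    if r is None:
        # New behavior: treat a non-curried program as a mod with nil args,
        # mirroring `return self, self.to(0)` in the diff above.
        return prog, ()
    return r
```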
def as_int(self) -> int:
return int_from_bytes(self.as_atom())
@ -160,6 +166,18 @@ class SerializedProgram:
ret._buf = bytes(blob)
return ret
@classmethod
def from_program(cls, p: Program) -> "SerializedProgram":
ret = SerializedProgram()
ret._buf = bytes(p)
return ret
def to_program(self) -> Program:
return Program.from_bytes(self._buf)
def uncurry(self) -> Tuple["Program", "Program"]:
return self.to_program().uncurry()
def __bytes__(self) -> bytes:
return self._buf

View File

@ -3,7 +3,7 @@ from dataclasses import dataclass
from typing import Optional
from bitstring import BitArray
from blspy import G1Element
from blspy import G1Element, AugSchemeMPL, PrivateKey
from chiapos import Verifier
from chia.consensus.constants import ConsensusConstants
@ -39,20 +39,26 @@ class ProofOfSpace(Streamable):
) -> Optional[bytes32]:
# Exactly one of (pool_public_key, pool_contract_puzzle_hash) must not be None
if (self.pool_public_key is None) and (self.pool_contract_puzzle_hash is None):
log.error("Fail 1")
return None
if (self.pool_public_key is not None) and (self.pool_contract_puzzle_hash is not None):
log.error("Fail 2")
return None
if self.size < constants.MIN_PLOT_SIZE:
log.error("Fail 3")
return None
if self.size > constants.MAX_PLOT_SIZE:
log.error("Fail 4")
return None
plot_id: bytes32 = self.get_plot_id()
new_challenge: bytes32 = ProofOfSpace.calculate_pos_challenge(plot_id, original_challenge_hash, signage_point)
if new_challenge != self.challenge:
log.error("New challenge is not challenge")
return None
if not ProofOfSpace.passes_plot_filter(constants, plot_id, original_challenge_hash, signage_point):
log.error("Fail 5")
return None
return self.get_quality_string(plot_id)
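Among the checks above, `passes_plot_filter` gates which plots are eligible for a signage point. A rough stdlib sketch of a hash-prefix filter of this general shape (the exact hashing order and constants are not shown in this diff, so treat the details as assumptions):

```python
from hashlib import sha256

def passes_plot_filter_sketch(plot_id: bytes, challenge_hash: bytes,
                              signage_point: bytes, filter_bits: int) -> bool:
    # Hash the plot id together with the challenge data, then require the
    # top `filter_bits` bits of the digest to be zero.
    digest = sha256(plot_id + challenge_hash + signage_point).digest()
    if filter_bits == 0:
        return True  # a 0-bit filter passes every plot
    prefix = int.from_bytes(digest, "big") >> (256 - filter_bits)
    return prefix == 0
```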
@ -98,5 +104,15 @@ class ProofOfSpace(Streamable):
return std_hash(bytes(pool_contract_puzzle_hash) + bytes(plot_public_key))
@staticmethod
def generate_plot_public_key(local_pk: G1Element, farmer_pk: G1Element) -> G1Element:
return local_pk + farmer_pk
def generate_taproot_sk(local_pk: G1Element, farmer_pk: G1Element) -> PrivateKey:
taproot_message: bytes = bytes(local_pk + farmer_pk) + bytes(local_pk) + bytes(farmer_pk)
taproot_hash: bytes32 = std_hash(taproot_message)
return AugSchemeMPL.key_gen(taproot_hash)
@staticmethod
def generate_plot_public_key(local_pk: G1Element, farmer_pk: G1Element, include_taproot: bool = False) -> G1Element:
if include_taproot:
taproot_sk: PrivateKey = ProofOfSpace.generate_taproot_sk(local_pk, farmer_pk)
return local_pk + farmer_pk + taproot_sk.get_g1()
else:
return local_pk + farmer_pk
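`generate_taproot_sk` derives the taproot key deterministically from both public keys: hash the aggregate key followed by each component, then feed the 32-byte digest to `AugSchemeMPL.key_gen`. A stdlib sketch of the message construction, with byte strings standing in for BLS points and SHA-256 assumed for `std_hash`:

```python
from hashlib import sha256

def taproot_seed(agg_pk: bytes, local_pk: bytes, farmer_pk: bytes) -> bytes:
    # Matches the message layout in generate_taproot_sk:
    #   bytes(local_pk + farmer_pk) + bytes(local_pk) + bytes(farmer_pk)
    # with `agg_pk` standing in for the BLS point sum local_pk + farmer_pk.
    return sha256(agg_pk + local_pk + farmer_pk).digest()

# The real code passes this seed to AugSchemeMPL.key_gen to get the taproot sk.
seed = taproot_seed(b"\x30" * 48, b"\x10" * 48, b"\x20" * 48)
```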

View File

@ -16,7 +16,7 @@ from chia.util.streamable import Streamable, streamable
log = logging.getLogger(__name__)
@lru_cache(maxsize=20)
@lru_cache(maxsize=200)
def get_discriminant(challenge, size_bites) -> int:
return int(
create_discriminant(challenge, size_bites),
@ -24,7 +24,7 @@ def get_discriminant(challenge, size_bites) -> int:
)
@lru_cache(maxsize=100)
@lru_cache(maxsize=1000)
def verify_vdf(
disc: int,
input_el: bytes100,

View File

@ -6,6 +6,7 @@ from blspy import AugSchemeMPL, G2Element
from chia.types.blockchain_format.coin import Coin
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.streamable import Streamable, streamable
from chia.wallet.util.debug_spend_bundle import debug_spend_bundle
from .coin_solution import CoinSolution
@ -53,6 +54,9 @@ class SpendBundle(Streamable):
def name(self) -> bytes32:
return self.get_hash()
def debug(self, agg_sig_additional_data=bytes([3] * 32)):
debug_spend_bundle(self, agg_sig_additional_data)
def not_ephemeral_additions(self):
all_removals = self.removals()
all_additions = self.additions()

View File

@ -15,8 +15,8 @@ def initial_config_file(filename: Union[str, Path]) -> str:
return pkg_resources.resource_string(__name__, f"initial-{filename}").decode()
def create_default_chia_config(root_path: Path) -> None:
for filename in ["config.yaml"]:
def create_default_chia_config(root_path: Path, filenames=["config.yaml"]) -> None:
for filename in filenames:
default_config_file_data = initial_config_file(filename)
path = config_path_for_filename(root_path, filename)
mkdir(path.parent)

View File

@ -283,7 +283,7 @@ class CCWallet:
assert self.cc_info.my_genesis_checker is not None
return bytes(self.cc_info.my_genesis_checker).hex()
async def coin_added(self, coin: Coin, header_hash: bytes32, removals: List[Coin], height: uint32):
async def coin_added(self, coin: Coin, height: uint32):
"""Notification from wallet state manager that wallet has been received."""
self.log.info(f"CC wallet has been notified that {coin} was added")

View File

@ -1,6 +1,6 @@
from typing import List
from typing import List, Optional
from blspy import AugSchemeMPL, PrivateKey
from blspy import AugSchemeMPL, PrivateKey, G1Element
from chia.util.ints import uint32
@ -8,7 +8,7 @@ from chia.util.ints import uint32
# https://eips.ethereum.org/EIPS/eip-2334
# 12381 = bls spec number
# 8444 = Chia blockchain number and port number
# 0, 1, 2, 3, 4, farmer, pool, wallet, local, backup key numbers
# 0, 1, 2, 3, 4, 5, 6: farmer, pool, wallet, local, backup key, singleton, pooling authentication key numbers
def _derive_path(sk: PrivateKey, path: List[int]) -> PrivateKey:
@ -35,3 +35,40 @@ def master_sk_to_local_sk(master: PrivateKey) -> PrivateKey:
def master_sk_to_backup_sk(master: PrivateKey) -> PrivateKey:
return _derive_path(master, [12381, 8444, 4, 0])
def master_sk_to_singleton_owner_sk(master: PrivateKey, wallet_id: uint32) -> PrivateKey:
"""
This key controls a singleton on the blockchain, allowing for dynamic pooling (changing pools)
"""
return _derive_path(master, [12381, 8444, 5, wallet_id])
def master_sk_to_pooling_authentication_sk(master: PrivateKey, wallet_id: uint32, index: uint32) -> PrivateKey:
"""
This key is used for the farmer to authenticate to the pool when sending partials
"""
assert index < 10000
assert wallet_id < 10000
return _derive_path(master, [12381, 8444, 6, wallet_id * 10000 + index])
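Because each derivation path component is a single integer, the authentication key packs `(wallet_id, index)` into one component as `wallet_id * 10000 + index`, which is why both values are asserted to be below 10000. A small sketch of the packing and its inverse:

```python
from typing import Tuple

MAX = 10000  # both wallet_id and index must stay below this bound

def pack_auth_index(wallet_id: int, index: int) -> int:
    assert wallet_id < MAX and index < MAX
    return wallet_id * MAX + index

def unpack_auth_index(component: int) -> Tuple[int, int]:
    return component // MAX, component % MAX

# The full path would then be [12381, 8444, 6, pack_auth_index(wallet_id, index)]
```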
async def find_owner_sk(all_sks: List[PrivateKey], owner_pk: G1Element) -> Optional[PrivateKey]:
for wallet_id in range(50):
for sk in all_sks:
auth_sk = master_sk_to_singleton_owner_sk(sk, uint32(wallet_id))
if auth_sk.get_g1() == owner_pk:
return auth_sk
return None
async def find_authentication_sk(all_sks: List[PrivateKey], authentication_pk: G1Element) -> Optional[PrivateKey]:
# NOTE: might need to increase this if using a large number of wallets, or if authentication keys have been
# switched many times.
for auth_key_index in range(20):
for wallet_id in range(20):
for sk in all_sks:
auth_sk = master_sk_to_pooling_authentication_sk(sk, uint32(wallet_id), uint32(auth_key_index))
if auth_sk.get_g1() == authentication_pk:
return auth_sk
return None

View File

@ -4,7 +4,7 @@ from typing import List, Optional, Tuple
from chia.types.blockchain_format.sized_bytes import bytes32
from chia.util.ints import uint64
from chia.util.streamable import streamable, Streamable
from chia.wallet.cc_wallet.ccparent import CCParent
from chia.wallet.lineage_proof import LineageProof
from chia.types.blockchain_format.program import Program
from chia.types.blockchain_format.coin import Coin
@ -15,7 +15,7 @@ class DIDInfo(Streamable):
origin_coin: Optional[Coin] # puzzlehash of this coin is our DID
backup_ids: List[bytes]
num_of_backup_ids_needed: uint64
parent_info: List[Tuple[bytes32, Optional[CCParent]]] # {coin.name(): CCParent}
parent_info: List[Tuple[bytes32, Optional[LineageProof]]] # {coin.name(): LineageProof}
current_inner: Optional[Program] # represents a Program as bytes
temp_coin: Optional[Coin] # partially recovered wallet uses these to hold info
temp_puzhash: Optional[bytes32]

View File

@ -11,7 +11,7 @@ from chia.protocols.wallet_protocol import RespondAdditions, RejectAdditionsRequ
from chia.server.outbound_message import NodeType
from chia.types.blockchain_format.coin import Coin
from chia.types.coin_solution import CoinSolution
from chia.types.announcement import Announcement
from chia.types.blockchain_format.program import Program
from chia.types.spend_bundle import SpendBundle
from chia.types.blockchain_format.sized_bytes import bytes32
@ -19,7 +19,7 @@ from chia.wallet.util.transaction_type import TransactionType
from chia.util.ints import uint64, uint32, uint8
from chia.wallet.did_wallet.did_info import DIDInfo
from chia.wallet.cc_wallet.ccparent import CCParent
from chia.wallet.lineage_proof import LineageProof
from chia.wallet.transaction_record import TransactionRecord
from chia.wallet.util.wallet_types import WalletType
from chia.wallet.wallet import Wallet
@ -83,7 +83,7 @@ class DIDWallet:
await self.wallet_state_manager.add_new_wallet(self, self.wallet_info.id)
assert self.did_info.origin_coin is not None
did_puzzle_hash = did_wallet_puzzles.create_fullpuz(
self.did_info.current_inner, self.did_info.origin_coin.puzzle_hash
self.did_info.current_inner, self.did_info.origin_coin.name()
).get_tree_hash()
did_record = TransactionRecord(
@ -275,7 +275,7 @@ class DIDWallet:
return used_coins
# This will be used in the recovery case where we don't have the parent info already
async def coin_added(self, coin: Coin, header_hash: bytes32, removals: List[Coin], height: int):
async def coin_added(self, coin: Coin, _: uint32):
"""Notification from wallet state manager that wallet has been received."""
self.log.info("DID wallet has been notified that coin was added")
inner_puzzle = await self.inner_puzzle_for_did_puzzle(coin.puzzle_hash)
@ -291,7 +291,7 @@ class DIDWallet:
)
await self.save_info(new_info, True)
future_parent = CCParent(
future_parent = LineageProof(
coin.parent_coin_info,
inner_puzzle.get_tree_hash(),
coin.amount,
@ -345,7 +345,7 @@ class DIDWallet:
await self.save_info(did_info, False)
await self.wallet_state_manager.update_wallet_puzzle_hashes(self.wallet_info.id)
full_puz = did_wallet_puzzles.create_fullpuz(innerpuz, origin.puzzle_hash)
full_puz = did_wallet_puzzles.create_fullpuz(innerpuz, origin.name())
full_puzzle_hash = full_puz.get_tree_hash()
(
sub_height,
@ -381,7 +381,7 @@ class DIDWallet:
if puzzle_hash == full_puzzle_hash:
# our coin
for coin in coins:
future_parent = CCParent(
future_parent = LineageProof(
coin.parent_coin_info,
innerpuz.get_tree_hash(),
coin.amount,
@ -410,7 +410,7 @@ class DIDWallet:
pubkey, self.did_info.backup_ids, self.did_info.num_of_backup_ids_needed
)
if self.did_info.origin_coin is not None:
return did_wallet_puzzles.create_fullpuz(innerpuz, self.did_info.origin_coin.puzzle_hash)
return did_wallet_puzzles.create_fullpuz(innerpuz, self.did_info.origin_coin.name())
else:
return did_wallet_puzzles.create_fullpuz(innerpuz, 0x00)
@ -421,31 +421,30 @@ class DIDWallet:
def get_my_DID(self) -> str:
assert self.did_info.origin_coin is not None
core = self.did_info.origin_coin.puzzle_hash
core = self.did_info.origin_coin.name()
assert core is not None
return core.hex()
# This is used to cash out, or update the id_list
async def create_spend(self, puzhash: bytes32):
async def create_update_spend(self):
assert self.did_info.current_inner is not None
assert self.did_info.origin_coin is not None
coins = await self.select_coins(1)
assert coins is not None
coin = coins.pop()
# innerpuz solution is (mode amount new_puz identity my_puz)
innersol: Program = Program.to([0, coin.amount, puzhash, coin.name(), coin.puzzle_hash])
new_puzhash = await self.get_new_inner_hash()
# innerpuz solution is (mode amount messages new_puz)
innersol: Program = Program.to([1, coin.amount, [], new_puzhash])
# full solution is (corehash parent_info my_amount innerpuz_reveal solution)
innerpuz: Program = self.did_info.current_inner
full_puzzle: Program = did_wallet_puzzles.create_fullpuz(
innerpuz,
self.did_info.origin_coin.puzzle_hash,
self.did_info.origin_coin.name(),
)
parent_info = await self.get_parent_for_coin(coin)
assert parent_info is not None
fullsol = Program.to(
[
[self.did_info.origin_coin.parent_coin_info, self.did_info.origin_coin.amount],
[
parent_info.parent_name,
parent_info.inner_puzzle_hash,
@ -458,7 +457,141 @@ class DIDWallet:
list_of_solutions = [CoinSolution(coin, full_puzzle, fullsol)]
# sign for AGG_SIG_ME
message = (
Program.to([coin.amount, puzhash]).get_tree_hash()
Program.to([new_puzhash, coin.amount, []]).get_tree_hash()
+ coin.name()
+ self.wallet_state_manager.constants.AGG_SIG_ME_ADDITIONAL_DATA
)
pubkey = did_wallet_puzzles.get_pubkey_from_innerpuz(innerpuz)
index = await self.wallet_state_manager.puzzle_store.index_for_pubkey(pubkey)
private = master_sk_to_wallet_sk(self.wallet_state_manager.private_key, index)
signature = AugSchemeMPL.sign(private, message)
# assert signature.validate([signature.PkMessagePair(pubkey, message)])
sigs = [signature]
aggsig = AugSchemeMPL.aggregate(sigs)
spend_bundle = SpendBundle(list_of_solutions, aggsig)
did_record = TransactionRecord(
confirmed_at_height=uint32(0),
created_at_time=uint64(int(time.time())),
to_puzzle_hash=new_puzhash,
amount=uint64(coin.amount),
fee_amount=uint64(0),
confirmed=False,
sent=uint32(0),
spend_bundle=spend_bundle,
additions=spend_bundle.additions(),
removals=spend_bundle.removals(),
wallet_id=self.wallet_info.id,
sent_to=[],
trade_id=None,
type=uint32(TransactionType.OUTGOING_TX.value),
name=token_bytes(),
)
await self.standard_wallet.push_transaction(did_record)
return spend_bundle
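Each AGG_SIG_ME signature above commits to three concatenated values: the tree hash of the announced conditions, the spending coin's id, and the network's AGG_SIG_ME additional data (the genesis challenge). A stdlib sketch of assembling that 96-byte message, with SHA-256 digests standing in for the tree hash and coin id:

```python
from hashlib import sha256

def agg_sig_me_message(conditions_tree_hash: bytes, coin_id: bytes,
                       additional_data: bytes) -> bytes:
    # Mirrors: Program.to([...]).get_tree_hash() + coin.name() + AGG_SIG_ME_ADDITIONAL_DATA
    assert len(conditions_tree_hash) == len(coin_id) == len(additional_data) == 32
    return conditions_tree_hash + coin_id + additional_data

msg = agg_sig_me_message(sha256(b"conditions").digest(),
                         sha256(b"coin").digest(),
                         sha256(b"genesis").digest())
```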
# The message spend can send messages and also change your innerpuz
async def create_message_spend(self, messages: List[bytes], new_innerpuzhash: Optional[bytes32] = None):
assert self.did_info.current_inner is not None
assert self.did_info.origin_coin is not None
coins = await self.select_coins(1)
assert coins is not None
coin = coins.pop()
innerpuz: Program = self.did_info.current_inner
if new_innerpuzhash is None:
new_innerpuzhash = innerpuz.get_tree_hash()
# innerpuz solution is (mode amount messages new_puz)
innersol: Program = Program.to([1, coin.amount, messages, new_innerpuzhash])
# full solution is (corehash parent_info my_amount innerpuz_reveal solution)
full_puzzle: Program = did_wallet_puzzles.create_fullpuz(
innerpuz,
self.did_info.origin_coin.name(),
)
parent_info = await self.get_parent_for_coin(coin)
assert parent_info is not None
fullsol = Program.to(
[
[
parent_info.parent_name,
parent_info.inner_puzzle_hash,
parent_info.amount,
],
coin.amount,
innersol,
]
)
list_of_solutions = [CoinSolution(coin, full_puzzle, fullsol)]
# sign for AGG_SIG_ME
# new_inner_puzhash amount message
message = (
Program.to([new_innerpuzhash, coin.amount, messages]).get_tree_hash()
+ coin.name()
+ self.wallet_state_manager.constants.AGG_SIG_ME_ADDITIONAL_DATA
)
pubkey = did_wallet_puzzles.get_pubkey_from_innerpuz(innerpuz)
index = await self.wallet_state_manager.puzzle_store.index_for_pubkey(pubkey)
private = master_sk_to_wallet_sk(self.wallet_state_manager.private_key, index)
signature = AugSchemeMPL.sign(private, message)
# assert signature.validate([signature.PkMessagePair(pubkey, message)])
sigs = [signature]
aggsig = AugSchemeMPL.aggregate(sigs)
spend_bundle = SpendBundle(list_of_solutions, aggsig)
did_record = TransactionRecord(
confirmed_at_height=uint32(0),
created_at_time=uint64(int(time.time())),
to_puzzle_hash=new_innerpuzhash,
amount=uint64(coin.amount),
fee_amount=uint64(0),
confirmed=False,
sent=uint32(0),
spend_bundle=spend_bundle,
additions=spend_bundle.additions(),
removals=spend_bundle.removals(),
wallet_id=self.wallet_info.id,
sent_to=[],
trade_id=None,
type=uint32(TransactionType.OUTGOING_TX.value),
name=token_bytes(),
)
await self.standard_wallet.push_transaction(did_record)
return spend_bundle
# This is used to cash out, or update the id_list
async def create_exit_spend(self, puzhash: bytes32):
assert self.did_info.current_inner is not None
assert self.did_info.origin_coin is not None
coins = await self.select_coins(1)
assert coins is not None
coin = coins.pop()
amount = coin.amount - 1
# innerpuz solution is (mode amount new_puzhash)
innersol: Program = Program.to([0, amount, puzhash])
# full solution is (corehash parent_info my_amount innerpuz_reveal solution)
innerpuz: Program = self.did_info.current_inner
full_puzzle: Program = did_wallet_puzzles.create_fullpuz(
innerpuz,
self.did_info.origin_coin.name(),
)
parent_info = await self.get_parent_for_coin(coin)
assert parent_info is not None
fullsol = Program.to(
[
[
parent_info.parent_name,
parent_info.inner_puzzle_hash,
parent_info.amount,
],
coin.amount,
innersol,
]
)
list_of_solutions = [CoinSolution(coin, full_puzzle, fullsol)]
# sign for AGG_SIG_ME
message = (
Program.to([amount, puzhash]).get_tree_hash()
+ coin.name()
+ self.wallet_state_manager.constants.AGG_SIG_ME_ADDITIONAL_DATA
)
@ -503,20 +636,20 @@ class DIDWallet:
coin = coins.pop()
message = did_wallet_puzzles.create_recovery_message_puzzle(recovering_coin_name, newpuz, pubkey)
innermessage = message.get_tree_hash()
# innerpuz solution is (mode amount new_puz identity my_puz)
innersol = Program.to([1, coin.amount, innermessage, recovering_coin_name, coin.puzzle_hash])
# full solution is (corehash parent_info my_amount innerpuz_reveal solution)
innerpuz: Program = self.did_info.current_inner
# innerpuz solution is (mode, amount, message, new_inner_puzhash)
innersol = Program.to([1, coin.amount, [innermessage], innerpuz.get_tree_hash()])
# full solution is (corehash parent_info my_amount innerpuz_reveal solution)
full_puzzle: Program = did_wallet_puzzles.create_fullpuz(
innerpuz,
self.did_info.origin_coin.puzzle_hash,
self.did_info.origin_coin.name(),
)
parent_info = await self.get_parent_for_coin(coin)
assert parent_info is not None
fullsol = Program.to(
[
[self.did_info.origin_coin.parent_coin_info, self.did_info.origin_coin.amount],
[
parent_info.parent_name,
parent_info.inner_puzzle_hash,
@ -528,10 +661,9 @@ class DIDWallet:
)
list_of_solutions = [CoinSolution(coin, full_puzzle, fullsol)]
message_spend = did_wallet_puzzles.create_spend_for_message(coin.name(), recovering_coin_name, newpuz, pubkey)
message_spend_bundle = SpendBundle([message_spend], AugSchemeMPL.aggregate([]))
# sign for AGG_SIG_ME
to_sign = Program.to([coin.puzzle_hash, coin.amount, innermessage]).get_tree_hash()
to_sign = Program.to([innerpuz.get_tree_hash(), coin.amount, [innermessage]]).get_tree_hash()
message = to_sign + coin.name() + self.wallet_state_manager.constants.AGG_SIG_ME_ADDITIONAL_DATA
pubkey = did_wallet_puzzles.get_pubkey_from_innerpuz(innerpuz)
index = await self.wallet_state_manager.puzzle_store.index_for_pubkey(pubkey)
@ -629,31 +761,29 @@ class DIDWallet:
spend_bundle: SpendBundle,
) -> SpendBundle:
assert self.did_info.origin_coin is not None
# innerpuz solution is (mode amount new_puz identity my_puz parent_innerpuzhash_amounts_for_recovery_ids)
# innersol is (mode amount new_puz my_puzhash parent_innerpuzhash_amounts_for_recovery_ids pubkey recovery_list_reveal) # noqa
innersol = Program.to(
[
2,
coin.amount,
puzhash,
coin.name(),
coin.puzzle_hash,
puzhash,
parent_innerpuzhash_amounts_for_recovery_ids,
bytes(pubkey),
self.did_info.backup_ids,
self.did_info.num_of_backup_ids_needed,
]
)
# full solution is (parent_info my_amount solution)
innerpuz = self.did_info.current_inner
full_puzzle: Program = did_wallet_puzzles.create_fullpuz(
innerpuz,
self.did_info.origin_coin.puzzle_hash,
self.did_info.origin_coin.name(),
)
parent_info = await self.get_parent_for_coin(coin)
assert parent_info is not None
fullsol = Program.to(
[
[self.did_info.origin_coin.parent_coin_info, self.did_info.origin_coin.amount],
[
parent_info.parent_name,
parent_info.inner_puzzle_hash,
@ -734,7 +864,7 @@ class DIDWallet:
)
return inner_puzzle
async def get_parent_for_coin(self, coin) -> Optional[CCParent]:
async def get_parent_for_coin(self, coin) -> Optional[LineageProof]:
parent_info = None
for name, ccparent in self.did_info.parent_info:
if name == coin.parent_coin_info:
@ -752,25 +882,36 @@ class DIDWallet:
return None
origin = coins.copy().pop()
genesis_launcher_puz = did_wallet_puzzles.SINGLETON_LAUNCHER
launcher_coin = Coin(origin.name(), genesis_launcher_puz.get_tree_hash(), amount)
did_inner: Program = await self.get_new_innerpuz()
did_inner_hash = did_inner.get_tree_hash()
did_puz = did_wallet_puzzles.create_fullpuz(did_inner, origin.puzzle_hash)
did_puzzle_hash = did_puz.get_tree_hash()
did_full_puz = did_wallet_puzzles.create_fullpuz(did_inner, launcher_coin.name())
did_puzzle_hash = did_full_puz.get_tree_hash()
announcement_set: Set[Announcement] = set()
announcement_message = Program.to([did_puzzle_hash, amount, bytes(0x80)]).get_tree_hash()
announcement_set.add(Announcement(launcher_coin.name(), announcement_message).name())
tx_record: Optional[TransactionRecord] = await self.standard_wallet.generate_signed_transaction(
amount, did_puzzle_hash, uint64(0), origin.name(), coins
amount, genesis_launcher_puz.get_tree_hash(), uint64(0), origin.name(), coins, None, False, announcement_set
)
eve_coin = Coin(origin.name(), did_puzzle_hash, amount)
future_parent = CCParent(
genesis_launcher_solution = Program.to([did_puzzle_hash, amount, bytes(0x80)])
launcher_cs = CoinSolution(launcher_coin, genesis_launcher_puz, genesis_launcher_solution)
launcher_sb = SpendBundle([launcher_cs], AugSchemeMPL.aggregate([]))
eve_coin = Coin(launcher_coin.name(), did_puzzle_hash, amount)
future_parent = LineageProof(
eve_coin.parent_coin_info,
did_inner_hash,
eve_coin.amount,
)
eve_parent = CCParent(
origin.parent_coin_info,
origin.puzzle_hash,
origin.amount,
eve_parent = LineageProof(
launcher_coin.parent_coin_info,
launcher_coin.puzzle_hash,
launcher_coin.amount,
)
await self.add_parent(eve_coin.parent_coin_info, eve_parent, False)
await self.add_parent(eve_coin.name(), future_parent, False)
@ -780,7 +921,7 @@ class DIDWallet:
# Only want to save this information if the transaction is valid
did_info: DIDInfo = DIDInfo(
origin,
launcher_coin,
self.did_info.backup_ids,
self.did_info.num_of_backup_ids_needed,
self.did_info.parent_info,
@ -790,19 +931,18 @@ class DIDWallet:
None,
)
await self.save_info(did_info, False)
eve_spend = await self.generate_eve_spend(eve_coin, did_puz, did_inner)
full_spend = SpendBundle.aggregate([tx_record.spend_bundle, eve_spend])
eve_spend = await self.generate_eve_spend(eve_coin, did_full_puz, did_inner)
full_spend = SpendBundle.aggregate([tx_record.spend_bundle, eve_spend, launcher_sb])
return full_spend
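The launcher handshake above is a pre-commit: the standard transaction asserts an announcement whose id is derived from the launcher coin id plus the tree hash of `(did_puzzle_hash, amount, 0x80)`, and the launcher spend later creates exactly that announcement. A stdlib sketch of the announcement id, assuming `Announcement.name()` is the hash of origin id concatenated with message, and using SHA-256 in place of the CLVM tree hash:

```python
from hashlib import sha256

def announcement_id(origin_coin_id: bytes, message_hash: bytes) -> bytes:
    # Announcement name = hash of the announcing coin's id plus the message.
    return sha256(origin_coin_id + message_hash).digest()

launcher_id = sha256(b"launcher coin").digest()
message_hash = sha256(b"(did_puzzle_hash amount 0x80)").digest()  # stand-in tree hash
expected = announcement_id(launcher_id, message_hash)
# The wallet asserts `expected`; the launcher spend must emit the same announcement.
```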
async def generate_eve_spend(self, coin: Coin, full_puzzle: Program, innerpuz: Program):
assert self.did_info.origin_coin is not None
# innerpuz solution is (mode amount message my_id my_puzhash parent_innerpuzhash_amounts_for_recovery_ids)
innersol = Program.to([0, coin.amount, coin.puzzle_hash, coin.name(), coin.puzzle_hash, []])
# innerpuz solution is (mode amount message new_puzhash)
innersol = Program.to([1, coin.amount, [], innerpuz.get_tree_hash()])
# full solution is (parent_info my_amount innersolution)
fullsol = Program.to(
[
[self.did_info.origin_coin.parent_coin_info, self.did_info.origin_coin.amount],
coin.parent_coin_info,
coin.amount,
innersol,
]
@ -810,7 +950,7 @@ class DIDWallet:
list_of_solutions = [CoinSolution(coin, full_puzzle, fullsol)]
# sign for AGG_SIG_ME
message = (
Program.to([coin.amount, coin.puzzle_hash]).get_tree_hash()
Program.to([innerpuz.get_tree_hash(), coin.amount, []]).get_tree_hash()
+ coin.name()
+ self.wallet_state_manager.constants.AGG_SIG_ME_ADDITIONAL_DATA
)
@ -837,7 +977,7 @@ class DIDWallet:
return max_send_amount
async def add_parent(self, name: bytes32, parent: Optional[CCParent], in_transaction: bool):
async def add_parent(self, name: bytes32, parent: Optional[LineageProof], in_transaction: bool):
self.log.info(f"Adding parent {name}: {parent}")
current_list = self.did_info.parent_info.copy()
current_list.append((name, parent))

View File

@ -9,18 +9,22 @@ from chia.util.ints import uint64
from chia.wallet.puzzles.load_clvm import load_clvm
from chia.types.condition_opcodes import ConditionOpcode
DID_CORE_MOD = load_clvm("singleton_top_layer.clvm")
SINGLETON_TOP_LAYER_MOD = load_clvm("singleton_top_layer.clvm")
LAUNCHER_PUZZLE = load_clvm("singleton_launcher.clvm")
DID_INNERPUZ_MOD = load_clvm("did_innerpuz.clvm")
SINGLETON_LAUNCHER = load_clvm("singleton_launcher.clvm")
def create_innerpuz(pubkey: bytes, identities: List[bytes], num_of_backup_ids_needed: uint64) -> Program:
backup_ids_hash = Program(Program.to(identities)).get_tree_hash()
return DID_INNERPUZ_MOD.curry(DID_CORE_MOD.get_tree_hash(), pubkey, backup_ids_hash, num_of_backup_ids_needed)
# MOD_HASH MY_PUBKEY RECOVERY_DID_LIST_HASH NUM_VERIFICATIONS_REQUIRED
return DID_INNERPUZ_MOD.curry(pubkey, backup_ids_hash, num_of_backup_ids_needed)
def create_fullpuz(innerpuz, genesis_puzhash) -> Program:
mod_hash = DID_CORE_MOD.get_tree_hash()
return DID_CORE_MOD.curry(mod_hash, genesis_puzhash, innerpuz)
def create_fullpuz(innerpuz, genesis_id) -> Program:
mod_hash = SINGLETON_TOP_LAYER_MOD.get_tree_hash()
return SINGLETON_TOP_LAYER_MOD.curry(mod_hash, genesis_id, LAUNCHER_PUZZLE.get_tree_hash(), innerpuz)
def get_pubkey_from_innerpuz(innerpuz: Program) -> G1Element:
@ -41,7 +45,7 @@ def is_did_innerpuz(inner_f: Program):
def is_did_core(inner_f: Program):
return inner_f == DID_CORE_MOD
return inner_f == SINGLETON_TOP_LAYER_MOD
def uncurry_innerpuz(puzzle: Program) -> Optional[Tuple[Program, Program]]:
@ -56,7 +60,7 @@ def uncurry_innerpuz(puzzle: Program) -> Optional[Tuple[Program, Program]]:
if not is_did_innerpuz(inner_f):
return None
core_mod, pubkey, id_list, num_of_backup_ids_needed = list(args.as_iter())
pubkey, id_list, num_of_backup_ids_needed = list(args.as_iter())
return pubkey, id_list
@ -71,8 +75,8 @@ def get_innerpuzzle_from_puzzle(puzzle: Program) -> Optional[Program]:
return inner_puzzle
-def create_recovery_message_puzzle(recovering_coin: bytes32, newpuz: bytes32, pubkey: G1Element):
-    puzstring = f"(q . ((0x{ConditionOpcode.CREATE_COIN_ANNOUNCEMENT.hex()} 0x{recovering_coin.hex()}) (0x{ConditionOpcode.AGG_SIG_UNSAFE.hex()} 0x{bytes(pubkey).hex()} 0x{newpuz.hex()})))"  # noqa
+def create_recovery_message_puzzle(recovering_coin_id: bytes32, newpuz: bytes32, pubkey: G1Element):
+    puzstring = f"(q . ((0x{ConditionOpcode.CREATE_COIN_ANNOUNCEMENT.hex()} 0x{recovering_coin_id.hex()}) (0x{ConditionOpcode.AGG_SIG_UNSAFE.hex()} 0x{bytes(pubkey).hex()} 0x{newpuz.hex()})))"  # noqa
    puz = binutils.assemble(puzstring)
    return Program.to(puz)

View File

@@ -8,7 +8,7 @@ from chia.util.streamable import Streamable, streamable
@dataclass(frozen=True)
@streamable
-class CCParent(Streamable):
+class LineageProof(Streamable):
    parent_name: bytes32
    inner_puzzle_hash: Optional[bytes32]
    amount: uint64
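The rename from `CCParent` to `LineageProof` reflects how the struct is used: its three fields are exactly what is needed to reconstruct the parent coin's id when proving lineage. A minimal Python sketch, not the wallet's implementation — the `full_puzzle_hash` argument is assumed to have been derived elsewhere by wrapping `inner_puzzle_hash` in the singleton outer layer:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional


def int_to_bytes(v: int) -> bytes:
    # minimal signed big-endian encoding used for CLVM coin amounts; 0 encodes as b""
    if v == 0:
        return b""
    return v.to_bytes((v.bit_length() + 8) // 8, "big", signed=True)


@dataclass(frozen=True)
class LineageProof:
    parent_name: bytes                  # bytes32
    inner_puzzle_hash: Optional[bytes]  # bytes32
    amount: int                         # uint64


def parent_coin_id(proof: LineageProof, full_puzzle_hash: bytes) -> bytes:
    # coin id = sha256(parent_coin_id || puzzle_hash || amount)
    return hashlib.sha256(
        proof.parent_name + full_puzzle_hash + int_to_bytes(proof.amount)
    ).digest()
```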

View File

@@ -0,0 +1,68 @@
(
;; The code below is used to calculate the tree hash of a curried function
;; without actually doing the curry, and using other optimization tricks
;; like unrolling `sha256tree`.
(defconstant ONE 1)
(defconstant TWO 2)
(defconstant A_KW #a)
(defconstant Q_KW #q)
(defconstant C_KW #c)
;; Given the tree hash `environment-hash` of an environment tree E
;; and the tree hash `parameter-hash` of a constant parameter P
;; return the tree hash of the tree corresponding to
;; `(c (q . P) E)`
;; This is the new environment tree with the additional parameter P curried in.
;;
;; Note that `(c (q . P) E)` = `(c . ((q . P) . (E . 0)))`
(defun-inline update-hash-for-parameter-hash (parameter-hash environment-hash)
(sha256 TWO (sha256 ONE C_KW)
(sha256 TWO (sha256 TWO (sha256 ONE Q_KW) parameter-hash)
(sha256 TWO environment-hash (sha256 ONE 0))))
)
;; This function recursively calls `update-hash-for-parameter-hash`, updating `environment-hash`
;; along the way.
(defun build-curry-list (reversed-curry-parameter-hashes environment-hash)
(if reversed-curry-parameter-hashes
(build-curry-list (r reversed-curry-parameter-hashes)
(update-hash-for-parameter-hash (f reversed-curry-parameter-hashes) environment-hash))
environment-hash
)
)
;; Given the tree hash `environment-hash` of an environment tree E
;; and the tree hash `function-hash` of a function tree F
;; return the tree hash of the tree corresponding to
;; `(a (q . F) E)`
;; This is the hash of a new function that adopts the new environment E.
;; This is used to build up the tree hash of a curried function.
;;
;; Note that `(a (q . F) E)` = `(a . ((q . F) . (E . 0)))`
(defun-inline tree-hash-of-apply (function-hash environment-hash)
(sha256 TWO (sha256 ONE A_KW)
(sha256 TWO (sha256 TWO (sha256 ONE Q_KW) function-hash)
(sha256 TWO environment-hash (sha256 ONE 0))))
)
;; function-hash:
;; the hash of a puzzle function, i.e. a `mod`
;;
;; reversed-curry-parameter-hashes:
;; a list of pre-hashed trees representing parameters to be curried into the puzzle.
;; Note that this must be applied in REVERSED order. This may seem strange, but it greatly simplifies
;; the underlying code, since we calculate the tree hash from the bottom nodes up, and the last
;; parameters curried must have their hashes calculated first.
;;
;; we return the hash of the curried expression
;; (a (q . function-hash) (c (cp1 (c cp2 (c ... 1)...))))
(defun puzzle-hash-of-curried-function (function-hash . reversed-curry-parameter-hashes)
(tree-hash-of-apply function-hash
(build-curry-list reversed-curry-parameter-hashes (sha256 ONE ONE)))
)
)
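The incremental hashing above can be mirrored in Python to sanity-check it: hash the curried expression `(a (q . F) (c (q . p1) (c (q . p2) 1)))` both directly and via the incremental formula, and the two agree. This is an illustrative sketch, not the wallet's implementation; trees are modeled as nested tuples, atoms as `bytes`, and the opcode byte values (`q` = 1, `a` = 2, `c` = 4) are the standard CLVM assignments:

```python
import hashlib


def sha256(*blobs: bytes) -> bytes:
    return hashlib.sha256(b"".join(blobs)).digest()


def tree_hash(tree) -> bytes:
    # CLVM tree hash: pairs hash as sha256(2 || left || right), atoms as sha256(1 || atom)
    if isinstance(tree, tuple):
        return sha256(b"\x02", tree_hash(tree[0]), tree_hash(tree[1]))
    return sha256(b"\x01", tree)


ONE, TWO = b"\x01", b"\x02"
Q_KW, A_KW, C_KW = b"\x01", b"\x02", b"\x04"  # quote, apply, cons opcodes
NIL = b""  # nil is the empty atom


def update_hash_for_parameter_hash(parameter_hash: bytes, environment_hash: bytes) -> bytes:
    # tree hash of `(c (q . P) E)` given only hash(P) and hash(E)
    return sha256(TWO, sha256(ONE, C_KW),
                  sha256(TWO, sha256(TWO, sha256(ONE, Q_KW), parameter_hash),
                         sha256(TWO, environment_hash, sha256(ONE, NIL))))


def tree_hash_of_apply(function_hash: bytes, environment_hash: bytes) -> bytes:
    # tree hash of `(a (q . F) E)` given only hash(F) and hash(E)
    return sha256(TWO, sha256(ONE, A_KW),
                  sha256(TWO, sha256(TWO, sha256(ONE, Q_KW), function_hash),
                         sha256(TWO, environment_hash, sha256(ONE, NIL))))


def puzzle_hash_of_curried_function(function_hash: bytes, *reversed_parameter_hashes: bytes) -> bytes:
    env = sha256(ONE, b"\x01")  # tree hash of the atom 1 (the whole solution)
    for p in reversed_parameter_hashes:  # last-curried parameter first
        env = update_hash_for_parameter_hash(p, env)
    return tree_hash_of_apply(function_hash, env)
```

As in the clinc, only parameter *hashes* are needed, so an already-known puzzle hash can be curried in without revealing the puzzle.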

View File

@@ -1,23 +1,25 @@
-(mod (MOD_HASH MY_PUBKEY RECOVERY_DID_LIST_HASH NUM_VERIFICATIONS_REQUIRED mode amount message my_id my_puzhash parent_innerpuzhash_amounts_for_recovery_ids pubkey recovery_list_reveal)
+(mod
+  (
+    MY_PUBKEY
+    RECOVERY_DID_LIST_HASH
+    NUM_VERIFICATIONS_REQUIRED
+    Truths
+    mode
+    amount
+    message
+    new_inner_puzhash
+    parent_innerpuzhash_amounts_for_recovery_ids
+    pubkey
+    recovery_list_reveal
+  )
;message is the new puzzle in the recovery and standard spend cases
;MOD_HASH, MY_PUBKEY, RECOVERY_DID_LIST_HASH are curried into the puzzle
;EXAMPLE SOLUTION (0xcafef00d 0x12341234 0x923bf9a7856b19d335a65f12d68957d497e1f0c16c0e14baf6d120e60753a1ce 2 1 100 (q "source code") 0xdeadbeef 0xcafef00d ((0xdadadada 0xdad5dad5 200) () (0xfafafafa 0xfaf5faf5 200)) 0xfadeddab (0x22222222 0x33333333 0x44444444))
(include condition_codes.clvm)
(defmacro and ARGS
(if ARGS
(qq (if (unquote (f ARGS))
(unquote (c and (r ARGS)))
()
))
1)
)
(defmacro not (ARGS)
(qq (if (unquote ARGS) 0 1))
)
+(include curry-and-treehash.clinc)
+(include singleton_truths.clib)
(defun is-in-list (atom items)
;; returns 1 iff `atom` is in the list of `items`
@@ -38,41 +40,6 @@
)
)
-;; utility function used by `curry_args`
-(defun fix_curry_args (items core)
-(if items
-(qq (c (q . (unquote (f items))) (unquote (fix_curry_args (r items) core))))
-core
-)
-)
-; (curry_args sum (list 50 60)) => returns a function that is like (sum 50 60 ...)
-(defun curry_args (func list_of_args) (qq (a (q . (unquote func)) (unquote (fix_curry_args list_of_args (q . 1))))))
-;; (curry sum 50 60) => returns a function that is like (sum 50 60 ...)
-(defun curry (func . args) (curry_args func args))
-;; hash a tree with escape values representing already-hashed subtrees
-;; This optimization can be useful if you know the puzzle hash of a sub-expression.
-;; You probably actually want to use `curry_and_hash` though.
-(defun sha256tree_esc_list
-(TREE LITERALS)
-(if (l TREE)
-(sha256 2 (sha256tree_esc_list (f TREE) LITERALS) (sha256tree_esc_list (r TREE) LITERALS))
-(if (is-in-list TREE LITERALS)
-TREE
-(sha256 1 TREE)
-)
-)
-)
-;; hash a tree with escape values representing already-hashed subtrees
-;; This optimization can be useful if you know the tree hash of a sub-expression.
-(defun sha256tree_esc
-(TREE . LITERAL)
-(sha256tree_esc_list TREE LITERAL)
-)
;recovery message module - gets values curried in to make the puzzle
;TODO - this should probably be imported
(defun make_message_puzzle (recovering_coin newpuz pubkey)
@@ -83,32 +50,26 @@
(list ASSERT_COIN_ANNOUNCEMENT (sha256 (sha256 coin_id (sha256tree1 (make_message_puzzle my_id new_innerpuz pubkey))) my_id))
)
-;; return the puzzle hash for a cc with the given `genesis-coin-checker-hash` & `inner-puzzle`
-(defun-inline create_fullpuzhash (mod_hash mod_hash_hash genesis_id inner_puzzle_hash)
-(sha256tree_esc (curry mod_hash mod_hash_hash genesis_id inner_puzzle_hash)
-mod_hash
-mod_hash_hash
-inner_puzzle_hash)
+(defun-inline create_coin_ID_for_recovery (MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH parent innerpuzhash amount)
+(sha256 parent (calculate_full_puzzle_hash MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH innerpuzhash) amount)
)
-(defun-inline create_coin_ID_for_recovery (mod_hash mod_hash_hash did parent innerpuzhash amount)
-(sha256 parent (create_fullpuzhash mod_hash mod_hash_hash did innerpuzhash) amount)
-)
-(defmacro recreate_self (my_puzhash amount)
-(qq (c CREATE_COIN (c (unquote my_puzhash) (c (unquote amount) ()))))
+;; return the full puzzlehash for a singleton with the innerpuzzle curried in
+; puzzle-hash-of-curried-function is imported from curry-and-treehash.clinc
+(defun-inline calculate_full_puzzle_hash (MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH inner_puzzle_hash)
+(puzzle-hash-of-curried-function MOD_HASH inner_puzzle_hash (sha256 1 LAUNCHER_PUZZLE_HASH) (sha256 1 LAUNCHER_ID) (sha256 1 MOD_HASH))
)
(defmacro create_new_coin (amount new_puz)
(qq (c CREATE_COIN (c (unquote new_puz) (c (unquote amount) ()))))
)
-(defun check_messages_from_identities (mod_hash mod_hash_hash num_verifications_required identities my_id output new_puz parent_innerpuzhash_amounts_for_recovery_ids pubkey num_verifications)
+(defun check_messages_from_identities (MOD_HASH LAUNCHER_PUZZLE_HASH num_verifications_required identities my_id output new_puz parent_innerpuzhash_amounts_for_recovery_ids pubkey num_verifications)
(if identities
(if (f parent_innerpuzhash_amounts_for_recovery_ids)
(check_messages_from_identities
-mod_hash
-mod_hash_hash
+MOD_HASH
+LAUNCHER_PUZZLE_HASH
num_verifications_required
(r identities)
my_id
@@ -116,9 +77,9 @@
(create_consume_message
; create coin_id from DID
(create_coin_ID_for_recovery
-mod_hash
-mod_hash_hash
+MOD_HASH
(f identities)
+LAUNCHER_PUZZLE_HASH
(f (f parent_innerpuzhash_amounts_for_recovery_ids))
(f (r (f parent_innerpuzhash_amounts_for_recovery_ids)))
(f (r (r (f parent_innerpuzhash_amounts_for_recovery_ids)))))
@@ -132,9 +93,8 @@
(+ num_verifications 1)
)
(check_messages_from_identities
-mod_hash
-mod_hash_hash
-num_verifications_required
+MOD_HASH
+LAUNCHER_PUZZLE_HASH
(r identities)
my_id
output
@@ -155,29 +115,36 @@
)
)
+(defun create_messages (messages)
+(if messages
+(c (list CREATE_COIN (f messages) 0) (create_messages (r messages)))
+()
+)
+)
;Spend modes:
-;0 = normal spend
-;1 = attest
-;2 (or anything else) = recovery
+;0 = exit spend
+;1 = create messages and recreate singleton
+;2 = recovery
;MAIN
(if mode
(if (= mode 1)
-; mode one - create message
-(list (recreate_self my_puzhash amount) (list CREATE_COIN message 0) (list AGG_SIG_ME MY_PUBKEY (sha256tree1 (list my_puzhash amount message))))
+; mode one - create messages and recreate singleton
+(c (list CREATE_COIN new_inner_puzhash amount) (c (list AGG_SIG_ME MY_PUBKEY (sha256tree1 (list new_inner_puzhash amount message))) (create_messages message)))
; mode two - recovery
; check that recovery list is not empty
(if recovery_list_reveal
(if (= (sha256tree1 recovery_list_reveal) RECOVERY_DID_LIST_HASH)
-(check_messages_from_identities MOD_HASH (sha256tree1 MOD_HASH) NUM_VERIFICATIONS_REQUIRED recovery_list_reveal my_id (list (create_new_coin amount message)) message parent_innerpuzhash_amounts_for_recovery_ids pubkey 0)
+(check_messages_from_identities (singleton_mod_hash_truth Truths) (singleton_launcher_puzzle_hash_truth Truths) NUM_VERIFICATIONS_REQUIRED recovery_list_reveal (my_id_truth Truths) (list (create_new_coin amount message)) message parent_innerpuzhash_amounts_for_recovery_ids pubkey 0)
(x)
)
(x)
)
)
-; mode zero - normal spend
-(list (create_new_coin amount message) (list AGG_SIG_ME MY_PUBKEY (sha256tree1 (list amount message))))
+; mode zero - exit spend
+(list (list CREATE_COIN 0x00 -113) (list CREATE_COIN message amount) (list AGG_SIG_ME MY_PUBKEY (sha256tree1 (list amount message))))
)
)
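The three spend modes each map to a distinct condition list. As a hedged sketch of the mode-1 ("create messages and recreate singleton") output — atoms are modeled as `bytes`, and `tree_hash`/`to_clvm_list` are local stand-ins for the wallet's real helpers, not its API:

```python
import hashlib

AGG_SIG_ME = 50
CREATE_COIN = 51


def sha256(*blobs: bytes) -> bytes:
    return hashlib.sha256(b"".join(blobs)).digest()


def tree_hash(tree) -> bytes:
    # CLVM tree hash: pairs hash as sha256(2 || left || right), atoms as sha256(1 || atom)
    if isinstance(tree, tuple):
        return sha256(b"\x02", tree_hash(tree[0]), tree_hash(tree[1]))
    return sha256(b"\x01", tree)


def to_clvm_list(items):
    # build a proper cons list terminated by nil (the empty atom)
    out = b""
    for item in reversed(items):
        out = (item, out)
    return out


def mode_one_conditions(new_inner_puzhash: bytes, amount: bytes, messages, pubkey: bytes):
    # recreate the singleton with the new inner puzzle, demand a signature over
    # (new_inner_puzhash amount message), and emit one zero-amount coin per message
    conds = [
        [CREATE_COIN, new_inner_puzhash, amount],
        [AGG_SIG_ME, pubkey,
         tree_hash(to_clvm_list([new_inner_puzhash, amount, to_clvm_list(messages)]))],
    ]
    for m in messages:
        conds.append([CREATE_COIN, m, b""])  # b"" encodes the zero amount
    return conds
```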

View File

@@ -1 +1 @@
ff02ffff01ff02ffff03ff5fffff01ff02ffff03ffff09ff5fffff010180ffff01ff04ffff04ff24ffff04ff8205ffffff04ff81bfff80808080ffff04ffff04ff24ffff04ff82017fffff01ff80808080ffff04ffff04ff10ffff04ff0bffff04ffff02ff36ffff04ff02ffff04ffff04ff8205ffffff04ff81bfffff04ff82017fff80808080ff80808080ff80808080ff80808080ffff01ff02ffff03ff822fffffff01ff02ffff03ffff09ffff02ff36ffff04ff02ffff04ff822fffff80808080ff1780ffff01ff02ff2cffff04ff02ffff04ff05ffff04ffff02ff36ffff04ff02ffff04ff05ff80808080ffff04ff2fffff04ff822fffffff04ff8202ffffff04ffff04ffff04ff24ffff04ff82017fffff04ff81bfff80808080ff8080ffff04ff82017fffff04ff820bffffff04ff8217ffffff01ff80808080808080808080808080ffff01ff088080ff0180ffff01ff088080ff018080ff0180ffff01ff04ffff04ff24ffff04ff82017fffff04ff81bfff80808080ffff04ffff04ff10ffff04ff0bffff04ffff02ff36ffff04ff02ffff04ffff04ff81bfffff04ff82017fff808080ff80808080ff80808080ff80808080ff0180ffff04ffff01ffffff32ff313dffff333cffff02ffff03ff2fffff01ff02ffff03ff8204ffffff01ff02ff2cffff04ff02ffff04ff05ffff04ff0bffff04ff17ffff04ff6fffff04ff5fffff04ffff04ffff04ff38ffff04ffff0bffff0bffff0bff8208ffffff02ff2effff04ff02ffff04ffff02ff3cffff04ff02ffff04ff05ffff04ff0bffff04ff4fffff04ff8214ffff80808080808080ffff04ff05ffff04ff0bffff04ff8214ffff80808080808080ff822cff80ffff02ff36ffff04ff02ffff04ffff02ff26ffff04ff02ffff04ff5fffff04ff82017fffff04ff8205ffff808080808080ff8080808080ff5f80ff808080ff81bf80ffff04ff82017fffff04ff8206ffffff04ff8205ffffff04ffff10ff820bffffff010180ff80808080808080808080808080ffff01ff02ff2cffff04ff02ffff04ff05ffff04ff0bffff04ff17ffff04ff6fffff04ff5fffff04ff81bfffff04ff82017fffff04ff8206ffffff04ff8205ffffff04ff820bffff8080808080808080808080808080ff0180ffff01ff02ffff03ffff15ff820bffff1780ffff01ff04ffff04ff28ffff04ff8205ffffff04ff82017fff80808080ff81bf80ffff01ff02ffff03ffff09ff820bffff1780ffff01ff04ffff04ff28ffff04ff8205ffffff04ff82017fff80808080ff81bf80ffff01ff08ffff01986e6f7420656e6f75676820766572696669636174696f6e738080ff018080ff018080ff0180ff02ff12ffff04ff02ffff04ff05ffff04ff07ff
8080808080ffffff04ffff0102ffff04ffff04ffff0101ff0580ffff04ffff02ff2affff04ff02ffff04ff0bffff01ff0180808080ff80808080ffff02ffff03ff05ffff01ff04ffff0104ffff04ffff04ffff0101ff0980ffff04ffff02ff2affff04ff02ffff04ff0dffff04ff0bff8080808080ff80808080ffff010b80ff0180ff02ffff03ff0bffff01ff02ffff03ffff09ff05ff1380ffff01ff0101ffff01ff02ff3affff04ff02ffff04ff05ffff04ff1bff808080808080ff0180ff8080ff0180ffffff04ffff0101ffff04ffff04ff34ffff04ff05ff808080ffff04ffff04ff28ffff04ff17ffff04ff0bff80808080ff80808080ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff36ffff04ff02ffff04ff09ff80808080ffff02ff36ffff04ff02ffff04ff0dff8080808080ffff01ff0bffff0101ff058080ff0180ffff02ff3effff04ff02ffff04ff05ffff04ff07ff8080808080ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff3effff04ff02ffff04ff09ffff04ff0bff8080808080ffff02ff3effff04ff02ffff04ff0dffff04ff0bff808080808080ffff01ff02ffff03ffff02ff3affff04ff02ffff04ff05ffff04ff0bff8080808080ffff0105ffff01ff0bffff0101ff058080ff018080ff0180ff018080
ff02ffff01ff02ffff03ff5fffff01ff02ffff03ffff09ff5fffff010180ffff01ff04ffff04ff24ffff04ff8202ffffff04ff81bfff80808080ffff04ffff04ff20ffff04ff05ffff04ffff02ff3effff04ff02ffff04ffff04ff8202ffffff04ff81bfffff04ff82017fff80808080ff80808080ff80808080ffff02ff26ffff04ff02ffff04ff82017fff808080808080ffff01ff02ffff03ff8217ffffff01ff02ffff03ffff09ffff02ff3effff04ff02ffff04ff8217ffff80808080ff0b80ffff01ff02ff3affff04ff02ffff04ff8202efffff04ff820befffff04ff17ffff04ff8217ffffff04ff818fffff04ffff04ffff04ff24ffff04ff82017fffff04ff81bfff80808080ff8080ffff04ff82017fffff04ff8205ffffff04ff820bffffff01ff80808080808080808080808080ffff01ff088080ff0180ffff01ff088080ff018080ff0180ffff01ff04ffff04ff24ffff01ff00ff818f8080ffff04ffff04ff24ffff04ff82017fffff04ff81bfff80808080ffff04ffff04ff20ffff04ff05ffff04ffff02ff3effff04ff02ffff04ffff04ff81bfffff04ff82017fff808080ff80808080ff80808080ff8080808080ff0180ffff04ffff01ffffffff3231ff3d02ffff333cff0401ffffff0102ffff02ffff03ff05ffff01ff02ff2affff04ff02ffff04ff0dffff04ffff0bff32ffff0bff3cff2c80ffff0bff32ffff0bff32ffff0bff3cff2280ff0980ffff0bff32ff0bffff0bff3cff8080808080ff8080808080ffff010b80ff0180ff02ffff03ff2fffff01ff02ffff03ff8204ffffff01ff02ff3affff04ff02ffff04ff05ffff04ff0bffff04ff17ffff04ff6fffff04ff5fffff04ffff04ffff04ff28ffff04ffff0bffff0bffff0bff8208ffffff02ff2effff04ff02ffff04ff05ffff04ff8214ffffff04ffff0bffff0101ff0b80ffff04ffff0bffff0101ff4f80ffff04ffff0bffff0101ff0580ff8080808080808080ff822cff80ffff02ff3effff04ff02ffff04ffff02ff36ffff04ff02ffff04ff5fffff04ff82017fffff04ff8205ffff808080808080ff8080808080ff5f80ff808080ff81bf80ffff04ff82017fffff04ff8206ffffff04ff8205ffffff04ffff10ff820bffffff010180ff80808080808080808080808080ffff01ff02ff3affff04ff02ffff04ff05ffff04ff0bffff04ff6fffff04ff5fffff04ff81bfffff04ff82017fffff04ff8206ffffff04ff8205ffffff04ff820bffff80808080808080808080808080ff0180ffff01ff02ffff03ffff15ff820bffff1780ffff01ff04ffff04ff30ffff04ff8205ffffff04ff82017fff80808080ff81bf80ffff01ff02ffff03ffff09ff820bffff1780ffff01ff04ffff04ff30
ffff04ff8205ffffff04ff82017fff80808080ff81bf80ffff01ff08ffff01986e6f7420656e6f75676820766572696669636174696f6e738080ff018080ff018080ff0180ffffff02ffff03ff05ffff01ff04ffff04ff24ffff04ff09ffff01ff80808080ffff02ff26ffff04ff02ffff04ff0dff8080808080ff8080ff0180ff04ffff0101ffff04ffff04ff34ffff04ff05ff808080ffff04ffff04ff30ffff04ff17ffff04ff0bff80808080ff80808080ffff0bff32ffff0bff3cff3880ffff0bff32ffff0bff32ffff0bff3cff2280ff0580ffff0bff32ffff02ff2affff04ff02ffff04ff07ffff04ffff0bff3cff3c80ff8080808080ffff0bff3cff8080808080ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff3effff04ff02ffff04ff09ff80808080ffff02ff3effff04ff02ffff04ff0dff8080808080ffff01ff0bffff0101ff058080ff0180ff018080

View File

@@ -1 +1 @@
-2f6e9a0237d200ac3b989d7c825ab68d732db6a2d1c8018e9f79d4be329eeed0
+f2356bc00a27abf46c72b809ba7d1d53bde533d94a7a3da8954155afe54304c4

View File

@@ -0,0 +1,38 @@
(mod (SINGLETON_MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH singleton_inner_puzzle_hash my_id)
; SINGLETON_MOD_HASH is the mod-hash for the singleton_top_layer puzzle
; LAUNCHER_ID is the ID of the singleton we are committed to paying to
; LAUNCHER_PUZZLE_HASH is the puzzle hash of the launcher
; singleton_inner_puzzle_hash is the innerpuzzlehash for our singleton at the current time
; my_id is the coin_id of the coin that this puzzle is locked into
(include condition_codes.clvm)
(include curry-and-treehash.clinc)
; takes a lisp tree and returns the hash of it
(defun sha256tree (TREE)
(if (l TREE)
(sha256 2 (sha256tree (f TREE)) (sha256tree (r TREE)))
(sha256 1 TREE)
)
)
;; return the full puzzlehash for a singleton with the innerpuzzle curried in
; puzzle-hash-of-curried-function is imported from curry-and-treehash.clinc
(defun-inline calculate_full_puzzle_hash (SINGLETON_MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH inner_puzzle_hash)
(puzzle-hash-of-curried-function SINGLETON_MOD_HASH
inner_puzzle_hash
(sha256tree (c SINGLETON_MOD_HASH (c LAUNCHER_ID LAUNCHER_PUZZLE_HASH)))
)
)
(defun-inline claim_rewards (SINGLETON_MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH singleton_inner_puzzle_hash my_id)
(list
(list ASSERT_PUZZLE_ANNOUNCEMENT (sha256 (calculate_full_puzzle_hash SINGLETON_MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH singleton_inner_puzzle_hash) my_id))
(list CREATE_COIN_ANNOUNCEMENT '$')
(list ASSERT_MY_COIN_ID my_id))
)
; main
(claim_rewards SINGLETON_MOD_HASH LAUNCHER_ID LAUNCHER_PUZZLE_HASH singleton_inner_puzzle_hash my_id)
)
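Driver code building the matching singleton spend needs to anticipate the three conditions `claim_rewards` emits. A sketch of that condition list — the opcode values are the standard Chia condition codes, and the function name is illustrative, not the repository's API:

```python
import hashlib

CREATE_COIN_ANNOUNCEMENT = 60
ASSERT_PUZZLE_ANNOUNCEMENT = 63
ASSERT_MY_COIN_ID = 70


def claim_rewards_conditions(singleton_full_puzzle_hash: bytes, my_id: bytes):
    # 1. assert the singleton (identified by its full puzzle hash) announced
    #    this p2_singleton coin's id,
    # 2. announce '$' so the singleton can assert it back in the same block,
    # 3. pin down our own coin id so the solution cannot be replayed
    return [
        [ASSERT_PUZZLE_ANNOUNCEMENT,
         hashlib.sha256(singleton_full_puzzle_hash + my_id).digest()],
        [CREATE_COIN_ANNOUNCEMENT, b"$"],
        [ASSERT_MY_COIN_ID, my_id],
    ]
```

The paired puzzle and coin announcements tie the p2_singleton spend and the singleton spend together, so neither can be included without the other.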

View File

@@ -0,0 +1 @@
ff02ffff01ff04ffff04ff18ffff04ffff0bffff02ff2effff04ff02ffff04ff05ffff04ff2fffff04ffff02ff3effff04ff02ffff04ffff04ff05ffff04ff0bff178080ff80808080ff808080808080ff5f80ff808080ffff04ffff04ff2cffff01ff248080ffff04ffff04ff10ffff04ff5fff808080ff80808080ffff04ffff01ffffff463fff02ff3c04ffff01ff0102ffff02ffff03ff05ffff01ff02ff16ffff04ff02ffff04ff0dffff04ffff0bff3affff0bff12ff3c80ffff0bff3affff0bff3affff0bff12ff2a80ff0980ffff0bff3aff0bffff0bff12ff8080808080ff8080808080ffff010b80ff0180ffff0bff3affff0bff12ff1480ffff0bff3affff0bff3affff0bff12ff2a80ff0580ffff0bff3affff02ff16ffff04ff02ffff04ff07ffff04ffff0bff12ff1280ff8080808080ffff0bff12ff8080808080ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff3effff04ff02ffff04ff09ff80808080ffff02ff3effff04ff02ffff04ff0dff8080808080ffff01ff0bffff0101ff058080ff0180ff018080

Some files were not shown because too many files have changed in this diff.