diff --git a/CHANGELOG.md b/CHANGELOG.md index 5eff3eb0964..467d281a4a0 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,178 @@ +# 0.0.117 - Oct 3, 2023 - "Everything but the Twelve Sinks" + +## API Updates + * `ProbabilisticScorer`'s internal models have been substantially improved, + including better decaying (#1789), a more granular historical channel + liquidity tracker (#2176) and a now-default option to make our estimate for a + channel's current liquidity nonlinear in the channel's capacity (#2547). In + total, these changes should result in improved payment success rates at the + cost of slightly worse routefinding performance. + * Support for custom TLVs for recipients of HTLCs has been added (#2308). + * Support for generating transactions for third-party watchtowers has been + added to `ChannelMonitor/Update`s (#2337). + * `KVStorePersister` has been replaced with a more generic and featureful + `KVStore` interface (#2472). + * A new `MonitorUpdatingPersister` is provided which wraps a `KVStore` and + implements `Persist` by writing differential updates rather than full + `ChannelMonitor`s (#2359). + * Batch funding of outbound channels is now supported using the new + `ChannelManager::batch_funding_transaction_generated` method (#2486). + * `ChannelManager::send_preflight_probes` has been added to probe a payment's + potential paths while a user is providing approval for a payment (#2534). + * Fully asynchronous `ChannelMonitor` updating is available as an alpha + preview. There remain a few known but incredibly rare race conditions which + may lead to loss of funds (#2112, #2169, #2562). + * `ChannelMonitorUpdateStatus::PermanentFailure` has been removed in favor of a + new `ChannelMonitorUpdateStatus::UnrecoverableError`. The new variant panics + on use, rather than force-closing a channel in an unsafe manner, which the + previous variant did (#2562). 
Rather than panicking with the new variant, + users may wish to use the new asynchronous `ChannelMonitor` updating via + `ChannelMonitorUpdateStatus::InProgress`. + * `RouteParameters::max_total_routing_fee_msat` was added to limit the fees + paid when routing, defaulting to 1% + 50sats when using the new + `from_payment_params_and_value` constructor (#2417, #2603, #2604). + * Implementations of `UtxoSource` are now provided in `lightning-block-sync`. + Those running with a full node should use this to validate gossip (#2248). + * `LockableScore` now supports read locking for parallel routefinding (#2197). + * `ChannelMonitor::get_spendable_outputs` was added to allow for re-generation + of `SpendableOutputDescriptor`s for a channel after they were provided via + `Event::SpendableOutputs` (#2609, #2624). + * `[u8; 32]` has been replaced with a `ChannelId` newtype for chan ids (#2485). + * `NetAddress` was renamed `SocketAddress` (#2549) and `FromStr` impl'd (#2134) + * For `no-std` users, `parse_onion_address` was added which creates a + `SocketAddress` from a "...onion" string and port (#2134, #2633). + * HTLC information is now provided in `Event::PaymentClaimed::htlcs` (#2478). + * The success probability used in historical penalties when scoring is now + available via `historical_estimated_payment_success_probability` (#2466). + * `RecentPaymentDetails::*::payment_id` has been added (#2567). + * `Route` now contains a `RouteParameters` rather than a `PaymentParameters`, + tracking the original arguments passed to routefinding (#2555). + * `Balance::*::claimable_amount_satoshis` was renamed `amount_satoshis` (#2460) + * `*Features::set_*_feature_bit` have been added for non-custom flags (#2522). + * `channel_id` was added to `SpendableOutputs` events (#2511). + * `counterparty_node_id` and `channel_capacity_sats` were added to + `ChannelClosed` events (#2387). + * `ChannelMonitor` now implements `Clone` for `Clone`able signers (#2448). 
+ * `create_onion_message` was added to build an onion message (#2583, #2595). + * `HTLCDescriptor` now implements `Writeable`/`Readable` (#2571). + * `SpendableOutputDescriptor` now implements `Hash` (#2602). + * `MonitorUpdateId` now implements `Debug` (#2594). + * `Payment{Hash,Id,Preimage}` now implement `Display` (#2492). + * `NodeSigner::sign_bolt12_invoice{,request}` were added for future use (#2432) + +## Backwards Compatibility + * Users migrating to the new `KVStore` can use a concatenation of + `[{primary_namespace}/[{secondary_namespace}/]]{key}` to build a key + compatible with the previous `KVStorePersister` interface (#2472). + * Downgrading after receipt of a payment with custom HTLC TLVs may result in + unintentionally accepting payments with TLVs you do not understand (#2308). + * `Route` objects (including pending payments) written by LDK versions prior + to 0.0.117 won't be retryable after being deserialized by LDK 0.0.117 or + above (#2555). + * Users of the `MonitorUpdatingPersister` can upgrade seamlessly from the + default `KVStore` `Persist` implementation; however, the stored + `ChannelMonitor`s are deliberately unreadable by the default `Persist`. This + ensures the correct downgrade procedure is followed, which is: (#2359) + * First, make a backup copy of all channel state, + * then ensure all `ChannelMonitorUpdate`s stored are fully applied to the + relevant `ChannelMonitor`, + * finally, write each full `ChannelMonitor` using your new `Persist` impl. + +## Bug Fixes + * Anchor channels which were closed by a counterparty broadcasting its + commitment transaction (i.e. force-closing) would previously not generate a + `SpendableOutputs` event for our `to_remote` (i.e. non-HTLC-encumbered) + balance. Those with such balances available should fetch the missing + `SpendableOutputDescriptor`s using the new + `ChannelMonitor::get_spendable_outputs` method (#2605). 
+ * Anchor channels could previously result in spurious or missing `Balance` + entries for HTLC balances (#2610). + * `ChannelManager::send_spontaneous_payment_with_retry` spuriously did not + provide the recipient with enough information to claim the payment, leading + to all spontaneous payments failing (#2475). + `send_spontaneous_payment_with_route` was unaffected. + * The `keysend` feature on node announcements was spuriously un-set in 0.0.112 + and has been re-enabled (#2465). + * Fixed several races which could lead to deadlock when force-closing a channel + (#2597). These races have not been seen in production. + * The `ChannelManager` is now persisted substantially less often when it has + not changed, considerably reducing its I/O traffic (#2521, #2617). + * Passing new block data to `ChainMonitor` no longer results in all other + monitor operations being blocked until it completes (#2528). + * When retrying payments, any excess amount sent to the recipient in order to + meet an `htlc_minimum` constraint on the path is no longer included in + the amount we send in the retry (#2575). + * Several edge cases in route-finding around HTLC minimums were fixed which + could have caused invalid routes or panics when built with debug assertions + (#2570, #2575). + * Several edge cases in route-finding around HTLC minimums and route hints + were fixed which would spuriously result in no route found (#2575, #2604). + * The `user_channel_id` passed to `SignerProvider::generate_channel_keys_id` + for inbound channels is now correctly set to the one passed to + `ChannelManager::accept_inbound_channel` rather than a default value (#2428). + * Users of `impl_writeable_tlv_based!` no longer have use requirements (#2506). + * LDK no longer force-closes channels when counterparties send a `channel_update` + with a bogus `htlc_minimum_msat`, which LND users can manually build (#2611). 
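The `KVStore` key layout noted under Backwards Compatibility, `[{primary_namespace}/[{secondary_namespace}/]]{key}`, can be sketched as a small helper. This is an illustrative sketch only — `legacy_compatible_key` and its parameter names follow the changelog's notation, not a concrete LDK API:

```rust
/// Hypothetical helper illustrating the key layout described above: each
/// namespace segment is emitted with a trailing '/' only when non-empty,
/// and the secondary namespace is only meaningful inside a primary one.
fn legacy_compatible_key(primary_namespace: &str, secondary_namespace: &str, key: &str) -> String {
    let mut out = String::new();
    if !primary_namespace.is_empty() {
        out.push_str(primary_namespace);
        out.push('/');
        if !secondary_namespace.is_empty() {
            out.push_str(secondary_namespace);
            out.push('/');
        }
    }
    out.push_str(key);
    out
}

fn main() {
    // Namespaced entry, as a `ChannelMonitor` store might use:
    assert_eq!(legacy_compatible_key("monitors", "", "deadbeef_1"), "monitors/deadbeef_1");
    // Top-level entry with no namespaces:
    assert_eq!(legacy_compatible_key("", "", "manager"), "manager");
    // Both namespaces populated:
    assert_eq!(legacy_compatible_key("monitors", "updates", "1"), "monitors/updates/1");
}
```

Feeding such keys to the old `KVStorePersister` interface is what makes the two storage layouts line up during migration.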
+ +## Node Compatibility + * LDK now ignores `error` messages generated by LND in response to a + `shutdown` message, avoiding force-closes due to LND bug 6039. This may + lead to non-trivial bandwidth usage with LND peers exhibiting this bug + during the cooperative shutdown process (#2507). + +## Security +0.0.117 fixes several loss-of-funds vulnerabilities in anchor output channels +(support for which was added in 0.0.116), in reorg handling, and when accepting +channel(s) from counterparties which are miners. + * When a counterparty broadcasts their latest commitment transaction for a + channel with anchor outputs, we'd previously fail to build claiming + transactions against any HTLC outputs in that transaction. This could lead + to loss of funds if the counterparty is able to eventually claim the HTLC + after a timeout (#2606). + * On-chain HTLC claims on anchor channels previously spent the entire value of + any HTLCs as fee, which has now been fixed (#2587). + * If a channel is closed via an on-chain commitment transaction confirmation + with a pending outbound HTLC in the commitment transaction, followed by a + reorg which replaces the confirmed commitment transaction with a different + (but non-revoked) commitment transaction, all before we learn the payment + preimage for this HTLC, we previously may not have generated a proper + claiming transaction for the HTLC's value (#2623). + * 0.0.117 now correctly handles channels for which our counterparty funded the + channel with a coinbase transaction. As such transactions are not spendable + until they've reached 100 confirmations, this could have resulted in + accepting HTLC(s) which are not enforceable on-chain (#1924). + +In total, this release features 121 files changed, 20477 insertions, 8184 +deletions in 381 commits from 27 authors, in alphabetical order: + * Alec Chen + * Allan Douglas R. 
de Oliveira + * Antonio Yang + * Arik Sosman + * Chris Waterson + * David Caseria + * DhananjayPurohit + * Dom Zippilli + * Duncan Dean + * Elias Rohrer + * Erik De Smedt + * Evan Feenstra + * Gabor Szabo + * Gursharan Singh + * Jeffrey Czyz + * Joseph Goulden + * Lalitmohansharma1 + * Matt Corallo + * Rachel Malonson + * Sergi Delgado Segura + * Valentine Wallace + * Vladimir Fomene + * Willem Van Lint + * Wilmer Paulino + * benthecarman + * jbesraa + * optout + + # 0.0.116 - Jul 21, 2023 - "Anchoring the Roadmap" ## API Updates diff --git a/fuzz/src/chanmon_consistency.rs b/fuzz/src/chanmon_consistency.rs index 6a2007165d7..2a1b9e9a70a 100644 --- a/fuzz/src/chanmon_consistency.rs +++ b/fuzz/src/chanmon_consistency.rs @@ -155,7 +155,7 @@ impl chain::Watch for TestChainMonitor { }; let deserialized_monitor = <(BlockHash, channelmonitor::ChannelMonitor)>:: read(&mut Cursor::new(&map_entry.get().1), (&*self.keys, &*self.keys)).unwrap().1; - deserialized_monitor.update_monitor(update, &&TestBroadcaster{}, &FuzzEstimator { ret_val: atomic::AtomicU32::new(253) }, &self.logger).unwrap(); + deserialized_monitor.update_monitor(update, &&TestBroadcaster{}, &&FuzzEstimator { ret_val: atomic::AtomicU32::new(253) }, &self.logger).unwrap(); let mut ser = VecWriter(Vec::new()); deserialized_monitor.write(&mut ser).unwrap(); map_entry.insert((update.update_id, ser.0)); diff --git a/lightning-background-processor/Cargo.toml b/lightning-background-processor/Cargo.toml index 1953111b50b..b0bfff38f02 100644 --- a/lightning-background-processor/Cargo.toml +++ b/lightning-background-processor/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-background-processor" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Valentine Wallace "] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -22,11 +22,11 @@ default = ["std"] [dependencies] bitcoin = { version = "0.29.0", default-features = false } -lightning = { version = 
"0.0.117-rc1", path = "../lightning", default-features = false } -lightning-rapid-gossip-sync = { version = "0.0.117-rc1", path = "../lightning-rapid-gossip-sync", default-features = false } +lightning = { version = "0.0.117", path = "../lightning", default-features = false } +lightning-rapid-gossip-sync = { version = "0.0.117", path = "../lightning-rapid-gossip-sync", default-features = false } [dev-dependencies] tokio = { version = "1.14", features = [ "macros", "rt", "rt-multi-thread", "sync", "time" ] } -lightning = { version = "0.0.117-rc1", path = "../lightning", features = ["_test_utils"] } -lightning-invoice = { version = "0.25.0-rc1", path = "../lightning-invoice" } -lightning-persister = { version = "0.0.117-rc1", path = "../lightning-persister" } +lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] } +lightning-invoice = { version = "0.25.0", path = "../lightning-invoice" } +lightning-persister = { version = "0.0.117", path = "../lightning-persister" } diff --git a/lightning-background-processor/src/lib.rs b/lightning-background-processor/src/lib.rs index efa1a42142c..2682e3824b6 100644 --- a/lightning-background-processor/src/lib.rs +++ b/lightning-background-processor/src/lib.rs @@ -501,7 +501,7 @@ use core::task; /// could setup `process_events_async` like this: /// ``` /// # use lightning::io; -/// # use std::sync::{Arc, Mutex}; +/// # use std::sync::{Arc, RwLock}; /// # use std::sync::atomic::{AtomicBool, Ordering}; /// # use lightning_background_processor::{process_events_async, GossipSync}; /// # struct MyStore {} @@ -528,11 +528,11 @@ use core::task; /// # type MyFilter = dyn lightning::chain::Filter + Send + Sync; /// # type MyLogger = dyn lightning::util::logger::Logger + Send + Sync; /// # type MyChainMonitor = lightning::chain::chainmonitor::ChainMonitor, Arc, Arc, Arc, Arc>; -/// # type MyPeerManager = lightning::ln::peer_handler::SimpleArcPeerManager; +/// # type MyPeerManager = 
lightning::ln::peer_handler::SimpleArcPeerManager, MyLogger>; /// # type MyNetworkGraph = lightning::routing::gossip::NetworkGraph>; /// # type MyGossipSync = lightning::routing::gossip::P2PGossipSync, Arc, Arc>; /// # type MyChannelManager = lightning::ln::channelmanager::SimpleArcChannelManager; -/// # type MyScorer = Mutex, Arc>>; +/// # type MyScorer = RwLock, Arc>>; /// /// # async fn setup_background_processing(my_persister: Arc, my_event_handler: Arc, my_chain_monitor: Arc, my_channel_manager: Arc, my_gossip_sync: Arc, my_logger: Arc, my_scorer: Arc, my_peer_manager: Arc) { /// let background_persister = Arc::clone(&my_persister); @@ -1181,7 +1181,7 @@ mod tests { let network_graph = Arc::new(NetworkGraph::new(network, logger.clone())); let scorer = Arc::new(Mutex::new(TestScorer::new())); let seed = [i as u8; 32]; - let router = Arc::new(DefaultRouter::new(network_graph.clone(), logger.clone(), seed, scorer.clone(), ())); + let router = Arc::new(DefaultRouter::new(network_graph.clone(), logger.clone(), seed, scorer.clone(), Default::default())); let chain_source = Arc::new(test_utils::TestChainSource::new(Network::Bitcoin)); let kv_store = Arc::new(FilesystemStore::new(format!("{}_persister_{}", &persist_dir, i).into())); let now = Duration::from_secs(genesis_block.header.time as u64); diff --git a/lightning-block-sync/Cargo.toml b/lightning-block-sync/Cargo.toml index 9cb563ee425..be066c8577f 100644 --- a/lightning-block-sync/Cargo.toml +++ b/lightning-block-sync/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-block-sync" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Jeffrey Czyz", "Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -19,11 +19,11 @@ rpc-client = [ "serde_json", "chunked_transfer" ] [dependencies] bitcoin = "0.29.0" -lightning = { version = "0.0.117-rc1", path = "../lightning" } +lightning = { version = "0.0.117", path = "../lightning" } tokio = { version = 
"1.0", features = [ "io-util", "net", "time" ], optional = true } serde_json = { version = "1.0", optional = true } chunked_transfer = { version = "1.4", optional = true } [dev-dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] } tokio = { version = "1.14", features = [ "macros", "rt" ] } diff --git a/lightning-custom-message/Cargo.toml b/lightning-custom-message/Cargo.toml index 9e0b2998bbd..9283d804d9f 100644 --- a/lightning-custom-message/Cargo.toml +++ b/lightning-custom-message/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-custom-message" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Jeffrey Czyz"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -15,4 +15,4 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] bitcoin = "0.29.0" -lightning = { version = "0.0.117-rc1", path = "../lightning" } +lightning = { version = "0.0.117", path = "../lightning" } diff --git a/lightning-invoice/Cargo.toml b/lightning-invoice/Cargo.toml index eb010c867bd..472674c1311 100644 --- a/lightning-invoice/Cargo.toml +++ b/lightning-invoice/Cargo.toml @@ -1,7 +1,7 @@ [package] name = "lightning-invoice" description = "Data structures to parse and serialize BOLT11 lightning invoices" -version = "0.25.0-rc1" +version = "0.25.0" authors = ["Sebastian Geisler "] documentation = "https://docs.rs/lightning-invoice/" license = "MIT OR Apache-2.0" @@ -21,7 +21,7 @@ std = ["bitcoin_hashes/std", "num-traits/std", "lightning/std", "bech32/std"] [dependencies] bech32 = { version = "0.9.0", default-features = false } -lightning = { version = "0.0.117-rc1", path = "../lightning", default-features = false } +lightning = { version = "0.0.117", path = "../lightning", default-features = false } secp256k1 = { version = "0.24.0", default-features = false, features = ["recovery", "alloc"] } 
num-traits = { version = "0.2.8", default-features = false } bitcoin_hashes = { version = "0.11", default-features = false } @@ -30,6 +30,6 @@ serde = { version = "1.0.118", optional = true } bitcoin = { version = "0.29.0", default-features = false } [dev-dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", default-features = false, features = ["_test_utils"] } +lightning = { version = "0.0.117", path = "../lightning", default-features = false, features = ["_test_utils"] } hex = "0.4" serde_json = { version = "1"} diff --git a/lightning-invoice/src/payment.rs b/lightning-invoice/src/payment.rs index 08be3d54ba5..0247913634a 100644 --- a/lightning-invoice/src/payment.rs +++ b/lightning-invoice/src/payment.rs @@ -9,7 +9,8 @@ //! Convenient utilities for paying Lightning invoices. -use crate::{Bolt11Invoice, Vec}; +use crate::Bolt11Invoice; +use crate::prelude::*; use bitcoin_hashes::Hash; diff --git a/lightning-net-tokio/Cargo.toml b/lightning-net-tokio/Cargo.toml index d0fab67c612..8a4ebd5d950 100644 --- a/lightning-net-tokio/Cargo.toml +++ b/lightning-net-tokio/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-net-tokio" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning/" @@ -16,9 +16,9 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] bitcoin = "0.29.0" -lightning = { version = "0.0.117-rc1", path = "../lightning" } +lightning = { version = "0.0.117", path = "../lightning" } tokio = { version = "1.0", features = [ "rt", "sync", "net", "time" ] } [dev-dependencies] tokio = { version = "1.14", features = [ "macros", "rt", "rt-multi-thread", "sync", "net", "time" ] } -lightning = { version = "0.0.117-rc1", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] } diff --git a/lightning-persister/Cargo.toml b/lightning-persister/Cargo.toml 
index 772bfd9a12e..1a5ce6b7b4d 100644 --- a/lightning-persister/Cargo.toml +++ b/lightning-persister/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-persister" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Valentine Wallace", "Matt Corallo"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -15,7 +15,7 @@ rustdoc-args = ["--cfg", "docsrs"] [dependencies] bitcoin = "0.29.0" -lightning = { version = "0.0.117-rc1", path = "../lightning" } +lightning = { version = "0.0.117", path = "../lightning" } [target.'cfg(windows)'.dependencies] windows-sys = { version = "0.48.0", default-features = false, features = ["Win32_Storage_FileSystem", "Win32_Foundation"] } @@ -24,5 +24,5 @@ windows-sys = { version = "0.48.0", default-features = false, features = ["Win32 criterion = { version = "0.4", optional = true, default-features = false } [dev-dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] } bitcoin = { version = "0.29.0", default-features = false } diff --git a/lightning-rapid-gossip-sync/Cargo.toml b/lightning-rapid-gossip-sync/Cargo.toml index dafedc63d34..8c04f3a0d31 100644 --- a/lightning-rapid-gossip-sync/Cargo.toml +++ b/lightning-rapid-gossip-sync/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-rapid-gossip-sync" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Arik Sosman "] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -15,11 +15,11 @@ no-std = ["lightning/no-std"] std = ["lightning/std"] [dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", default-features = false } +lightning = { version = "0.0.117", path = "../lightning", default-features = false } bitcoin = { version = "0.29.0", default-features = false } [target.'cfg(ldk_bench)'.dependencies] criterion = { version = 
"0.4", optional = true, default-features = false } [dev-dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", features = ["_test_utils"] } +lightning = { version = "0.0.117", path = "../lightning", features = ["_test_utils"] } diff --git a/lightning-transaction-sync/Cargo.toml b/lightning-transaction-sync/Cargo.toml index 499984a3e92..02c94a3eb57 100644 --- a/lightning-transaction-sync/Cargo.toml +++ b/lightning-transaction-sync/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning-transaction-sync" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Elias Rohrer"] license = "MIT OR Apache-2.0" repository = "https://github.com/lightningdevkit/rust-lightning" @@ -21,7 +21,7 @@ esplora-blocking = ["esplora-client/blocking"] async-interface = [] [dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", default-features = false } +lightning = { version = "0.0.117", path = "../lightning", default-features = false } bitcoin = { version = "0.29.0", default-features = false } bdk-macros = "0.6" futures = { version = "0.3", optional = true } @@ -29,7 +29,7 @@ esplora-client = { version = "0.4", default-features = false, optional = true } reqwest = { version = "0.11", optional = true, default-features = false, features = ["json"] } [dev-dependencies] -lightning = { version = "0.0.117-rc1", path = "../lightning", features = ["std"] } +lightning = { version = "0.0.117", path = "../lightning", features = ["std"] } electrsd = { version = "0.22.0", features = ["legacy", "esplora_a33e97e1", "bitcoind_23_0"] } electrum-client = "0.12.0" tokio = { version = "1.14.0", features = ["full"] } diff --git a/lightning/Cargo.toml b/lightning/Cargo.toml index 09f88018f07..77d1fdba377 100644 --- a/lightning/Cargo.toml +++ b/lightning/Cargo.toml @@ -1,6 +1,6 @@ [package] name = "lightning" -version = "0.0.117-rc1" +version = "0.0.117" authors = ["Matt Corallo"] license = "MIT OR Apache-2.0" repository = 
"https://github.com/lightningdevkit/rust-lightning/" diff --git a/lightning/src/chain/chainmonitor.rs b/lightning/src/chain/chainmonitor.rs index 261c0471ca4..e87d082d9a7 100644 --- a/lightning/src/chain/chainmonitor.rs +++ b/lightning/src/chain/chainmonitor.rs @@ -324,7 +324,6 @@ where C::Target: chain::Filter, if self.update_monitor_with_chain_data(header, best_height, txdata, &process, funding_outpoint, &monitor_state).is_err() { // Take the monitors lock for writing so that we poison it and any future // operations going forward fail immediately. - core::mem::drop(monitor_state); core::mem::drop(monitor_lock); let _poison = self.monitors.write().unwrap(); log_error!(self.logger, "{}", err_str); @@ -767,7 +766,7 @@ where C::Target: chain::Filter, Some(monitor_state) => { let monitor = &monitor_state.monitor; log_trace!(self.logger, "Updating ChannelMonitor for channel {}", log_funding_info!(monitor)); - let update_res = monitor.update_monitor(update, &self.broadcaster, &*self.fee_estimator, &self.logger); + let update_res = monitor.update_monitor(update, &self.broadcaster, &self.fee_estimator, &self.logger); let update_id = MonitorUpdateId::from_monitor_update(update); let mut pending_monitor_updates = monitor_state.pending_monitor_updates.lock().unwrap(); diff --git a/lightning/src/chain/channelmonitor.rs b/lightning/src/chain/channelmonitor.rs index 58938016273..3ee9c49f624 100644 --- a/lightning/src/chain/channelmonitor.rs +++ b/lightning/src/chain/channelmonitor.rs @@ -1313,7 +1313,7 @@ impl ChannelMonitor { &self, updates: &ChannelMonitorUpdate, broadcaster: &B, - fee_estimator: F, + fee_estimator: &F, logger: &L, ) -> Result<(), ()> where @@ -2617,7 +2617,7 @@ impl ChannelMonitorImpl { self.pending_monitor_events.push(MonitorEvent::HolderForceClosed(self.funding_info.0)); } - pub fn update_monitor(&mut self, updates: &ChannelMonitorUpdate, broadcaster: &B, fee_estimator: F, logger: &L) -> Result<(), ()> + pub fn update_monitor(&mut self, updates: 
&ChannelMonitorUpdate, broadcaster: &B, fee_estimator: &F, logger: &L) -> Result<(), ()> where B::Target: BroadcasterInterface, F::Target: FeeEstimator, L::Target: Logger, @@ -2657,7 +2657,7 @@ impl ChannelMonitorImpl { panic!("Attempted to apply ChannelMonitorUpdates out of order, check the update_id before passing an update to update_monitor!"); } let mut ret = Ok(()); - let bounded_fee_estimator = LowerBoundedFeeEstimator::new(&*fee_estimator); + let bounded_fee_estimator = LowerBoundedFeeEstimator::new(&**fee_estimator); for update in updates.updates.iter() { match update { ChannelMonitorUpdateStep::LatestHolderCommitmentTXInfo { commitment_tx, htlc_outputs, claimed_htlcs, nondust_htlc_sources } => { @@ -4586,7 +4586,7 @@ mod tests { let broadcaster = TestBroadcaster::with_blocks(Arc::clone(&nodes[1].blocks)); assert!( - pre_update_monitor.update_monitor(&replay_update, &&broadcaster, &chanmon_cfgs[1].fee_estimator, &nodes[1].logger) + pre_update_monitor.update_monitor(&replay_update, &&broadcaster, &&chanmon_cfgs[1].fee_estimator, &nodes[1].logger) .is_err()); // Even though we error'd on the first update, we should still have generated an HTLC claim // transaction diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs index a4e6f3d9132..a6548e75edc 100644 --- a/lightning/src/ln/channelmanager.rs +++ b/lightning/src/ln/channelmanager.rs @@ -803,7 +803,7 @@ pub type SimpleArcChannelManager = ChannelManager< Arc>>, Arc, - Arc>>, Arc>>>, + Arc>>, Arc>>>, ProbabilisticScoringFeeParameters, ProbabilisticScorer>>, Arc>, >>, @@ -832,7 +832,7 @@ pub type SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, M, T, F, L> = &'e DefaultRouter< &'f NetworkGraph<&'g L>, &'g L, - &'h Mutex, &'g L>>, + &'h RwLock, &'g L>>, ProbabilisticScoringFeeParameters, ProbabilisticScorer<&'f NetworkGraph<&'g L>, &'g L> >, @@ -840,6 +840,9 @@ pub type SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, M, T, F, L> = >; /// A trivial trait which 
describes any [`ChannelManager`]. +/// +/// This is not exported to bindings users as general cover traits aren't useful in other +/// languages. pub trait AChannelManager { /// A type implementing [`chain::Watch`]. type Watch: chain::Watch + ?Sized; diff --git a/lightning/src/ln/msgs.rs b/lightning/src/ln/msgs.rs index 368964c65bd..d89d96f570e 100644 --- a/lightning/src/ln/msgs.rs +++ b/lightning/src/ln/msgs.rs @@ -40,10 +40,12 @@ use crate::onion_message; use crate::sign::{NodeSigner, Recipient}; use crate::prelude::*; +#[cfg(feature = "std")] use core::convert::TryFrom; use core::fmt; use core::fmt::Debug; use core::ops::Deref; +#[cfg(feature = "std")] use core::str::FromStr; use crate::io::{self, Cursor, Read}; use crate::io_extras::read_to_end; @@ -1067,7 +1069,10 @@ impl From for SocketAddress { } } -fn parse_onion_address(host: &str, port: u16) -> Result { +/// Parses an OnionV3 host and port into a [`SocketAddress::OnionV3`]. +/// +/// The host part must end with ".onion". +pub fn parse_onion_address(host: &str, port: u16) -> Result { if host.ends_with(".onion") { let domain = &host[..host.len() - ".onion".len()]; if domain.len() != 56 { diff --git a/lightning/src/ln/outbound_payment.rs b/lightning/src/ln/outbound_payment.rs index 1642f28efc7..b806b138a59 100644 --- a/lightning/src/ln/outbound_payment.rs +++ b/lightning/src/ln/outbound_payment.rs @@ -594,10 +594,26 @@ impl RecipientOnionFields { /// Note that if this field is non-empty, it will contain strictly increasing TLVs, each /// represented by a `(u64, Vec)` for its type number and serialized value respectively. /// This is validated when setting this field using [`Self::with_custom_tlvs`]. + #[cfg(not(c_bindings))] pub fn custom_tlvs(&self) -> &Vec<(u64, Vec)> { &self.custom_tlvs } + /// Gets the custom TLVs that will be sent or have been received. + /// + /// Custom TLVs allow sending extra application-specific data with a payment. 
They provide + /// additional flexibility on top of payment metadata, as while other implementations may + /// require `payment_metadata` to reflect metadata provided in an invoice, custom TLVs + /// do not have this restriction. + /// + /// Note that if this field is non-empty, it will contain strictly increasing TLVs, each + /// represented by a `(u64, Vec)` for its type number and serialized value respectively. + /// This is validated when setting this field using [`Self::with_custom_tlvs`]. + #[cfg(c_bindings)] + pub fn custom_tlvs(&self) -> Vec<(u64, Vec)> { + self.custom_tlvs.clone() + } + /// When we have received some HTLC(s) towards an MPP payment, as we receive further HTLC(s) we /// have to make sure that some fields match exactly across the parts. For those that aren't /// required to match, if they don't match we should remove them so as to not expose data diff --git a/lightning/src/ln/payment_tests.rs b/lightning/src/ln/payment_tests.rs index 72176760243..26ecbb0bd24 100644 --- a/lightning/src/ln/payment_tests.rs +++ b/lightning/src/ln/payment_tests.rs @@ -3781,7 +3781,7 @@ fn test_retry_custom_tlvs() { payment_hash, Some(payment_secret), events.pop().unwrap(), true, None).unwrap(); match payment_claimable { Event::PaymentClaimable { onion_fields, .. } => { - assert_eq!(onion_fields.unwrap().custom_tlvs(), &custom_tlvs); + assert_eq!(&onion_fields.unwrap().custom_tlvs()[..], &custom_tlvs[..]); }, _ => panic!("Unexpected event"), }; diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs index 425dc94ffe0..4970056f232 100644 --- a/lightning/src/ln/peer_handler.rs +++ b/lightning/src/ln/peer_handler.rs @@ -627,7 +627,7 @@ impl Peer { pub type SimpleArcPeerManager = PeerManager< SD, Arc>, - Arc>>, Arc, Arc>>, + Arc>>, C, Arc>>, Arc>, Arc, IgnoringMessageHandler, @@ -643,13 +643,13 @@ pub type SimpleArcPeerManager = PeerManager< /// /// This is not exported to bindings users as general type aliases don't make sense in bindings. 
pub type SimpleRefPeerManager< - 'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, 'i, 'j, 'k, 'l, 'm, 'n, SD, M, T, F, C, L + 'a, 'b, 'c, 'd, 'e, 'f, 'logger, 'h, 'i, 'j, 'graph, SD, M, T, F, C, L > = PeerManager< SD, - &'n SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'm, M, T, F, L>, - &'f P2PGossipSync<&'g NetworkGraph<&'f L>, &'h C, &'f L>, - &'i SimpleRefOnionMessenger<'g, 'm, 'n, L>, - &'f L, + &'j SimpleRefChannelManager<'a, 'b, 'c, 'd, 'e, 'graph, 'logger, 'i, M, T, F, L>, + &'f P2PGossipSync<&'graph NetworkGraph<&'logger L>, C, &'logger L>, + &'h SimpleRefOnionMessenger<'logger, 'i, 'j, L>, + &'logger L, IgnoringMessageHandler, &'c KeysManager >; diff --git a/lightning/src/offers/offer.rs b/lightning/src/offers/offer.rs index e0bc63e8b2b..60621b9dc74 100644 --- a/lightning/src/offers/offer.rs +++ b/lightning/src/offers/offer.rs @@ -366,7 +366,7 @@ macro_rules! offer_accessors { ($self: ident, $contents: expr) => { /// The chains that may be used when paying a requested invoice (e.g., bitcoin mainnet). /// Payments must be denominated in units of the minimal lightning-payable unit (e.g., msats) /// for the selected chain. - pub fn chains(&$self) -> Vec<$crate::bitcoin::blockdata::constants::ChainHash> { + pub fn chains(&$self) -> Vec { $contents.chains() } @@ -418,7 +418,7 @@ macro_rules! offer_accessors { ($self: ident, $contents: expr) => { } /// The public key used by the recipient to sign invoices. 
- pub fn signing_pubkey(&$self) -> $crate::bitcoin::secp256k1::PublicKey { + pub fn signing_pubkey(&$self) -> bitcoin::secp256k1::PublicKey { $contents.signing_pubkey() } } } diff --git a/lightning/src/onion_message/messenger.rs b/lightning/src/onion_message/messenger.rs index 2a9c830d38b..03dc6a5b002 100644 --- a/lightning/src/onion_message/messenger.rs +++ b/lightning/src/onion_message/messenger.rs @@ -246,6 +246,64 @@ pub trait CustomOnionMessageHandler { fn read_custom_message<R: io::Read>(&self, message_type: u64, buffer: &mut R) -> Result<Option<Self::CustomMessage>, msgs::DecodeError>; } + +/// Create an onion message with contents `message` to the destination of `path`. +/// Returns (introduction_node_id, onion_msg) +pub fn create_onion_message<ES: Deref, NS: Deref, T: CustomOnionMessageContents>( + entropy_source: &ES, node_signer: &NS, secp_ctx: &Secp256k1<secp256k1::All>, + path: OnionMessagePath, message: OnionMessageContents<T>, reply_path: Option<BlindedPath>, +) -> Result<(PublicKey, msgs::OnionMessage), SendError> +where + ES::Target: EntropySource, + NS::Target: NodeSigner, +{ + let OnionMessagePath { intermediate_nodes, mut destination } = path; + if let Destination::BlindedPath(BlindedPath { ref blinded_hops, .. }) = destination { + if blinded_hops.len() < 2 { + return Err(SendError::TooFewBlindedHops); + } + } + + if message.tlv_type() < 64 { return Err(SendError::InvalidMessage) } + + // If we are sending straight to a blinded path and we are the introduction node, we need to + // advance the blinded path by 1 hop so the second hop is the new introduction node.
+ if intermediate_nodes.len() == 0 { + if let Destination::BlindedPath(ref mut blinded_path) = destination { + let our_node_id = node_signer.get_node_id(Recipient::Node) + .map_err(|()| SendError::GetNodeIdFailed)?; + if blinded_path.introduction_node_id == our_node_id { + advance_path_by_one(blinded_path, node_signer, &secp_ctx) + .map_err(|()| SendError::BlindedPathAdvanceFailed)?; + } + } + } + + let blinding_secret_bytes = entropy_source.get_secure_random_bytes(); + let blinding_secret = SecretKey::from_slice(&blinding_secret_bytes[..]).expect("RNG is busted"); + let (introduction_node_id, blinding_point) = if intermediate_nodes.len() != 0 { + (intermediate_nodes[0], PublicKey::from_secret_key(&secp_ctx, &blinding_secret)) + } else { + match destination { + Destination::Node(pk) => (pk, PublicKey::from_secret_key(&secp_ctx, &blinding_secret)), + Destination::BlindedPath(BlindedPath { introduction_node_id, blinding_point, .. }) => + (introduction_node_id, blinding_point), + } + }; + let (packet_payloads, packet_keys) = packet_payloads_and_keys( + &secp_ctx, &intermediate_nodes, destination, message, reply_path, &blinding_secret) + .map_err(|e| SendError::Secp256k1(e))?; + + let prng_seed = entropy_source.get_secure_random_bytes(); + let onion_routing_packet = construct_onion_message_packet( + packet_payloads, packet_keys, prng_seed).map_err(|()| SendError::TooBigPacket)?; + + Ok((introduction_node_id, msgs::OnionMessage { + blinding_point, + onion_routing_packet + })) +} + impl OnionMessenger where @@ -283,13 +341,9 @@ where &self, path: OnionMessagePath, message: OnionMessageContents, reply_path: Option ) -> Result<(), SendError> { - let (introduction_node_id, onion_msg) = Self::create_onion_message( - &self.entropy_source, - &self.node_signer, - &self.secp_ctx, - path, - message, - reply_path + let (introduction_node_id, onion_msg) = create_onion_message( + &self.entropy_source, &self.node_signer, &self.secp_ctx, + path, message, reply_path )?; let mut 
pending_per_peer_msgs = self.pending_messages.lock().unwrap(); @@ -303,63 +357,6 @@ where } } - /// Create an onion message with contents `message` to the destination of `path`. - /// Returns (introduction_node_id, onion_msg) - pub fn create_onion_message<T: CustomOnionMessageContents>( - entropy_source: &ES, - node_signer: &NS, - secp_ctx: &Secp256k1<secp256k1::All>, - path: OnionMessagePath, - message: OnionMessageContents<T>, - reply_path: Option<BlindedPath>, - ) -> Result<(PublicKey, msgs::OnionMessage), SendError> { - let OnionMessagePath { intermediate_nodes, mut destination } = path; - if let Destination::BlindedPath(BlindedPath { ref blinded_hops, .. }) = destination { - if blinded_hops.len() < 2 { - return Err(SendError::TooFewBlindedHops); - } - } - - if message.tlv_type() < 64 { return Err(SendError::InvalidMessage) } - - // If we are sending straight to a blinded path and we are the introduction node, we need to - // advance the blinded path by 1 hop so the second hop is the new introduction node. - if intermediate_nodes.len() == 0 { - if let Destination::BlindedPath(ref mut blinded_path) = destination { - let our_node_id = node_signer.get_node_id(Recipient::Node) - .map_err(|()| SendError::GetNodeIdFailed)?; - if blinded_path.introduction_node_id == our_node_id { - advance_path_by_one(blinded_path, node_signer, &secp_ctx) - .map_err(|()| SendError::BlindedPathAdvanceFailed)?; - } - } - } - - let blinding_secret_bytes = entropy_source.get_secure_random_bytes(); - let blinding_secret = SecretKey::from_slice(&blinding_secret_bytes[..]).expect("RNG is busted"); - let (introduction_node_id, blinding_point) = if intermediate_nodes.len() != 0 { - (intermediate_nodes[0], PublicKey::from_secret_key(&secp_ctx, &blinding_secret)) - } else { - match destination { - Destination::Node(pk) => (pk, PublicKey::from_secret_key(&secp_ctx, &blinding_secret)), - Destination::BlindedPath(BlindedPath { introduction_node_id, blinding_point, ..
}) => - (introduction_node_id, blinding_point), - } - }; - let (packet_payloads, packet_keys) = packet_payloads_and_keys( - &secp_ctx, &intermediate_nodes, destination, message, reply_path, &blinding_secret) - .map_err(|e| SendError::Secp256k1(e))?; - - let prng_seed = entropy_source.get_secure_random_bytes(); - let onion_routing_packet = construct_onion_message_packet( - packet_payloads, packet_keys, prng_seed).map_err(|()| SendError::TooBigPacket)?; - - Ok((introduction_node_id, msgs::OnionMessage { - blinding_point, - onion_routing_packet - })) - } - fn respond_with_onion_message( &self, response: OnionMessageContents, path_id: Option<[u8; 32]>, reply_path: Option diff --git a/lightning/src/routing/router.rs b/lightning/src/routing/router.rs index 1297322fbd4..28d452c56b6 100644 --- a/lightning/src/routing/router.rs +++ b/lightning/src/routing/router.rs @@ -112,12 +112,12 @@ pub trait Router { /// [`find_route`]. /// /// [`ScoreLookUp`]: crate::routing::scoring::ScoreLookUp -pub struct ScorerAccountingForInFlightHtlcs<'a, SP: Sized, Sc: 'a + ScoreLookUp, S: Deref> { +pub struct ScorerAccountingForInFlightHtlcs<'a, S: Deref> where S::Target: ScoreLookUp { scorer: S, // Maps a channel's short channel id and its direction to the liquidity used up. inflight_htlcs: &'a InFlightHtlcs, } -impl<'a, SP: Sized, Sc: ScoreLookUp, S: Deref> ScorerAccountingForInFlightHtlcs<'a, SP, Sc, S> { +impl<'a, S: Deref> ScorerAccountingForInFlightHtlcs<'a, S> where S::Target: ScoreLookUp { /// Initialize a new `ScorerAccountingForInFlightHtlcs`. 
pub fn new(scorer: S, inflight_htlcs: &'a InFlightHtlcs) -> Self { ScorerAccountingForInFlightHtlcs { @@ -127,13 +127,8 @@ impl<'a, SP: Sized, Sc: ScoreLookUp, S: Deref> Sc } } -#[cfg(c_bindings)] -impl<'a, SP: Sized, Sc: ScoreLookUp, S: Deref> Writeable for ScorerAccountingForInFlightHtlcs<'a, SP, Sc, S> { - fn write<W: Writer>(&self, writer: &mut W) -> Result<(), io::Error> { self.scorer.write(writer) } -} - -impl<'a, SP: Sized, Sc: 'a + ScoreLookUp, S: Deref> ScoreLookUp for ScorerAccountingForInFlightHtlcs<'a, SP, Sc, S> { - type ScoreParams = Sc::ScoreParams; +impl<'a, S: Deref> ScoreLookUp for ScorerAccountingForInFlightHtlcs<'a, S> where S::Target: ScoreLookUp { + type ScoreParams = <S::Target as ScoreLookUp>::ScoreParams; fn channel_penalty_msat(&self, short_channel_id: u64, source: &NodeId, target: &NodeId, usage: ChannelUsage, score_params: &Self::ScoreParams) -> u64 { if let Some(used_liquidity) = self.inflight_htlcs.used_liquidity_msat( source, target, short_channel_id diff --git a/lightning/src/routing/scoring.rs b/lightning/src/routing/scoring.rs index c790f5df5c3..207e1d69bd9 100644 --- a/lightning/src/routing/scoring.rs +++ b/lightning/src/routing/scoring.rs @@ -89,7 +89,7 @@ macro_rules! define_score { ($($supertrait: path)*) => { /// `ScoreLookUp` is used to determine the penalty for a given channel. /// /// Scoring is in terms of fees willing to be paid in order to avoid routing through a channel. -pub trait ScoreLookUp $(: $supertrait)* { +pub trait ScoreLookUp { /// A configurable type which should contain various passed-in parameters for configuring the scorer, /// on a per-routefinding-call basis through to the scorer methods, /// which are used to determine the parameters for the suitability of channels for use. @@ -108,7 +108,7 @@ pub trait ScoreLookUp $(: $supertrait)* { } /// `ScoreUpdate` is used to update the scorer's internal state after a payment attempt.
-pub trait ScoreUpdate $(: $supertrait)* { +pub trait ScoreUpdate { /// Handles updating channel penalties after failing to route through a channel. fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64); @@ -122,8 +122,20 @@ pub trait ScoreUpdate $(: $supertrait)* { fn probe_successful(&mut self, path: &Path); } -impl<SP: Sized, S: ScoreLookUp<ScoreParams = SP>, T: Deref<Target = S> $(+ $supertrait)*> ScoreLookUp for T { - type ScoreParams = SP; +/// A trait which can both lookup and update routing channel penalty scores. +/// +/// This is used in places where both bounds are required and implemented for all types which +/// implement [`ScoreLookUp`] and [`ScoreUpdate`]. +/// +/// Bindings users may need to manually implement this for their custom scoring implementations. +pub trait Score : ScoreLookUp + ScoreUpdate $(+ $supertrait)* {} + +#[cfg(not(c_bindings))] +impl<T: ScoreLookUp + ScoreUpdate $(+ $supertrait)*> Score for T {} + +#[cfg(not(c_bindings))] +impl<S: ScoreLookUp, T: Deref<Target = S>> ScoreLookUp for T { + type ScoreParams = S::ScoreParams; fn channel_penalty_msat( &self, short_channel_id: u64, source: &NodeId, target: &NodeId, usage: ChannelUsage, score_params: &Self::ScoreParams ) -> u64 { @@ -131,7 +143,8 @@ impl<SP: Sized, S: ScoreLookUp<ScoreParams = SP>, T: Deref $(+ $supert } } -impl<S: ScoreUpdate, T: DerefMut<Target = S> $(+ $supertrait)*> ScoreUpdate for T { +#[cfg(not(c_bindings))] +impl<S: ScoreUpdate, T: DerefMut<Target = S>> ScoreUpdate for T { fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) { self.deref_mut().payment_path_failed(path, short_channel_id) } @@ -192,7 +205,7 @@ pub trait WriteableScore<'a>: LockableScore<'a> + Writeable {} #[cfg(not(c_bindings))] impl<'a, T> WriteableScore<'a> for T where T: LockableScore<'a> + Writeable {} #[cfg(not(c_bindings))] -impl<'a, T: 'a + ScoreLookUp + ScoreUpdate> LockableScore<'a> for Mutex<T> { +impl<'a, T: Score + 'a> LockableScore<'a> for Mutex<T> { type ScoreUpdate = T; type ScoreLookUp = T; @@ -209,7 +222,7 @@ impl<'a, T: 'a + ScoreLookUp + ScoreUpdate> LockableScore<'a> for Mutex<T> { } #[cfg(not(c_bindings))] -impl<'a, T: 'a + ScoreUpdate + ScoreLookUp> LockableScore<'a> for RefCell<T> { +impl<'a, T: Score + 'a>
LockableScore<'a> for RefCell<T> { type ScoreUpdate = T; type ScoreLookUp = T; @@ -226,7 +239,7 @@ impl<'a, T: 'a + ScoreUpdate + ScoreLookUp> LockableScore<'a> for RefCell<T> { } #[cfg(not(c_bindings))] -impl<'a, SP: Sized, T: 'a + ScoreUpdate + ScoreLookUp> LockableScore<'a> for RwLock<T> { +impl<'a, T: Score + 'a> LockableScore<'a> for RwLock<T> { type ScoreUpdate = T; type ScoreLookUp = T; @@ -244,12 +257,12 @@ impl<'a, SP: Sized, T: 'a + ScoreUpdate + ScoreLookUp> Lockabl #[cfg(c_bindings)] /// A concrete implementation of [`LockableScore`] which supports multi-threading. -pub struct MultiThreadedLockableScore<T: ScoreLookUp + ScoreUpdate> { +pub struct MultiThreadedLockableScore<T: Score> { score: RwLock<T>, } #[cfg(c_bindings)] -impl<'a, SP: Sized, T: 'a + ScoreLookUp + ScoreUpdate> LockableScore<'a> for MultiThreadedLockableScore<T> { +impl<'a, T: Score + 'a> LockableScore<'a> for MultiThreadedLockableScore<T> { type ScoreUpdate = T; type ScoreLookUp = T; type WriteLocked = MultiThreadedScoreLockWrite<'a, Self::ScoreUpdate>; @@ -265,17 +278,17 @@ impl<'a, SP: Sized, T: 'a + ScoreLookUp + ScoreUpdate> Lockable } #[cfg(c_bindings)] -impl<T: ScoreLookUp + ScoreUpdate> Writeable for MultiThreadedLockableScore<T> { +impl<T: Score> Writeable for MultiThreadedLockableScore<T> { fn write<W: Writer>(&self, writer: &mut W) -> Result<(), io::Error> { self.score.read().unwrap().write(writer) } } #[cfg(c_bindings)] -impl<'a, T: 'a + ScoreUpdate + ScoreLookUp> WriteableScore<'a> for MultiThreadedLockableScore<T> {} +impl<'a, T: Score + 'a> WriteableScore<'a> for MultiThreadedLockableScore<T> {} #[cfg(c_bindings)] -impl<T: ScoreLookUp + ScoreUpdate> MultiThreadedLockableScore<T> { +impl<T: Score> MultiThreadedLockableScore<T> { /// Creates a new [`MultiThreadedLockableScore`] given an underlying [`Score`]. pub fn new(score: T) -> Self { MultiThreadedLockableScore { score: RwLock::new(score) } @@ -284,14 +297,14 @@ impl<T: ScoreLookUp + ScoreUpdate> MultiThreadedLockableScore<T> { #[cfg(c_bindings)] /// A locked `MultiThreadedLockableScore`.
-pub struct MultiThreadedScoreLockRead<'a, T: ScoreLookUp>(RwLockReadGuard<'a, T>); +pub struct MultiThreadedScoreLockRead<'a, T: Score>(RwLockReadGuard<'a, T>); #[cfg(c_bindings)] /// A locked `MultiThreadedLockableScore`. -pub struct MultiThreadedScoreLockWrite<'a, T: ScoreUpdate>(RwLockWriteGuard<'a, T>); +pub struct MultiThreadedScoreLockWrite<'a, T: Score>(RwLockWriteGuard<'a, T>); #[cfg(c_bindings)] -impl<'a, T: 'a + ScoreLookUp> Deref for MultiThreadedScoreLockRead<'a, T> { +impl<'a, T: 'a + Score> Deref for MultiThreadedScoreLockRead<'a, T> { type Target = T; fn deref(&self) -> &Self::Target { @@ -300,14 +313,24 @@ impl<'a, T: 'a + ScoreLookUp> Deref for MultiThreadedScoreLockRead<'a, T> { } #[cfg(c_bindings)] -impl<'a, T: 'a + ScoreUpdate> Writeable for MultiThreadedScoreLockWrite<'a, T> { +impl<'a, T: Score> ScoreLookUp for MultiThreadedScoreLockRead<'a, T> { + type ScoreParams = T::ScoreParams; + fn channel_penalty_msat(&self, short_channel_id: u64, source: &NodeId, + target: &NodeId, usage: ChannelUsage, score_params: &Self::ScoreParams + ) -> u64 { + self.0.channel_penalty_msat(short_channel_id, source, target, usage, score_params) + } +} + +#[cfg(c_bindings)] +impl<'a, T: Score> Writeable for MultiThreadedScoreLockWrite<'a, T> { fn write<W: Writer>(&self, writer: &mut W) -> Result<(), io::Error> { self.0.write(writer) } } #[cfg(c_bindings)] -impl<'a, T: 'a + ScoreUpdate> Deref for MultiThreadedScoreLockWrite<'a, T> { +impl<'a, T: 'a + Score> Deref for MultiThreadedScoreLockWrite<'a, T> { type Target = T; fn deref(&self) -> &Self::Target { @@ -316,12 +339,31 @@ impl<'a, T: 'a + ScoreUpdate> Deref for MultiThreadedScoreLockWrite<'a, T> { } #[cfg(c_bindings)] -impl<'a, T: 'a + ScoreUpdate> DerefMut for MultiThreadedScoreLockWrite<'a, T> { +impl<'a, T: 'a + Score> DerefMut for MultiThreadedScoreLockWrite<'a, T> { fn deref_mut(&mut self) -> &mut Self::Target { self.0.deref_mut() } } +#[cfg(c_bindings)] +impl<'a, T: Score> ScoreUpdate for
MultiThreadedScoreLockWrite<'a, T> { + fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64) { + self.0.payment_path_failed(path, short_channel_id) + } + + fn payment_path_successful(&mut self, path: &Path) { + self.0.payment_path_successful(path) + } + + fn probe_failed(&mut self, path: &Path, short_channel_id: u64) { + self.0.probe_failed(path, short_channel_id) + } + + fn probe_successful(&mut self, path: &Path) { + self.0.probe_successful(path) + } +} + /// Proposed use of a channel passed as a parameter to [`ScoreLookUp::channel_penalty_msat`]. #[derive(Clone, Copy, Debug, PartialEq)] @@ -1417,6 +1459,10 @@ impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> ScoreUpdate for Prob } } +#[cfg(c_bindings)] +impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> Score for ProbabilisticScorerUsingTime<G, L, T> +where L::Target: Logger {} + mod approx { const BITS: u32 = 64; const HIGHEST_BIT: u32 = BITS - 1; diff --git a/lightning/src/sign/mod.rs b/lightning/src/sign/mod.rs index 291fbb10922..0a7d993df72 100644 --- a/lightning/src/sign/mod.rs +++ b/lightning/src/sign/mod.rs @@ -275,6 +275,9 @@ impl SpendableOutputDescriptor { /// /// Note that this does not include any signatures, just the information required to /// construct the transaction and sign it. + /// + /// This is not exported to bindings users as there is no standard serialization for an input. + /// See [`Self::create_spendable_outputs_psbt`] instead. pub fn to_psbt_input(&self) -> bitcoin::psbt::Input { match self { SpendableOutputDescriptor::StaticOutput { output, .. } => { @@ -901,42 +904,68 @@ impl InMemorySigner { /// Returns the counterparty's pubkeys. /// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before. - pub fn counterparty_pubkeys(&self) -> &ChannelPublicKeys { &self.get_channel_parameters().counterparty_parameters.as_ref().unwrap().pubkeys } + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called.
+ /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn counterparty_pubkeys(&self) -> Option<&ChannelPublicKeys> { + self.get_channel_parameters() + .and_then(|params| params.counterparty_parameters.as_ref().map(|params| &params.pubkeys)) + } + /// Returns the `contest_delay` value specified by our counterparty and applied on holder-broadcastable /// transactions, i.e., the amount of time that we have to wait to recover our funds if we /// broadcast a transaction. /// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before. - pub fn counterparty_selected_contest_delay(&self) -> u16 { self.get_channel_parameters().counterparty_parameters.as_ref().unwrap().selected_contest_delay } + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called. + /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn counterparty_selected_contest_delay(&self) -> Option<u16> { + self.get_channel_parameters() + .and_then(|params| params.counterparty_parameters.as_ref().map(|params| params.selected_contest_delay)) + } + /// Returns the `contest_delay` value specified by us and applied on transactions broadcastable /// by our counterparty, i.e., the amount of time that they have to wait to recover their funds /// if they broadcast a transaction. /// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before. - pub fn holder_selected_contest_delay(&self) -> u16 { self.get_channel_parameters().holder_selected_contest_delay } + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called. + /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn holder_selected_contest_delay(&self) -> Option<u16> { + self.get_channel_parameters().map(|params| params.holder_selected_contest_delay) + } + /// Returns whether the holder is the initiator.
/// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before. - pub fn is_outbound(&self) -> bool { self.get_channel_parameters().is_outbound_from_holder } + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called. + /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn is_outbound(&self) -> Option<bool> { + self.get_channel_parameters().map(|params| params.is_outbound_from_holder) + } + /// Funding outpoint /// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before. - pub fn funding_outpoint(&self) -> &OutPoint { self.get_channel_parameters().funding_outpoint.as_ref().unwrap() } + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called. + /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn funding_outpoint(&self) -> Option<&OutPoint> { + self.get_channel_parameters().map(|params| params.funding_outpoint.as_ref()).flatten() + } + /// Returns a [`ChannelTransactionParameters`] for this channel, to be used when verifying or /// building transactions. /// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before. - pub fn get_channel_parameters(&self) -> &ChannelTransactionParameters { - self.channel_parameters.as_ref().unwrap() + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called. + /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn get_channel_parameters(&self) -> Option<&ChannelTransactionParameters> { + self.channel_parameters.as_ref() } + /// Returns the channel type features of the channel parameters. Should be helpful for /// determining a channel's category, i.e. legacy/anchors/taproot/etc. /// - /// Will panic if [`ChannelSigner::provide_channel_parameters`] has not been called before.
- pub fn channel_type_features(&self) -> &ChannelTypeFeatures { - &self.get_channel_parameters().channel_type_features + /// Will return `None` if [`ChannelSigner::provide_channel_parameters`] has not been called. + /// In general, this is safe to `unwrap` only in [`ChannelSigner`] implementations. + pub fn channel_type_features(&self) -> Option<&ChannelTypeFeatures> { + self.get_channel_parameters().map(|params| &params.channel_type_features) } + /// Sign the single input of `spend_tx` at index `input_idx`, which spends the output described /// by `descriptor`, returning the witness stack for the input. /// @@ -955,14 +984,20 @@ impl InMemorySigner { if spend_tx.input[input_idx].previous_output != descriptor.outpoint.into_bitcoin_outpoint() { return Err(()); } let remotepubkey = bitcoin::PublicKey::new(self.pubkeys().payment_point); - let witness_script = if self.channel_type_features().supports_anchors_zero_fee_htlc_tx() { + // We cannot always assume that `channel_parameters` is set, so can't just call + // `self.channel_parameters()` or anything that relies on it + let supports_anchors_zero_fee_htlc_tx = self.channel_type_features() + .map(|features| features.supports_anchors_zero_fee_htlc_tx()) + .unwrap_or(false); + + let witness_script = if supports_anchors_zero_fee_htlc_tx { chan_utils::get_to_countersignatory_with_anchors_redeemscript(&remotepubkey.inner) } else { Script::new_p2pkh(&remotepubkey.pubkey_hash()) }; let sighash = hash_to_message!(&sighash::SighashCache::new(spend_tx).segwit_signature_hash(input_idx, &witness_script, descriptor.output.value, EcdsaSighashType::All).unwrap()[..]); let remotesig = sign_with_aux_rand(secp_ctx, &sighash, &self.payment_key, &self); - let payment_script = if self.channel_type_features().supports_anchors_zero_fee_htlc_tx() { + let payment_script = if supports_anchors_zero_fee_htlc_tx { witness_script.to_v0_p2wsh() } else { Script::new_v0_p2wpkh(&remotepubkey.wpubkey_hash().unwrap()) @@ -973,7 +1008,7 @@ impl InMemorySigner
{ let mut witness = Vec::with_capacity(2); witness.push(remotesig.serialize_der().to_vec()); witness[0].push(EcdsaSighashType::All as u8); - if self.channel_type_features().supports_anchors_zero_fee_htlc_tx() { + if supports_anchors_zero_fee_htlc_tx { witness.push(witness_script.to_bytes()); } else { witness.push(remotepubkey.to_bytes()); @@ -1067,13 +1102,16 @@ impl ChannelSigner for InMemorySigner { } } +const MISSING_PARAMS_ERR: &'static str = "ChannelSigner::provide_channel_parameters must be called before signing operations"; + impl EcdsaChannelSigner for InMemorySigner { fn sign_counterparty_commitment(&self, commitment_tx: &CommitmentTransaction, _preimages: Vec<PaymentPreimage>, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> { let trusted_tx = commitment_tx.trust(); let keys = trusted_tx.keys(); let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key); - let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey); + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey); let built_tx = trusted_tx.built_transaction(); // println!("sign_counterparty_commitment tx {:?} {:?} channel_value_satoshis {}", built_tx.txid, built_tx.transaction.encode(), self.channel_value_satoshis); @@ -1082,10 +1120,13 @@ impl EcdsaChannelSigner for InMemorySigner { let mut htlc_sigs = Vec::with_capacity(commitment_tx.htlcs().len()); for htlc in commitment_tx.htlcs() { - let channel_parameters = self.get_channel_parameters(); - let htlc_tx = chan_utils::build_htlc_transaction(&commitment_txid, commitment_tx.feerate_per_kw(), self.holder_selected_contest_delay(), htlc, &channel_parameters.channel_type_features, &keys.broadcaster_delayed_payment_key, &keys.revocation_key); - let htlc_redeemscript = chan_utils::get_htlc_redeemscript(&htlc, self.channel_type_features(), &keys);
- let htlc_sighashtype = if self.channel_type_features().supports_anchors_zero_fee_htlc_tx() { EcdsaSighashType::SinglePlusAnyoneCanPay } else { EcdsaSighashType::All }; + let channel_parameters = self.get_channel_parameters().expect(MISSING_PARAMS_ERR); + let holder_selected_contest_delay = + self.holder_selected_contest_delay().expect(MISSING_PARAMS_ERR); + let chan_type = &channel_parameters.channel_type_features; + let htlc_tx = chan_utils::build_htlc_transaction(&commitment_txid, commitment_tx.feerate_per_kw(), holder_selected_contest_delay, htlc, chan_type, &keys.broadcaster_delayed_payment_key, &keys.revocation_key); + let htlc_redeemscript = chan_utils::get_htlc_redeemscript(&htlc, chan_type, &keys); + let htlc_sighashtype = if chan_type.supports_anchors_zero_fee_htlc_tx() { EcdsaSighashType::SinglePlusAnyoneCanPay } else { EcdsaSighashType::All }; let htlc_sighash = hash_to_message!(&sighash::SighashCache::new(&htlc_tx).segwit_signature_hash(0, &htlc_redeemscript, htlc.amount_msat / 1000, htlc_sighashtype).unwrap()[..]); let holder_htlc_key = chan_utils::derive_private_key(&secp_ctx, &keys.per_commitment_point, &self.htlc_base_key); htlc_sigs.push(sign(secp_ctx, &htlc_sighash, &holder_htlc_key)); @@ -1100,10 +1141,11 @@ impl EcdsaChannelSigner for InMemorySigner { fn sign_holder_commitment_and_htlcs(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> { let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key); - let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey); + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey); let trusted_tx = commitment_tx.trust(); let sig = trusted_tx.built_transaction().sign_holder_commitment(&self.funding_key, &funding_redeemscript, self.channel_value_satoshis,
&self, secp_ctx); - let channel_parameters = self.get_channel_parameters(); + let channel_parameters = self.get_channel_parameters().expect(MISSING_PARAMS_ERR); let htlc_sigs = trusted_tx.get_htlc_sigs(&self.htlc_base_key, &channel_parameters.as_holder_broadcastable(), &self, secp_ctx)?; Ok((sig, htlc_sigs)) } @@ -1111,10 +1153,11 @@ impl EcdsaChannelSigner for InMemorySigner { #[cfg(any(test,feature = "unsafe_revoked_tx_signing"))] fn unsafe_sign_holder_commitment_and_htlcs(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> { let funding_pubkey = PublicKey::from_secret_key(secp_ctx, &self.funding_key); - let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey); + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &counterparty_keys.funding_pubkey); let trusted_tx = commitment_tx.trust(); let sig = trusted_tx.built_transaction().sign_holder_commitment(&self.funding_key, &funding_redeemscript, self.channel_value_satoshis, &self, secp_ctx); - let channel_parameters = self.get_channel_parameters(); + let channel_parameters = self.get_channel_parameters().expect(MISSING_PARAMS_ERR); let htlc_sigs = trusted_tx.get_htlc_sigs(&self.htlc_base_key, &channel_parameters.as_holder_broadcastable(), &self, secp_ctx)?; Ok((sig, htlc_sigs)) } @@ -1124,8 +1167,11 @@ impl EcdsaChannelSigner for InMemorySigner { let per_commitment_point = PublicKey::from_secret_key(secp_ctx, &per_commitment_key); let revocation_pubkey = chan_utils::derive_public_revocation_key(&secp_ctx, &per_commitment_point, &self.pubkeys().revocation_basepoint); let witness_script = { - let counterparty_delayedpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.counterparty_pubkeys().delayed_payment_basepoint); - chan_utils::get_revokeable_redeemscript(&revocation_pubkey,
self.holder_selected_contest_delay(), &counterparty_delayedpubkey) + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let holder_selected_contest_delay = + self.holder_selected_contest_delay().expect(MISSING_PARAMS_ERR); + let counterparty_delayedpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &counterparty_keys.delayed_payment_basepoint); + chan_utils::get_revokeable_redeemscript(&revocation_pubkey, holder_selected_contest_delay, &counterparty_delayedpubkey) }; let mut sighash_parts = sighash::SighashCache::new(justice_tx); let sighash = hash_to_message!(&sighash_parts.segwit_signature_hash(input, &witness_script, amount, EcdsaSighashType::All).unwrap()[..]); @@ -1137,9 +1183,11 @@ impl EcdsaChannelSigner for InMemorySigner { let per_commitment_point = PublicKey::from_secret_key(secp_ctx, &per_commitment_key); let revocation_pubkey = chan_utils::derive_public_revocation_key(&secp_ctx, &per_commitment_point, &self.pubkeys().revocation_basepoint); let witness_script = { - let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.counterparty_pubkeys().htlc_basepoint); + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &counterparty_keys.htlc_basepoint); let holder_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.pubkeys().htlc_basepoint); - chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, self.channel_type_features(), &counterparty_htlcpubkey, &holder_htlcpubkey, &revocation_pubkey) + let chan_type = self.channel_type_features().expect(MISSING_PARAMS_ERR); + chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, chan_type, &counterparty_htlcpubkey, &holder_htlcpubkey, &revocation_pubkey) }; let mut sighash_parts = sighash::SighashCache::new(justice_tx); let sighash = 
hash_to_message!(&sighash_parts.segwit_signature_hash(input, &witness_script, amount, EcdsaSighashType::All).unwrap()[..]); @@ -1163,9 +1211,11 @@ impl EcdsaChannelSigner for InMemorySigner { fn sign_counterparty_htlc_transaction(&self, htlc_tx: &Transaction, input: usize, amount: u64, per_commitment_point: &PublicKey, htlc: &HTLCOutputInCommitment, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> { let htlc_key = chan_utils::derive_private_key(&secp_ctx, &per_commitment_point, &self.htlc_base_key); let revocation_pubkey = chan_utils::derive_public_revocation_key(&secp_ctx, &per_commitment_point, &self.pubkeys().revocation_basepoint); - let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.counterparty_pubkeys().htlc_basepoint); + let counterparty_keys = self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR); + let counterparty_htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &counterparty_keys.htlc_basepoint); let htlcpubkey = chan_utils::derive_public_key(&secp_ctx, &per_commitment_point, &self.pubkeys().htlc_basepoint); - let witness_script = chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, self.channel_type_features(), &counterparty_htlcpubkey, &htlcpubkey, &revocation_pubkey); + let chan_type = self.channel_type_features().expect(MISSING_PARAMS_ERR); + let witness_script = chan_utils::get_htlc_redeemscript_with_explicit_keys(&htlc, chan_type, &counterparty_htlcpubkey, &htlcpubkey, &revocation_pubkey); let mut sighash_parts = sighash::SighashCache::new(htlc_tx); let sighash = hash_to_message!(&sighash_parts.segwit_signature_hash(input, &witness_script, amount, EcdsaSighashType::All).unwrap()[..]); Ok(sign_with_aux_rand(secp_ctx, &sighash, &htlc_key, &self)) @@ -1173,7 +1223,8 @@ impl EcdsaChannelSigner for InMemorySigner { fn sign_closing_transaction(&self, closing_tx: &ClosingTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> { let funding_pubkey = PublicKey::from_secret_key(secp_ctx,
&self.funding_key); - let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, &self.counterparty_pubkeys().funding_pubkey); + let counterparty_funding_key = &self.counterparty_pubkeys().expect(MISSING_PARAMS_ERR).funding_pubkey; + let channel_funding_redeemscript = make_funding_redeemscript(&funding_pubkey, counterparty_funding_key); Ok(closing_tx.trust().sign(&self.funding_key, &channel_funding_redeemscript, self.channel_value_satoshis, secp_ctx)) } diff --git a/lightning/src/util/persist.rs b/lightning/src/util/persist.rs index 2a022c37cc4..35e473c42a6 100644 --- a/lightning/src/util/persist.rs +++ b/lightning/src/util/persist.rs @@ -397,11 +397,7 @@ where pub fn new( kv_store: K, logger: L, maximum_pending_updates: u64, entropy_source: ES, signer_provider: SP, - ) -> Self - where - ES::Target: EntropySource + Sized, - SP::Target: SignerProvider + Sized, - { + ) -> Self { MonitorUpdatingPersister { kv_store, logger, @@ -416,12 +412,10 @@ where /// It is extremely important that your [`KVStore::read`] implementation uses the /// [`io::ErrorKind::NotFound`] variant correctly. For more information, please see the /// documentation for [`MonitorUpdatingPersister`]. - pub fn read_all_channel_monitors_with_updates( - &self, broadcaster: B, fee_estimator: F, + pub fn read_all_channel_monitors_with_updates( + &self, broadcaster: &B, fee_estimator: &F, ) -> Result::Signer>)>, io::Error> where - ES::Target: EntropySource + Sized, - SP::Target: SignerProvider + Sized, B::Target: BroadcasterInterface, F::Target: FeeEstimator, { @@ -432,8 +426,8 @@ where let mut res = Vec::with_capacity(monitor_list.len()); for monitor_key in monitor_list { res.push(self.read_channel_monitor_with_updates( - &broadcaster, - fee_estimator.clone(), + broadcaster, + fee_estimator, monitor_key, )?) } @@ -457,12 +451,10 @@ where /// /// Loading a large number of monitors will be faster if done in parallel. You can use this /// function to accomplish this. 
Take care to limit the number of parallel readers. - pub fn read_channel_monitor_with_updates( - &self, broadcaster: &B, fee_estimator: F, monitor_key: String, + pub fn read_channel_monitor_with_updates( + &self, broadcaster: &B, fee_estimator: &F, monitor_key: String, ) -> Result<(BlockHash, ChannelMonitor<::Signer>), io::Error> where - ES::Target: EntropySource + Sized, - SP::Target: SignerProvider + Sized, B::Target: BroadcasterInterface, F::Target: FeeEstimator, { @@ -484,7 +476,7 @@ where Err(err) => return Err(err), }; - monitor.update_monitor(&update, broadcaster, fee_estimator.clone(), &self.logger) + monitor.update_monitor(&update, broadcaster, fee_estimator, &self.logger) .map_err(|e| { log_error!( self.logger, @@ -949,17 +941,17 @@ mod tests { // Check that the persisted channel data is empty before any channels are // open. let mut persisted_chan_data_0 = persister_0.read_all_channel_monitors_with_updates( - broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap(); + &broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap(); assert_eq!(persisted_chan_data_0.len(), 0); let mut persisted_chan_data_1 = persister_1.read_all_channel_monitors_with_updates( - broadcaster_1, &chanmon_cfgs[1].fee_estimator).unwrap(); + &broadcaster_1, &&chanmon_cfgs[1].fee_estimator).unwrap(); assert_eq!(persisted_chan_data_1.len(), 0); // Helper to make sure the channel is on the expected update ID. macro_rules! 
check_persisted_data { ($expected_update_id: expr) => { persisted_chan_data_0 = persister_0.read_all_channel_monitors_with_updates( - broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap(); + &broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap(); // check that we stored only one monitor assert_eq!(persisted_chan_data_0.len(), 1); for (_, mon) in persisted_chan_data_0.iter() { @@ -978,7 +970,7 @@ mod tests { } } persisted_chan_data_1 = persister_1.read_all_channel_monitors_with_updates( - broadcaster_1, &chanmon_cfgs[1].fee_estimator).unwrap(); + &broadcaster_1, &&chanmon_cfgs[1].fee_estimator).unwrap(); assert_eq!(persisted_chan_data_1.len(), 1); for (_, mon) in persisted_chan_data_1.iter() { assert_eq!(mon.get_latest_update_id(), $expected_update_id); @@ -1043,7 +1035,7 @@ mod tests { check_persisted_data!(CLOSED_CHANNEL_UPDATE_ID); // Make sure the expected number of stale updates is present. - let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap(); + let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(&broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap(); let (_, monitor) = &persisted_chan_data[0]; let monitor_name = MonitorName::from(monitor.get_funding_txo().0); // The channel should have 0 updates, as it wrote a full monitor and consolidated. @@ -1151,7 +1143,7 @@ mod tests { // Check that the persisted channel data is empty before any channels are // open. 
-		let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+		let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(&broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
 		assert_eq!(persisted_chan_data.len(), 0);

 		// Create some initial channel
@@ -1162,7 +1154,7 @@ mod tests {
 		send_payment(&nodes[1], &vec![&nodes[0]][..], 4_000_000);

 		// Get the monitor and make a fake stale update at update_id=1 (lowest height of an update possible)
-		let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(broadcaster_0, &chanmon_cfgs[0].fee_estimator).unwrap();
+		let persisted_chan_data = persister_0.read_all_channel_monitors_with_updates(&broadcaster_0, &&chanmon_cfgs[0].fee_estimator).unwrap();
 		let (_, monitor) = &persisted_chan_data[0];
 		let monitor_name = MonitorName::from(monitor.get_funding_txo().0);
 		persister_0
diff --git a/lightning/src/util/ser.rs b/lightning/src/util/ser.rs
index af4de88a1a7..7971f35fc97 100644
--- a/lightning/src/util/ser.rs
+++ b/lightning/src/util/ser.rs
@@ -17,7 +17,7 @@ use crate::prelude::*;
 use crate::io::{self, Read, Seek, Write};
 use crate::io_extras::{copy, sink};
 use core::hash::Hash;
-use crate::sync::Mutex;
+use crate::sync::{Mutex, RwLock};
 use core::cmp;
 use core::convert::TryFrom;
 use core::ops::Deref;
@@ -1195,6 +1195,18 @@ impl<T: Writeable> Writeable for Mutex<T> {
 	}
 }

+impl<T: Readable> Readable for RwLock<T> {
+	fn read<R: Read>(r: &mut R) -> Result<Self, DecodeError> {
+		let t: T = Readable::read(r)?;
+		Ok(RwLock::new(t))
+	}
+}
+impl<T: Writeable> Writeable for RwLock<T> {
+	fn write<W: Writer>(&self, w: &mut W) -> Result<(), io::Error> {
+		self.read().unwrap().write(w)
+	}
+}
+
 impl<A: Readable, B: Readable> Readable for (A, B) {
 	fn read<R: Read>(r: &mut R) -> Result<Self, DecodeError> {
 		let a: A = Readable::read(r)?;
diff --git a/lightning/src/util/test_channel_signer.rs b/lightning/src/util/test_channel_signer.rs
index 7439f9004b2..417efcb6a43 100644
--- a/lightning/src/util/test_channel_signer.rs
+++ b/lightning/src/util/test_channel_signer.rs
@@ -89,7 +89,7 @@ impl TestChannelSigner {
 		}
 	}

-	pub fn channel_type_features(&self) -> &ChannelTypeFeatures { self.inner.channel_type_features() }
+	pub fn channel_type_features(&self) -> &ChannelTypeFeatures { self.inner.channel_type_features().unwrap() }

 	#[cfg(test)]
 	pub fn get_enforcement_state(&self) -> MutexGuard<EnforcementState> {
@@ -164,7 +164,7 @@ impl EcdsaChannelSigner for TestChannelSigner {
 	fn sign_holder_commitment_and_htlcs(&self, commitment_tx: &HolderCommitmentTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<(Signature, Vec<Signature>), ()> {
 		let trusted_tx = self.verify_holder_commitment_tx(commitment_tx, secp_ctx);
 		let commitment_txid = trusted_tx.txid();
-		let holder_csv = self.inner.counterparty_selected_contest_delay();
+		let holder_csv = self.inner.counterparty_selected_contest_delay().unwrap();
 		let state = self.state.lock().unwrap();
 		let commitment_number = trusted_tx.commitment_number();
@@ -225,7 +225,7 @@ impl EcdsaChannelSigner for TestChannelSigner {
 	}

 	fn sign_closing_transaction(&self, closing_tx: &ClosingTransaction, secp_ctx: &Secp256k1<secp256k1::All>) -> Result<Signature, ()> {
-		closing_tx.verify(self.inner.funding_outpoint().into_bitcoin_outpoint())
+		closing_tx.verify(self.inner.funding_outpoint().unwrap().into_bitcoin_outpoint())
 			.expect("derived different closing transaction");
 		Ok(self.inner.sign_closing_transaction(closing_tx, secp_ctx).unwrap())
 	}
@@ -266,15 +266,17 @@ impl Writeable for TestChannelSigner {

 impl TestChannelSigner {
 	fn verify_counterparty_commitment_tx<'a, T: secp256k1::Signing + secp256k1::Verification>(&self, commitment_tx: &'a CommitmentTransaction, secp_ctx: &Secp256k1<T>) -> TrustedCommitmentTransaction<'a> {
-		commitment_tx.verify(&self.inner.get_channel_parameters().as_counterparty_broadcastable(),
-			self.inner.counterparty_pubkeys(), self.inner.pubkeys(), secp_ctx)
-			.expect("derived different per-tx keys or built transaction")
+		commitment_tx.verify(
+			&self.inner.get_channel_parameters().unwrap().as_counterparty_broadcastable(),
+			self.inner.counterparty_pubkeys().unwrap(), self.inner.pubkeys(), secp_ctx
+		).expect("derived different per-tx keys or built transaction")
 	}

 	fn verify_holder_commitment_tx<'a, T: secp256k1::Signing + secp256k1::Verification>(&self, commitment_tx: &'a CommitmentTransaction, secp_ctx: &Secp256k1<T>) -> TrustedCommitmentTransaction<'a> {
-		commitment_tx.verify(&self.inner.get_channel_parameters().as_holder_broadcastable(),
-			self.inner.pubkeys(), self.inner.counterparty_pubkeys(), secp_ctx)
-			.expect("derived different per-tx keys or built transaction")
+		commitment_tx.verify(
+			&self.inner.get_channel_parameters().unwrap().as_holder_broadcastable(),
+			self.inner.pubkeys(), self.inner.counterparty_pubkeys().unwrap(), secp_ctx
+		).expect("derived different per-tx keys or built transaction")
 	}
 }
diff --git a/lightning/src/util/test_utils.rs b/lightning/src/util/test_utils.rs
index c1312c55e94..6614df4c9a6 100644
--- a/lightning/src/util/test_utils.rs
+++ b/lightning/src/util/test_utils.rs
@@ -140,10 +140,10 @@ impl<'a> Router for TestRouter<'a> {
 				// Since the path is reversed, the last element in our iteration is the first
 				// hop.
 				if idx == path.hops.len() - 1 {
-					scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(payer), &NodeId::from_pubkey(&hop.pubkey), usage, &());
+					scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(payer), &NodeId::from_pubkey(&hop.pubkey), usage, &Default::default());
 				} else {
 					let curr_hop_path_idx = path.hops.len() - 1 - idx;
-					scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(&path.hops[curr_hop_path_idx - 1].pubkey), &NodeId::from_pubkey(&hop.pubkey), usage, &());
+					scorer.channel_penalty_msat(hop.short_channel_id, &NodeId::from_pubkey(&path.hops[curr_hop_path_idx - 1].pubkey), &NodeId::from_pubkey(&hop.pubkey), usage, &Default::default());
 				}
 			}
 		}
@@ -153,7 +153,7 @@ impl<'a> Router for TestRouter<'a> {
 		let logger = TestLogger::new();
 		find_route(
 			payer, params, &self.network_graph, first_hops, &logger,
-			&ScorerAccountingForInFlightHtlcs::new(self.scorer.read().unwrap(), &inflight_htlcs), &(),
+			&ScorerAccountingForInFlightHtlcs::new(self.scorer.read().unwrap(), &inflight_htlcs), &Default::default(),
 			&[42; 32]
 		)
 	}
@@ -423,7 +423,7 @@ impl chainmonitor::Persist fo
 		chain::ChannelMonitorUpdateStatus::Completed
 	}

-	fn update_persisted_channel(&self, funding_txo: OutPoint, update: Option<&channelmonitor::ChannelMonitorUpdate>, _data: &channelmonitor::ChannelMonitor<Signer>, update_id: MonitorUpdateId) -> chain::ChannelMonitorUpdateStatus {
+	fn update_persisted_channel(&self, funding_txo: OutPoint, _update: Option<&channelmonitor::ChannelMonitorUpdate>, _data: &channelmonitor::ChannelMonitor<Signer>, update_id: MonitorUpdateId) -> chain::ChannelMonitorUpdateStatus {
 		let mut ret = chain::ChannelMonitorUpdateStatus::Completed;
 		if let Some(update_ret) = self.update_rets.lock().unwrap().pop_front() {
 			ret = update_ret;
diff --git a/pending_changelog/custom_tlv_downgrade.txt b/pending_changelog/custom_tlv_downgrade.txt
deleted file mode 100644
index 54a36ca28e0..00000000000
--- a/pending_changelog/custom_tlv_downgrade.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-## Backwards Compatibility
-
-* Since the addition of custom HTLC TLV support in 0.0.117, if you downgrade you may unintentionally accept payments with features you don't understand.
diff --git a/pending_changelog/kvstore.txt b/pending_changelog/kvstore.txt
deleted file mode 100644
index 3fe949500e6..00000000000
--- a/pending_changelog/kvstore.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-## Backwards Compatibility
-
-* Users migrating custom persistence backends from the pre-v0.0.117 `KVStorePersister` interface can use a concatenation of `[{primary_namespace}/[{secondary_namespace}/]]{key}` to recover a `key` compatible with the data model previously assumed by `KVStorePersister::persist`.
diff --git a/pending_changelog/monitorupdatingpersister.txt b/pending_changelog/monitorupdatingpersister.txt
deleted file mode 100644
index 24d63ffe526..00000000000
--- a/pending_changelog/monitorupdatingpersister.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-## Backwards Compatibility
-
-* The `MonitorUpdatingPersister` can read monitors stored conventionally, such as with the `KVStorePersister` from previous LDK versions. You can use this to migrate _to_ the `MonitorUpdatingPersister`; just "point" `MonitorUpdatingPersister` to existing, fully updated `ChannelMonitors`, and it will read them and work from there. However, downgrading is more complex. Monitors stored with `MonitorUpdatingPersister` have a prepended sentinel value that prevents them from being deserialized by previous `Persist` implementations. This is to ensure that they are not accidentally read and used while pending updates are still stored and not applied, as this could result in penalty transactions. Users who wish to downgrade should perform the following steps:
-  * Make a backup copy of all channel state.
-  * Ensure all updates are applied to the monitors. This may be done by loading all the existing data with the `MonitorUpdatingPersister::read_all_channel_monitors_with_updates` function. You can then write the resulting `ChannelMonitor`s using your previous `Persist` implementation.
\ No newline at end of file
diff --git a/pending_changelog/move_netaddress_to_socketaddress.txt b/pending_changelog/move_netaddress_to_socketaddress.txt
deleted file mode 100644
index 5153ed1d035..00000000000
--- a/pending_changelog/move_netaddress_to_socketaddress.txt
+++ /dev/null
@@ -1 +0,0 @@
-* The `NetAddress` has been moved to `SocketAddress`. The fieds `IPv4` and `IPv6` are also rename to `TcpIpV4` and `TcpIpV6` (#2358).
diff --git a/pending_changelog/new_channel_id_type_pr_2485.txt b/pending_changelog/new_channel_id_type_pr_2485.txt
deleted file mode 100644
index 4ae3c2c6237..00000000000
--- a/pending_changelog/new_channel_id_type_pr_2485.txt
+++ /dev/null
@@ -1 +0,0 @@
-* In several APIs, `channel_id` parameters have been changed from type `[u8; 32]` to newly introduced `ChannelId` type, from `ln` namespace (`lightning::ln::ChannelId`) (PR #2485)
diff --git a/pending_changelog/routes_route_params.txt b/pending_changelog/routes_route_params.txt
deleted file mode 100644
index e88a1c78116..00000000000
--- a/pending_changelog/routes_route_params.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-# Backwards Compatibility
-
-- `Route` objects written with LDK versions prior to 0.0.117 won't be retryable after being deserialized with LDK 0.0.117 or above.