rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5 heartwood 9e28c1c56c39ab567352f7a9dc994850e68e4a8b
{
"request": "trigger",
"version": 1,
"event_type": "patch",
"repository": {
"id": "rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5",
"name": "heartwood",
"description": "Radicle Heartwood Protocol & Stack",
"private": false,
"default_branch": "master",
"delegates": [
"did:key:z6MksFqXN3Yhqk8pTJdUGLwATkRfQvwZXPqR2qMEhbS9wzpT",
"did:key:z6MktaNvN1KVFMkSRAiN4qK5yvX1zuEEaseeX5sffhzPZRZW",
"did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"did:key:z6MkgFq6z5fkF2hioLLSNu1zP2qEL1aHXHZzGH1FLFGAnBGz",
"did:key:z6MkkPvBfjP4bQmco5Dm7UGsX2ruDBieEHi8n9DVJWX5sTEz"
]
},
"action": "Updated",
"patch": {
"id": "806043519e7439bebe26a66b247025ef5a7f8ef0",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"title": "Sans-IO Fetcher",
"state": {
"status": "open",
"conflicts": []
},
"before": "91eb6fc078727337449c203b8cf54aba4f40d816",
"after": "9e28c1c56c39ab567352f7a9dc994850e68e4a8b",
"commits": [
"9e28c1c56c39ab567352f7a9dc994850e68e4a8b",
"2663fe96ae621d79af42bc870c417706ccfcd63d",
"d57e6ac0bc7af8653d95839da988b9bf20df0586",
"ae8d9a10ab224d11898af29d7e7436c9d3fae293"
],
"target": "91eb6fc078727337449c203b8cf54aba4f40d816",
"labels": [],
"assignees": [],
"revisions": [
{
"id": "806043519e7439bebe26a66b247025ef5a7f8ef0",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "This patch series introduces a new family of types for keeping track\nof fetch state in the protocol.\n\nThis consolidates this tracking of state into one place, and removes\nit from the connection session data.\n\nIt uses sans-IO patterns so that the state transitions can be more\neasily tested without relying on complicated setup logic.\n\nThis data is then wired up to the `Service` to maintain the same (or\nbest as possible) semantics for fetching in the running node.\n\nNote there are some breaking changes due to the removal of the\n`fetching` state from the `State` type \u2013 which in turn was is used in\n`Seeds`.",
"base": "352c29c23ce2560750369aa50bc9f43bf3019d3f",
"oid": "996209f1096a782f9ccfffb4e7bd98af2c4e1996",
"timestamp": 1765631088
},
{
"id": "bb748d02470ecca76543aae8a78a78ae13e4ab5e",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "Rebase",
"base": "3168107df942dc71605e4fa25069569a43d467e9",
"oid": "400573526421cebe8b5b4e3bfd1dbd5f349da147",
"timestamp": 1767712546
},
{
"id": "92cb7d9cfe2f649df45dea498c33679bb17a17af",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "Review",
"base": "02318f199c6f29a2eede1f282e1f9b99927d27ec",
"oid": "bd3e5fca713a0f0f107786d56f1fb9a533db09d7",
"timestamp": 1767966060
},
{
"id": "8d72c9767474cb16439abed201a9ae1878163ef9",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "Changes:\n- Rebase\n- Use `radicle_core` for importing `NodeId` and `RepoId`\n- Create module structures for tests \u2013 removing header comments\n- Squash CHANGELOG entry into relevant commit",
"base": "02318f199c6f29a2eede1f282e1f9b99927d27ec",
"oid": "6e0692a9a815855e7fe091cc6f0838980f3e3ce3",
"timestamp": 1768320971
},
{
"id": "f1a778a3f455c36d89b90dfe2b6ec57ae54cfbd1",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "Changes:\n- Rebase\n- Adjust log levels",
"base": "d2ab7b1b46935c95a46d0e7ddac3130b595eb15a",
"oid": "9cdd08ab83c92a212dbe52f93908a26f15879afd",
"timestamp": 1769446890
},
{
"id": "ed990d0462b2d37db944b7d90d6d612a541caee2",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "Changes:\n- Adopt Lorenz's commit messages",
"base": "d2ab7b1b46935c95a46d0e7ddac3130b595eb15a",
"oid": "487c54647184ef3de717d52116433e1d0e3f7523",
"timestamp": 1769449964
},
{
"id": "521c5dfcde97e370c3e9d037c7e29f6a4a15a595",
"author": {
"id": "did:key:z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM",
"alias": "fintohaps"
},
"description": "Rebase",
"base": "91eb6fc078727337449c203b8cf54aba4f40d816",
"oid": "9e28c1c56c39ab567352f7a9dc994850e68e4a8b",
"timestamp": 1769468013
}
]
}
}
{
"response": "triggered",
"run_id": {
"id": "4abdf646-ecbb-43f1-9982-11fb433438f7"
},
"info_url": "https://cci.rad.levitte.org//4abdf646-ecbb-43f1-9982-11fb433438f7.html"
}
Started at: 2026-01-26 23:53:35.739914+01:00
Commands:
$ rad clone rad:z3gqcJUoA1n9HaHKufZs5FCSGazv5 .
✓ Creating checkout in ./...
✓ Remote cloudhead@z6MksFqXN3Yhqk8pTJdUGLwATkRfQvwZXPqR2qMEhbS9wzpT added
✓ Remote-tracking branch cloudhead@z6MksFqXN3Yhqk8pTJdUGLwATkRfQvwZXPqR2qMEhbS9wzpT/master created for z6MksFqXN3Yhqk8pTJdUGLwATkRfQvwZXPqR2qMEhbS9wzpT
✓ Remote cloudhead@z6MktaNvN1KVFMkSRAiN4qK5yvX1zuEEaseeX5sffhzPZRZW added
✓ Remote-tracking branch cloudhead@z6MktaNvN1KVFMkSRAiN4qK5yvX1zuEEaseeX5sffhzPZRZW/master created for z6MktaNvN1KVFMkSRAiN4qK5yvX1zuEEaseeX5sffhzPZRZW
✓ Remote fintohaps@z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM added
✓ Remote-tracking branch fintohaps@z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM/master created for z6MkireRatUThvd3qzfKht1S44wpm4FEWSSa4PRMTSQZ3voM
✓ Remote erikli@z6MkgFq6z5fkF2hioLLSNu1zP2qEL1aHXHZzGH1FLFGAnBGz added
✓ Remote-tracking branch erikli@z6MkgFq6z5fkF2hioLLSNu1zP2qEL1aHXHZzGH1FLFGAnBGz/master created for z6MkgFq6z5fkF2hioLLSNu1zP2qEL1aHXHZzGH1FLFGAnBGz
✓ Remote lorenz@z6MkkPvBfjP4bQmco5Dm7UGsX2ruDBieEHi8n9DVJWX5sTEz added
✓ Remote-tracking branch lorenz@z6MkkPvBfjP4bQmco5Dm7UGsX2ruDBieEHi8n9DVJWX5sTEz/master created for z6MkkPvBfjP4bQmco5Dm7UGsX2ruDBieEHi8n9DVJWX5sTEz
✓ Repository successfully cloned under /opt/radcis/ci.rad.levitte.org/cci/state/4abdf646-ecbb-43f1-9982-11fb433438f7/w/
╭────────────────────────────────────╮
│ heartwood │
│ Radicle Heartwood Protocol & Stack │
│ 136 issues · 16 patches │
╰────────────────────────────────────╯
Run `cd ./.` to go to the repository directory.
Exit code: 0
$ rad patch checkout 806043519e7439bebe26a66b247025ef5a7f8ef0
✓ Switched to branch patch/8060435 at revision 521c5df
✓ Branch patch/8060435 setup to track rad/patches/806043519e7439bebe26a66b247025ef5a7f8ef0
Exit code: 0
$ git config advice.detachedHead false
Exit code: 0
$ git checkout 9e28c1c56c39ab567352f7a9dc994850e68e4a8b
HEAD is now at 9e28c1c5 protocol/service: Wire up `FetcherService`
Exit code: 0
$ rad patch show 806043519e7439bebe26a66b247025ef5a7f8ef0 -p
╭─────────────────────────────────────────────────────────────────────────╮
│ Title Sans-IO Fetcher │
│ Patch 806043519e7439bebe26a66b247025ef5a7f8ef0 │
│ Author fintohaps z6Mkire…SQZ3voM │
│ Head 9e28c1c56c39ab567352f7a9dc994850e68e4a8b │
│ Base 91eb6fc078727337449c203b8cf54aba4f40d816 │
│ Branches patch/8060435 │
│ Commits ahead 4, behind 0 │
│ Status open │
│ │
│ This patch series introduces a new family of types for keeping track │
│ of fetch state in the protocol. │
│ │
│ This consolidates this tracking of state into one place, and removes │
│ it from the connection session data. │
│ │
│ It uses sans-IO patterns so that the state transitions can be more │
│ easily tested without relying on complicated setup logic. │
│ │
│ This data is then wired up to the `Service` to maintain the same (or │
│ best as possible) semantics for fetching in the running node. │
│ │
│ Note there are some breaking changes due to the removal of the │
│ `fetching` state from the `State` type – which in turn is used in │
│ `Seeds`. │
├─────────────────────────────────────────────────────────────────────────┤
│ 9e28c1c protocol/service: Wire up `FetcherService` │
│ 2663fe9 protocol: Introduce `FetcherService` │
│ d57e6ac protocol: Introduce `FetcherState` │
│ ae8d9a1 radicle(storage/refs): derive Hash for RefsAt │
├─────────────────────────────────────────────────────────────────────────┤
│ ● Revision 8060435 @ 996209f by fintohaps z6Mkire…SQZ3voM 1 month ago │
│ ↑ Revision bb748d0 @ 4005735 by fintohaps z6Mkire…SQZ3voM 2 weeks ago │
│ ↑ Revision 92cb7d9 @ bd3e5fc by lorenz z6MkkPv…WX5sTEz 2 weeks ago │
│ ↑ Revision 8d72c97 @ 6e0692a by fintohaps z6Mkire…SQZ3voM 1 week ago │
│ ↑ Revision f1a778a @ 9cdd08a by fintohaps z6Mkire…SQZ3voM 5 hours ago │
│ ↑ Revision ed990d0 @ 487c546 by fintohaps z6Mkire…SQZ3voM 5 hours ago │
│ ↑ Revision 521c5df @ 9e28c1c by fintohaps z6Mkire…SQZ3voM 4 seconds ago │
╰─────────────────────────────────────────────────────────────────────────╯
commit 9e28c1c56c39ab567352f7a9dc994850e68e4a8b
Author: Fintan Halpenny <fintan.halpenny@gmail.com>
Date: Fri Aug 8 13:13:04 2025 +0100
protocol/service: Wire up `FetcherService`
Wire up the new `FetcherService`. This reduces the `Service` fetch
methods down to:
- `fetch`
- `fetched`
- `dequeue_fetches`
- `fetch_refs_at`
This simplifies the code by off-loading the logic to the `fetcher`
family of types, rather than handling the intricacies in the `Service`
itself.
The `Service` now just performs the necessary I/O based on the
returned events from the `fetcher`.
This also removes the fetching state from the node sessions. This
follows the single-responsibility principle, and sessions now only care about
their connectivity.
Breaking changes:
- The `Connected` state of a peer no longer contains fetching
information, which was being returned as part of the JSON payload
result.
- The `rad debug` information for ongoing fetches contained the number of
listeners awaiting results; this has been removed.
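The sans-IO shape this commit describes can be sketched roughly as follows: a pure state machine that never performs I/O itself, only returns events for the caller (the `Service`) to act on. All names and fields here are illustrative stand-ins, not the actual `radicle-protocol` API — the real `FetcherService` additionally tracks `refs_at`, timeouts, and subscriber channels.

```rust
use std::collections::{HashMap, VecDeque};

// Illustrative stand-ins for the real `RepoId` / `NodeId` types.
type RepoId = u32;
type NodeId = u32;

/// Events returned to the caller, which performs the actual I/O.
#[derive(Debug, PartialEq)]
enum FetchEvent {
    Started { rid: RepoId, from: NodeId },
    Queued { rid: RepoId, from: NodeId },
    AlreadyFetching { rid: RepoId, from: NodeId },
}

/// A sans-IO fetch state machine: every transition is a pure function of the
/// current state, so it can be unit-tested without sockets, storage, or
/// worker threads.
struct FetcherState {
    active: HashMap<RepoId, NodeId>,
    queued: HashMap<NodeId, VecDeque<RepoId>>,
    max_concurrency: usize,
}

impl FetcherState {
    fn new(max_concurrency: usize) -> Self {
        Self {
            active: HashMap::new(),
            queued: HashMap::new(),
            max_concurrency,
        }
    }

    /// Request a fetch; the caller inspects the returned event to decide
    /// what I/O (if any) to perform.
    fn fetch(&mut self, rid: RepoId, from: NodeId) -> FetchEvent {
        if self.active.contains_key(&rid) {
            FetchEvent::AlreadyFetching { rid, from }
        } else if self.active.len() >= self.max_concurrency {
            self.queued.entry(from).or_default().push_back(rid);
            FetchEvent::Queued { rid, from }
        } else {
            self.active.insert(rid, from);
            FetchEvent::Started { rid, from }
        }
    }

    /// Mark a fetch as completed, returning the node it was fetched from.
    fn fetched(&mut self, rid: RepoId) -> Option<NodeId> {
        self.active.remove(&rid)
    }

    /// Pop the next queued fetch for a node, if any.
    fn dequeue(&mut self, node: &NodeId) -> Option<RepoId> {
        self.queued.get_mut(node)?.pop_front()
    }
}
```

Because the transitions return events rather than doing work, tests can drive the machine directly and assert on the events, which is the testability win the patch description refers to.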
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a125864d2..71296aa3d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -27,6 +27,14 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
pushed the default branch to the local user's namespace. The command is now
deprecated, and the user should use `git push` instead.
+## Breaking Changes
+
+- The `Connected` state of a peer no longer contains fetching information. This
+ information was returned when requesting `Seeds` on the control socket.
+ Callers should no longer expect the `fetching` inside that JSON result.
+- The `rad debug` information for ongoing fetches contained the number of
+ subscribers awaiting results; this has been removed.
+
## 1.6.1
## Fixed Bugs
diff --git a/crates/radicle-node/src/runtime/handle.rs b/crates/radicle-node/src/runtime/handle.rs
index e695a3d1d..d866bf07b 100644
--- a/crates/radicle-node/src/runtime/handle.rs
+++ b/crates/radicle-node/src/runtime/handle.rs
@@ -350,20 +350,22 @@ impl radicle::node::Handle for Handle {
fn debug(&self) -> Result<serde_json::Value, Self::Error> {
let (sender, receiver) = chan::bounded(1);
let query: Arc<QueryState> = Arc::new(move |state| {
+ let fetcher_state = state.fetching();
let debug = serde_json::json!({
"outboxSize": state.outbox().len(),
- "fetching": state.fetching().iter().map(|(rid, state)| {
- json!({
- "rid": rid,
- "from": state.from,
- "refsAt": state.refs_at,
- "subscribers": state.subscribers.len(),
- })
- }).collect::<Vec<_>>(),
- "queue": state.sessions().values().map(|sess| {
+ "fetching": fetcher_state.active_fetches()
+ .iter()
+ .map(|(rid, active)| {
+ json!({
+ "rid": rid,
+ "from": active.from(),
+ "refsAt": active.refs_at(),
+ })
+ }).collect::<Vec<_>>(),
+ "queue": fetcher_state.queued_fetches().iter().map(|(node, queue)| {
json!({
- "nid": sess.id,
- "queue": sess.queue.iter().map(|fetch| {
+ "nid": node,
+ "queue": queue.iter().map(|fetch| {
json!({
"rid": fetch.rid,
"from": fetch.from,
diff --git a/crates/radicle-node/src/tests.rs b/crates/radicle-node/src/tests.rs
index 3f0c7922b..6f20ed443 100644
--- a/crates/radicle-node/src/tests.rs
+++ b/crates/radicle-node/src/tests.rs
@@ -1512,6 +1512,7 @@ fn test_queued_fetch_max_capacity() {
// Finish the 1st fetch.
alice.fetched(rid1, bob.id, Ok(fetch::FetchResult::new(doc.clone())));
+
// Now the 1st fetch is done, the 2nd fetch is dequeued.
assert_matches!(alice.fetches().next(), Some((rid, _)) if rid == rid2);
// ... but not the third.
diff --git a/crates/radicle-protocol/src/fetcher/state.rs b/crates/radicle-protocol/src/fetcher/state.rs
index 6f64ff6db..38b1c357a 100644
--- a/crates/radicle-protocol/src/fetcher/state.rs
+++ b/crates/radicle-protocol/src/fetcher/state.rs
@@ -280,8 +280,8 @@ impl Default for Config {
/// An active fetch represents a repository being fetched by a particular node.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct ActiveFetch {
- pub(super) from: NodeId,
- pub(super) refs_at: Vec<RefsAt>,
+ pub from: NodeId,
+ pub refs_at: Vec<RefsAt>,
}
impl ActiveFetch {
diff --git a/crates/radicle-protocol/src/service.rs b/crates/radicle-protocol/src/service.rs
index b36c5e957..2dce4eca3 100644
--- a/crates/radicle-protocol/src/service.rs
+++ b/crates/radicle-protocol/src/service.rs
@@ -38,6 +38,9 @@ use radicle::storage::refs::SIGREFS_BRANCH;
use radicle::storage::RepositoryError;
use radicle_fetch::policy::SeedingPolicy;
+use crate::fetcher;
+use crate::fetcher::service::FetcherService;
+use crate::fetcher::FetcherState;
use crate::service::gossip::Store as _;
use crate::service::message::{
Announcement, AnnouncementMessage, Info, NodeAnnouncement, Ping, RefsAnnouncement, RefsStatus,
@@ -221,7 +224,9 @@ pub enum ConnectError {
SelfConnection,
#[error("outbound connection limit reached when attempting {nid} ({addr})")]
LimitReached { nid: NodeId, addr: Address },
- #[error("attempted connection to {nid}, via {addr} but addresses of this kind are not supported")]
+ #[error(
+ "attempted connection to {nid}, via {addr} but addresses of this kind are not supported"
+ )]
UnsupportedAddress { nid: NodeId, addr: Address },
}
@@ -301,25 +306,6 @@ pub enum CommandError {
Policy(#[from] policy::Error),
}
-/// Error returned by [`Service::try_fetch`].
-#[derive(thiserror::Error, Debug)]
-enum TryFetchError<'a> {
- #[error("ongoing fetch for repository exists")]
- AlreadyFetching(&'a mut FetchState),
- #[error("peer is not connected; cannot initiate fetch")]
- SessionNotConnected,
- #[error("peer fetch capacity reached; cannot initiate fetch")]
- SessionCapacityReached,
- #[error(transparent)]
- Namespaces(Box<NamespacesError>),
-}
-
-impl From<NamespacesError> for TryFetchError<'_> {
- fn from(e: NamespacesError) -> Self {
- Self::Namespaces(Box::new(e))
- }
-}
-
/// Fetch state for an ongoing fetch.
#[derive(Debug)]
pub struct FetchState {
@@ -331,15 +317,6 @@ pub struct FetchState {
pub subscribers: Vec<chan::Sender<FetchResult>>,
}
-impl FetchState {
- /// Add a subscriber to this fetch.
- fn subscribe(&mut self, c: chan::Sender<FetchResult>) {
- if !self.subscribers.iter().any(|s| s.same_channel(&c)) {
- self.subscribers.push(c);
- }
- }
-}
-
/// Holds all node stores.
#[derive(Debug)]
pub struct Stores<D>(D);
@@ -439,8 +416,7 @@ pub struct Service<D, S, G> {
inventory: InventoryAnnouncement,
/// Source of entropy.
rng: Rng,
- /// Ongoing fetches.
- fetching: HashMap<RepoId, FetchState>,
+ fetcher: FetcherService<chan::Sender<FetchResult>>,
/// Request/connection rate limiter.
limiter: RateLimiter,
/// Current seeded repositories bloom filter.
@@ -508,7 +484,15 @@ where
let last_timestamp = node.timestamp;
let clock = LocalTime::default(); // Updated on initialize.
let inventory = gossip::inventory(clock.into(), []); // Updated on initialize.
-
+ let fetcher = {
+ let config = fetcher::Config::new()
+ .with_max_concurrency(
+ std::num::NonZeroUsize::new(config.limits.fetch_concurrency.into())
+ .expect("fetch concurrency was zero, must be at least 1"),
+ )
+ .with_max_capacity(fetcher::MaxQueueSize::default());
+ FetcherService::new(config)
+ };
Self {
config,
storage,
@@ -522,7 +506,7 @@ where
outbox: Outbox::default(),
limiter,
sessions,
- fetching: HashMap::new(),
+ fetcher,
filter: Filter::empty(),
relayed_by: HashMap::default(),
last_idle: LocalTime::default(),
@@ -623,6 +607,10 @@ where
Events::from(self.emitter.subscribe())
}
+ pub fn fetcher(&self) -> &FetcherState {
+ self.fetcher.state()
+ }
+
/// Get I/O outbox.
pub fn outbox(&mut self) -> &mut Outbox {
&mut self.outbox
@@ -898,7 +886,7 @@ where
}
},
Command::Fetch(rid, seed, timeout, resp) => {
- self.fetch(rid, seed, timeout, Some(resp));
+ self.fetch(rid, seed, vec![], timeout, Some(resp));
}
Command::Seed(rid, scope, resp) => {
// Update our seeding policy.
@@ -990,7 +978,8 @@ where
if status.want.is_empty() {
debug!(target: "service", "Skipping fetch for {rid}, all refs are already in storage");
} else {
- return self._fetch(rid, from, status.want, timeout, channel);
+ self.fetch(rid, from, status.want, timeout, channel);
+ return true;
}
}
Err(e) => {
@@ -1001,247 +990,176 @@ where
false
}
- /// Initiate an outgoing fetch for some repository.
fn fetch(
- &mut self,
- rid: RepoId,
- from: NodeId,
- timeout: time::Duration,
- channel: Option<chan::Sender<FetchResult>>,
- ) -> bool {
- self._fetch(rid, from, vec![], timeout, channel)
- }
-
- fn _fetch(
&mut self,
rid: RepoId,
from: NodeId,
refs_at: Vec<RefsAt>,
timeout: time::Duration,
channel: Option<chan::Sender<FetchResult>>,
- ) -> bool {
- match self.try_fetch(rid, &from, refs_at.clone(), timeout) {
- Ok(fetching) => {
+ ) {
+ let session = {
+ let reason = format!("peer {from} is not connected; cannot initiate fetch");
+ let Some(session) = self.sessions.get_mut(&from) else {
if let Some(c) = channel {
- fetching.subscribe(c);
+ c.send(FetchResult::Failed { reason }).ok();
}
- return true;
- }
- Err(TryFetchError::AlreadyFetching(fetching)) => {
- // If we're already fetching the same refs from the requested peer, there's nothing
- // to do, we simply add the supplied channel to the list of subscribers so that it
- // is notified on completion. Otherwise, we queue a fetch with the requested peer.
- if fetching.from == from && fetching.refs_at == refs_at {
- debug!(target: "service", "Ignoring redundant fetch of {rid} from {from}");
-
- if let Some(c) = channel {
- fetching.subscribe(c);
- }
- } else {
- let fetch = QueuedFetch {
- rid,
- refs_at,
- from,
- timeout,
- channel,
- };
- debug!(target: "service", "Queueing fetch for {rid} with {from} (already fetching)..");
-
- self.queue_fetch(fetch);
- }
- }
- Err(TryFetchError::SessionCapacityReached) => {
- debug!(target: "service", "Fetch capacity reached for {from}, queueing {rid}..");
- self.queue_fetch(QueuedFetch {
- rid,
- refs_at,
- from,
- timeout,
- channel,
- });
- }
- Err(e) => {
+ return;
+ };
+ if !session.is_connected() {
if let Some(c) = channel {
- c.send(FetchResult::Failed {
- reason: e.to_string(),
- })
- .ok();
+ c.send(FetchResult::Failed { reason }).ok();
}
+ return;
}
- }
- false
- }
-
- fn queue_fetch(&mut self, fetch: QueuedFetch) {
- let Some(s) = self.sessions.get_mut(&fetch.from) else {
- log::debug!(target: "service", "Cannot queue fetch for unknown session {}", fetch.from);
- return;
+ session
};
- if let Err(e) = s.queue_fetch(fetch) {
- let fetch = e.inner();
- log::debug!(target: "service", "Unable to queue fetch for {} with {}: {e}", &fetch.rid, &fetch.from);
- }
- }
- // TODO: Buffer/throttle fetches.
- fn try_fetch(
- &mut self,
- rid: RepoId,
- from: &NodeId,
- refs_at: Vec<RefsAt>,
- timeout: time::Duration,
- ) -> Result<&mut FetchState, TryFetchError<'_>> {
- let from = *from;
- let Some(session) = self.sessions.get_mut(&from) else {
- return Err(TryFetchError::SessionNotConnected);
- };
- let fetching = self.fetching.entry(rid);
-
- trace!(target: "service", "Trying to fetch {refs_at:?} for {rid}..");
-
- let fetching = match fetching {
- Entry::Vacant(fetching) => fetching,
- Entry::Occupied(fetching) => {
- // We're already fetching this repo from some peer.
- return Err(TryFetchError::AlreadyFetching(fetching.into_mut()));
- }
- };
- // Sanity check: We shouldn't be fetching from this session, since we return above if we're
- // fetching from any session.
- debug_assert!(!session.is_fetching(&rid));
-
- if !session.is_connected() {
- // This can happen if a session disconnects in the time between asking for seeds to
- // fetch from, and initiating the fetch from one of those seeds.
- return Err(TryFetchError::SessionNotConnected);
- }
- if session.is_at_capacity() {
- // If we're already fetching multiple repos from this peer.
- return Err(TryFetchError::SessionCapacityReached);
- }
-
- let fetching = fetching.insert(FetchState {
+ let cmd = fetcher::state::command::Fetch {
from,
- refs_at: refs_at.clone(),
- subscribers: vec![],
- });
- self.outbox.fetch(
- session,
rid,
refs_at,
timeout,
- self.config.limits.fetch_pack_receive,
- );
+ };
+ let fetcher::service::FetchInitiated { event, rejected } = self.fetcher.fetch(cmd, channel);
- Ok(fetching)
+ if let Some(c) = rejected {
+ c.send(FetchResult::Failed {
+ reason: "fetch queue at capacity".to_string(),
+ })
+ .ok();
+ }
+
+ match event {
+ fetcher::state::event::Fetch::Started {
+ rid,
+ from,
+ refs_at,
+ timeout,
+ } => {
+ debug!(target: "service", "Starting fetch for {rid} from {from}");
+ self.outbox.fetch(
+ session,
+ rid,
+ refs_at,
+ timeout,
+ self.config.limits.fetch_pack_receive,
+ );
+ }
+ fetcher::state::event::Fetch::Queued { rid, from } => {
+ debug!(target: "service", "Queued fetch for {rid} from {from}");
+ }
+ fetcher::state::event::Fetch::AlreadyFetching { rid, from } => {
+ debug!(target: "service", "Already fetching {rid} from {from}");
+ }
+ fetcher::state::event::Fetch::QueueAtCapacity { rid, from, .. } => {
+ debug!(target: "service", "Queue at capacity for {from}, rejected {rid}");
+ }
+ }
}
pub fn fetched(
&mut self,
rid: RepoId,
- remote: NodeId,
+ from: NodeId,
result: Result<crate::worker::fetch::FetchResult, crate::worker::FetchError>,
) {
- let Some(fetching) = self.fetching.remove(&rid) else {
- debug!(target: "service", "Received unexpected fetch result for {rid}, from {remote}");
- return;
- };
- debug_assert_eq!(fetching.from, remote);
-
- if let Some(s) = self.sessions.get_mut(&remote) {
- // Mark this RID as fetched for this session.
- s.fetched(rid);
- }
+ let cmd = fetcher::state::command::Fetched { from, rid };
+ let fetcher::service::FetchCompleted { event, subscribers } = self.fetcher.fetched(cmd);
- // Notify all fetch subscribers of the fetch result. This is used when the user requests
- // a fetch via the CLI, for example.
- for sub in &fetching.subscribers {
- debug!(target: "service", "Found existing fetch request from {remote}, sending result..");
+ // Dequeue next fetches
+ self.dequeue_fetches();
- let result = match &result {
- Ok(success) => FetchResult::Success {
- updated: success.updated.clone(),
- namespaces: success.namespaces.clone(),
- clone: success.clone,
- },
- Err(e) => FetchResult::Failed {
- reason: e.to_string(),
- },
- };
- if sub.send(result).is_err() {
- debug!(target: "service", "Failed to send fetch result for {rid} from {remote}..");
- } else {
- debug!(target: "service", "Sent fetch result for {rid} from {remote}..");
- }
- }
-
- match result {
- Ok(crate::worker::fetch::FetchResult {
- updated,
- canonical,
- namespaces,
- clone,
- doc,
- }) => {
- info!(target: "service", "Fetched {rid} from {remote} successfully");
- // Update our routing table in case this fetch was user-initiated and doesn't
- // come from an announcement.
- self.seed_discovered(rid, remote, self.clock.into());
-
- for update in &updated {
- if update.is_skipped() {
- trace!(target: "service", "Ref skipped: {update} for {rid}");
- } else {
- debug!(target: "service", "Ref updated: {update} for {rid}");
- }
+ match event {
+ fetcher::state::event::Fetched::NotFound { from, rid } => {
+ debug!(target: "service", "Unexpected fetch result for {rid} from {from}");
+ }
+ fetcher::state::event::Fetched::Completed {
+ from,
+ rid,
+ refs_at: _,
+ } => {
+ // Notify responders
+ let fetch_result = match &result {
+ Ok(success) => FetchResult::Success {
+ updated: success.updated.clone(),
+ namespaces: success.namespaces.clone(),
+ clone: success.clone,
+ },
+ Err(e) => FetchResult::Failed {
+ reason: e.to_string(),
+ },
+ };
+ for responder in subscribers {
+ responder.send(fetch_result.clone()).ok();
}
- self.emitter.emit(Event::RefsFetched {
- remote,
- rid,
- updated: updated.clone(),
- });
- self.emitter
- .emit_all(canonical.into_iter().map(|(refname, target)| {
- Event::CanonicalRefUpdated {
- rid,
- refname,
- target,
+ match result {
+ Ok(crate::worker::fetch::FetchResult {
+ updated,
+ canonical,
+ namespaces,
+ clone,
+ doc,
+ }) => {
+ info!(target: "service", "Fetched {rid} from {from} successfully");
+ // Update our routing table in case this fetch was user-initiated and doesn't
+ // come from an announcement.
+ self.seed_discovered(rid, from, self.clock.into());
+
+ for update in &updated {
+ if update.is_skipped() {
+ trace!(target: "service", "Ref skipped: {update} for {rid}");
+ } else {
+ debug!(target: "service", "Ref updated: {update} for {rid}");
+ }
}
- }));
+ self.emitter.emit(Event::RefsFetched {
+ remote: from,
+ rid,
+ updated: updated.clone(),
+ });
+ self.emitter
+ .emit_all(canonical.into_iter().map(|(refname, target)| {
+ Event::CanonicalRefUpdated {
+ rid,
+ refname,
+ target,
+ }
+ }));
- // Announce our new inventory if this fetch was a full clone.
- // Only update and announce inventory for public repositories.
- if clone && doc.is_public() {
- debug!(target: "service", "Updating and announcing inventory for cloned repository {rid}..");
+ // Announce our new inventory if this fetch was a full clone.
+ // Only update and announce inventory for public repositories.
+ if clone && doc.is_public() {
+ debug!(target: "service", "Updating and announcing inventory for cloned repository {rid}..");
- if let Err(e) = self.add_inventory(rid) {
- warn!(target: "service", "Failed to announce inventory for {rid}: {e}");
- }
- }
+ if let Err(e) = self.add_inventory(rid) {
+ warn!(target: "service", "Failed to announce inventory for {rid}: {e}");
+ }
+ }
- // It's possible for a fetch to succeed but nothing was updated.
- if updated.is_empty() || updated.iter().all(|u| u.is_skipped()) {
- debug!(target: "service", "Nothing to announce, no refs were updated..");
- } else {
- // Finally, announce the refs. This is useful for nodes to know what we've synced,
- // beyond just knowing that we have added an item to our inventory.
- if let Err(e) = self.announce_refs(rid, doc.into(), namespaces, false) {
- warn!(target: "service", "Failed to announce new refs: {e}");
+ // It's possible for a fetch to succeed but nothing was updated.
+ if updated.is_empty() || updated.iter().all(|u| u.is_skipped()) {
+ debug!(target: "service", "Nothing to announce, no refs were updated..");
+ } else {
+ // Finally, announce the refs. This is useful for nodes to know what we've synced,
+ // beyond just knowing that we have added an item to our inventory.
+ if let Err(e) = self.announce_refs(rid, doc.into(), namespaces, false) {
+ warn!(target: "service", "Failed to announce new refs: {e}");
+ }
+ }
}
- }
- }
- Err(err) => {
- warn!(target: "service", "Fetch failed for {rid} from {remote}: {err}");
+ Err(err) => {
+ warn!(target: "service", "Fetch failed for {rid} from {from}: {err}");
- // For now, we only disconnect the remote in case of timeout. In the future,
- // there may be other reasons to disconnect.
- if err.is_timeout() {
- self.outbox.disconnect(remote, DisconnectReason::Fetch(err));
+ // For now, we only disconnect the peer in case of timeout. In the future,
+ // there may be other reasons to disconnect.
+ if err.is_timeout() {
+ self.outbox.disconnect(from, DisconnectReason::Fetch(err));
+ }
+ }
}
}
}
- // We can now try to dequeue more fetches.
- self.dequeue_fetches();
}
/// Attempt to dequeue fetches from all peers.
@@ -1258,38 +1176,42 @@ where
.map(|(k, _)| *k)
.collect::<Vec<_>>();
- // Try to dequeue once per session.
for nid in sessions {
- // SAFETY: All the keys we are iterating on exist.
#[allow(clippy::unwrap_used)]
let sess = self.sessions.get_mut(&nid).unwrap();
- if !sess.is_connected() || sess.is_at_capacity() {
+ if !sess.is_connected() {
continue;
}
- if let Some(QueuedFetch {
+ let Some(fetcher::QueuedFetch {
rid,
from,
refs_at,
timeout,
- channel,
- }) = sess.dequeue_fetch()
- {
- debug!(target: "service", "Dequeued fetch for {rid} from session {from}..");
+ }) = self.fetcher.dequeue(&nid)
+ else {
+ continue;
+ };
- if let Some(refs) = NonEmpty::from_vec(refs_at) {
- let repo_entry = self.policies.seed_policy(&rid).expect(
- "Service::dequeue_fetch: error accessing repo seeding configuration",
- );
- let SeedingPolicy::Allow { scope } = repo_entry.policy else {
- debug!(target: "service", "Repository {rid} is no longer seeded, skipping..");
- continue;
- };
- self.fetch_refs_at(rid, from, refs, scope, timeout, channel);
- } else {
- // If no refs are specified, always do a full fetch.
- self.fetch(rid, from, timeout, channel);
- }
+ // Check seeding policy
+ let repo_entry = self
+ .policies
+ .seed_policy(&rid)
+ .expect("error accessing repo seeding configuration");
+
+ let SeedingPolicy::Allow { scope } = repo_entry.policy else {
+ debug!(target: "service", "Repository {} no longer seeded, skipping", rid);
+ continue;
+ };
+
+ debug!(target: "service", "Dequeued fetch for {} from {}", rid, from);
+
+ // Channel is `None` in both cases since they will already be
+ // registered with the fetcher service.
+ if let Some(refs) = NonEmpty::from_vec(refs_at.clone()) {
+ self.fetch_refs_at(rid, from, refs, scope, timeout, None);
+ } else {
+ self.fetch(rid, from, refs_at, timeout, None);
}
}
}
@@ -1391,7 +1313,6 @@ where
self.config.is_persistent(&remote),
self.rng.clone(),
self.clock,
- self.config.limits.clone(),
));
self.outbox.write_all(peer, msgs);
}
@@ -1422,19 +1343,30 @@ where
let link = session.link;
let addr = session.addr.clone();
- self.fetching.retain(|_, fetching| {
- if fetching.from != remote {
- return true;
+ let cmd = fetcher::state::command::Cancel { from: remote };
+ let fetcher::service::FetchesCancelled { event, orphaned } = self.fetcher.cancel(cmd);
+
+ match event {
+ fetcher::state::event::Cancel::Unexpected { from } => {
+ debug!(target: "service", "No fetches to cancel for {from}");
}
- // Remove and fail any pending fetches from this remote node.
- for resp in &fetching.subscribers {
- resp.send(FetchResult::Failed {
- reason: format!("disconnected: {reason}"),
+ fetcher::state::event::Cancel::Canceled {
+ from,
+ active,
+ queued,
+ } => {
+ debug!(target: "service", "Cancelled {} ongoing, {} queued for {from}", active.len(), queued.len());
+ }
+ }
+
+ // Notify orphaned responders
+ for (rid, responder) in orphaned {
+ responder
+ .send(FetchResult::Failed {
+ reason: format!("failed fetch to {rid}, peer disconnected: {reason}"),
})
.ok();
- }
- false
- });
+ }
// Attempt to re-connect to persistent peers.
if self.config.is_persistent(&remote) {
@@ -1651,7 +1583,7 @@ where
for rid in missing {
debug!(target: "service", "Missing seeded inventory {rid}; initiating fetch..");
- self.fetch(rid, *announcer, FETCH_TIMEOUT, None);
+ self.fetch(rid, *announcer, vec![], FETCH_TIMEOUT, None);
}
return Ok(relay);
}
@@ -2284,13 +2216,7 @@ where
}
self.sessions.insert(
nid,
- Session::outbound(
- nid,
- addr.clone(),
- persistent,
- self.rng.clone(),
- self.config.limits.clone(),
- ),
+ Session::outbound(nid, addr.clone(), persistent, self.rng.clone()),
);
self.outbox.connect(nid, addr);
@@ -2563,7 +2489,7 @@ where
Ok(seeds) => {
if let Some(connected) = NonEmpty::from_vec(seeds.connected().collect()) {
for seed in connected {
- self.fetch(rid, seed.nid, FETCH_TIMEOUT, None);
+ self.fetch(rid, seed.nid, vec![], FETCH_TIMEOUT, None);
}
} else {
// TODO: We should make sure that this fetch is retried later, either
@@ -2717,7 +2643,7 @@ pub trait ServiceState {
/// Get the existing sessions.
fn sessions(&self) -> &Sessions;
/// Get fetch state.
- fn fetching(&self) -> &HashMap<RepoId, FetchState>;
+ fn fetching(&self) -> &FetcherState;
/// Get outbox.
fn outbox(&self) -> &Outbox;
/// Get rate limiter.
@@ -2750,8 +2676,8 @@ where
&self.sessions
}
- fn fetching(&self) -> &HashMap<RepoId, FetchState> {
- &self.fetching
+ fn fetching(&self) -> &FetcherState {
+ self.fetcher.state()
}
fn outbox(&self) -> &Outbox {
diff --git a/crates/radicle-protocol/src/service/io.rs b/crates/radicle-protocol/src/service/io.rs
index 1862c11d4..92b7b0e74 100644
--- a/crates/radicle-protocol/src/service/io.rs
+++ b/crates/radicle-protocol/src/service/io.rs
@@ -138,8 +138,6 @@ impl Outbox {
timeout: time::Duration,
reader_limit: FetchPackSizeLimit,
) {
- peer.fetching(rid);
-
let refs_at = (!refs_at.is_empty()).then_some(refs_at);
if let Some(refs_at) = &refs_at {
diff --git a/crates/radicle-protocol/src/service/session.rs b/crates/radicle-protocol/src/service/session.rs
index 665576563..121805cae 100644
--- a/crates/radicle-protocol/src/service/session.rs
+++ b/crates/radicle-protocol/src/service/session.rs
@@ -1,8 +1,7 @@
-use std::collections::{HashSet, VecDeque};
+use std::collections::VecDeque;
use std::{fmt, time};
use crossbeam_channel as chan;
-use radicle::node::config::Limits;
use radicle::node::{FetchResult, Severity};
use radicle::node::{Link, Timestamp};
pub use radicle::node::{PingState, State};
@@ -111,8 +110,6 @@ pub struct Session {
pub subscribe: Option<message::Subscribe>,
/// Last time a message was received from the peer.
pub last_active: LocalTime,
- /// Fetch queue.
- pub queue: VecDeque<QueuedFetch>,
/// Connection attempts. For persistent peers, Tracks
/// how many times we've attempted to connect. We reset this to zero
@@ -120,8 +117,6 @@ pub struct Session {
attempts: usize,
/// Source of entropy.
rng: Rng,
- /// Protocol limits.
- limits: Limits,
}
impl fmt::Display for Session {
@@ -159,7 +154,7 @@ impl From<&Session> for radicle::node::Session {
}
impl Session {
- pub fn outbound(id: NodeId, addr: Address, persistent: bool, rng: Rng, limits: Limits) -> Self {
+ pub fn outbound(id: NodeId, addr: Address, persistent: bool, rng: Rng) -> Self {
Self {
id,
addr,
@@ -168,28 +163,18 @@ impl Session {
subscribe: None,
persistent,
last_active: LocalTime::default(),
- queue: VecDeque::with_capacity(MAX_FETCH_QUEUE_SIZE),
attempts: 1,
rng,
- limits,
}
}
- pub fn inbound(
- id: NodeId,
- addr: Address,
- persistent: bool,
- rng: Rng,
- time: LocalTime,
- limits: Limits,
- ) -> Self {
+ pub fn inbound(id: NodeId, addr: Address, persistent: bool, rng: Rng, time: LocalTime) -> Self {
Self {
id,
addr,
state: State::Connected {
since: time,
ping: PingState::default(),
- fetching: HashSet::default(),
latencies: VecDeque::default(),
stable: false,
},
@@ -197,10 +182,8 @@ impl Session {
subscribe: None,
persistent,
last_active: time,
- queue: VecDeque::new(),
attempts: 0,
rng,
- limits,
}
}
@@ -224,41 +207,6 @@ impl Session {
matches!(self.state, State::Initial)
}
- pub fn is_at_capacity(&self) -> bool {
- if let State::Connected { fetching, .. } = &self.state {
- if fetching.len() >= self.limits.fetch_concurrency.into() {
- return true;
- }
- }
- false
- }
-
- pub fn is_fetching(&self, rid: &RepoId) -> bool {
- if let State::Connected { fetching, .. } = &self.state {
- return fetching.contains(rid);
- }
- false
- }
-
- /// Queue a fetch. Returns `true` if it was added to the queue, and `false` if
- /// it already was present in the queue.
- pub fn queue_fetch(&mut self, fetch: QueuedFetch) -> Result<(), QueueError> {
- assert_eq!(fetch.from, self.id);
-
- if self.queue.len() >= MAX_FETCH_QUEUE_SIZE {
- return Err(QueueError::CapacityReached(fetch));
- } else if self.queue.contains(&fetch) {
- return Err(QueueError::Duplicate(fetch));
- }
- self.queue.push_back(fetch);
-
- Ok(())
- }
-
- pub fn dequeue_fetch(&mut self) -> Option<QueuedFetch> {
- self.queue.pop_front()
- }
-
pub fn attempts(&self) -> usize {
self.attempts
}
@@ -279,33 +227,6 @@ impl Session {
}
}
- /// Mark this session as fetching the given RID.
- ///
- /// # Panics
- ///
- /// If it is already fetching that RID, or the session is disconnected.
- pub fn fetching(&mut self, rid: RepoId) {
- if let State::Connected { fetching, .. } = &mut self.state {
- assert!(
- fetching.insert(rid),
- "Session must not already be fetching {rid}"
- );
- } else {
- panic!(
- "Attempting to fetch {rid} from disconnected session {}",
- self.id
- );
- }
- }
-
- pub fn fetched(&mut self, rid: RepoId) {
- if let State::Connected { fetching, .. } = &mut self.state {
- if !fetching.remove(&rid) {
- log::debug!(target: "service", "Fetched unknown repository {rid}");
- }
- }
- }
-
pub fn to_attempted(&mut self) {
assert!(
self.is_initial(),
@@ -324,7 +245,6 @@ impl Session {
self.state = State::Connected {
since,
ping: PingState::default(),
- fetching: HashSet::default(),
latencies: VecDeque::default(),
stable: false,
};
diff --git a/crates/radicle-schemars/src/main.rs b/crates/radicle-schemars/src/main.rs
index cb0e4c87d..511e3c52a 100644
--- a/crates/radicle-schemars/src/main.rs
+++ b/crates/radicle-schemars/src/main.rs
@@ -87,7 +87,7 @@ fn print_schema() -> io::Result<()> {
#[schemars(with = "radicle::schemars_ext::crypto::PublicKey")]
radicle::node::NodeId,
),
- Config(radicle::node::Config),
+ Config(Box<radicle::node::Config>),
ListenAddrs(ListenAddrs),
ConnectResult(radicle::node::ConnectResult),
Success(radicle::node::Success),
diff --git a/crates/radicle/CHANGELOG.md b/crates/radicle/CHANGELOG.md
index cf7590e6f..856b1c9ed 100644
--- a/crates/radicle/CHANGELOG.md
+++ b/crates/radicle/CHANGELOG.md
@@ -46,6 +46,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Removed
+- The data returned by `Seeds` contains `state`, whose `Connected` variant
+  previously included a `fetching` field listing that node's ongoing fetches.
+  `Connected` no longer contains that field.
+
### Security
## 0.20.0
diff --git a/crates/radicle/src/node.rs b/crates/radicle/src/node.rs
index 8f761fb5b..a4291f6d3 100644
--- a/crates/radicle/src/node.rs
+++ b/crates/radicle/src/node.rs
@@ -107,8 +107,6 @@ pub enum State {
/// Ping state.
#[serde(skip)]
ping: PingState,
- /// Ongoing fetches.
- fetching: HashSet<RepoId>,
/// Measured latencies for this peer.
#[serde(skip)]
latencies: VecDeque<LocalDuration>,
@@ -696,7 +694,7 @@ impl From<Vec<Seed>> for Seeds {
}
}
-#[derive(Clone, Debug, Serialize, Deserialize)]
+#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
#[serde(tag = "status", rename_all = "camelCase")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub enum FetchResult {
@@ -1486,7 +1484,6 @@ mod test {
&serde_json::to_string(&CommandResult::Okay(State::Connected {
since: LocalTime::now(),
ping: Default::default(),
- fetching: Default::default(),
latencies: VecDeque::default(),
stable: false,
}))
diff --git a/crates/radicle/src/node/command.rs b/crates/radicle/src/node/command.rs
index 7a32c75c6..1999626bc 100644
--- a/crates/radicle/src/node/command.rs
+++ b/crates/radicle/src/node/command.rs
@@ -313,7 +313,6 @@ mod test {
&serde_json::to_string(&CommandResult::Okay(State::Connected {
since: LocalTime::now(),
ping: Default::default(),
- fetching: Default::default(),
latencies: VecDeque::default(),
stable: false,
}))
@@ -329,7 +328,7 @@ mod test {
);
assert_matches!(
json::from_str::<CommandResult<Seeds>>(
- r#"[{"nid":"z6MksmpU5b1dS7oaqF2bHXhQi1DWy2hB7Mh9CuN7y1DN6QSz","addrs":[{"addr":"seed.radicle.example.com:8776","source":"peer","lastSuccess":1699983994234,"lastAttempt":1699983994000,"banned":false}],"state":{"connected":{"since":1699983994,"fetching":[]}}}]"#
+ r#"[{"nid":"z6MksmpU5b1dS7oaqF2bHXhQi1DWy2hB7Mh9CuN7y1DN6QSz","addrs":[{"addr":"seed.radicle.example.com:8776","source":"peer","lastSuccess":1699983994234,"lastAttempt":1699983994000,"banned":false}],"state":{"connected":{"since":1699983994}}}]"#
),
Ok(CommandResult::Okay(_))
);
commit 2663fe96ae621d79af42bc870c417706ccfcd63d
Author: Fintan Halpenny <fintan.halpenny@gmail.com>
Date: Fri Aug 8 13:13:04 2025 +0100
protocol: Introduce `FetcherService`
The `FetcherService` wraps the `FetcherState` while keeping track of
subscribers for a given fetch.
Typically, a subscriber is a channel to which the fetch result should
be sent.
These subscribers are coalesced by maintaining a map from `(rid, node)`
pairs to the waiting subscribers.
This continues the sans-IO approach: the subscriber type is not
specified, but rather kept as a type parameter, to be specified by the
caller.
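The coalescing described above can be sketched as a standalone state machine. This is a minimal illustration of the pattern, not the actual API: the real service keys on `(RepoId, NodeId)` and keeps the subscriber type generic, whereas here the keys are plain strings and subscribers are plain integer ids.

```rust
use std::collections::HashMap;

// Hypothetical simplified key; the real service keys on `(RepoId, NodeId)`.
type Key = (&'static str, &'static str);

// Subscribers are a type parameter in the real code; here, plain ids.
#[derive(Default)]
struct Coalescer {
    subscribers: HashMap<Key, Vec<u32>>,
}

impl Coalescer {
    // Coalesce: a second request for the same (rid, node) joins the waiters.
    fn subscribe(&mut self, key: Key, sub: u32) {
        self.subscribers.entry(key).or_default().push(sub);
    }

    // On completion, all waiters for that fetch are drained at once.
    fn fetched(&mut self, key: Key) -> Vec<u32> {
        self.subscribers.remove(&key).unwrap_or_default()
    }
}

fn main() {
    let mut s = Coalescer::default();
    s.subscribe(("rad:a", "alice"), 1);
    s.subscribe(("rad:a", "alice"), 2); // coalesced with the first
    assert_eq!(s.fetched(("rad:a", "alice")), vec![1, 2]);
    assert!(s.fetched(("rad:a", "alice")).is_empty());
}
```

No I/O happens inside the map operations; the caller decides how to notify the returned subscribers.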
diff --git a/crates/radicle-protocol/src/fetcher.rs b/crates/radicle-protocol/src/fetcher.rs
index 31ba9257f..00cbe20b8 100644
--- a/crates/radicle-protocol/src/fetcher.rs
+++ b/crates/radicle-protocol/src/fetcher.rs
@@ -1,5 +1,12 @@
+pub mod service;
+pub use service::FetcherService;
+
pub mod state;
pub use state::{ActiveFetch, Config, FetcherState, MaxQueueSize, Queue, QueueIter, QueuedFetch};
#[cfg(test)]
mod test;
+
+// TODO(finto): `Service::fetch_refs_at` and the use of `refs_status_of` is a
+// layer above the `Fetcher` where it would perform I/O, mocked out by a trait,
+// to check if there are wants and add a fetch to the Fetcher.
diff --git a/crates/radicle-protocol/src/fetcher/service.rs b/crates/radicle-protocol/src/fetcher/service.rs
new file mode 100644
index 000000000..6d211ad95
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/service.rs
@@ -0,0 +1,142 @@
+use std::collections::HashMap;
+
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event, Config, FetcherState, QueuedFetch};
+
+/// Service layer that wraps [`FetcherState`] and manages subscriber coalescing.
+///
+/// When multiple callers request the same fetch, their subscribers are collected
+/// and all notified when the fetch completes.
+///
+/// # Type Parameter
+/// - `S`: The subscriber type (e.g., `chan::Sender<FetchResult>`).
+#[derive(Debug)]
+pub struct FetcherService<S> {
+ state: FetcherState,
+ subscribers: HashMap<FetchKey, Vec<S>>,
+}
+
+impl<S> FetcherService<S> {
+ /// Initialize the [`FetcherService`] with the given [`Config`].
+ pub fn new(config: Config) -> Self {
+ Self {
+ state: FetcherState::new(config),
+ subscribers: HashMap::new(),
+ }
+ }
+
+ /// Provide a reference handle to the [`FetcherState`].
+ pub fn state(&self) -> &FetcherState {
+ &self.state
+ }
+}
+
+/// Key for pending subscribers.
+#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
+struct FetchKey {
+ rid: RepoId,
+ node: NodeId,
+}
+
+impl FetchKey {
+ fn new(rid: RepoId, node: NodeId) -> Self {
+ Self { rid, node }
+ }
+}
+
+/// The result of calling [`FetcherService::fetch`].
+#[must_use]
+#[derive(Debug)]
+pub struct FetchInitiated<S> {
+ /// The underlying result from calling [`FetcherState::fetch`].
+ pub event: event::Fetch,
+ /// Subscriber returned if fetch was rejected (queue at capacity).
+ pub rejected: Option<S>,
+}
+
+/// The result of calling [`FetcherService::fetched`].
+#[must_use]
+#[derive(Debug)]
+pub struct FetchCompleted<S> {
+ /// The underlying result from calling [`FetcherState::fetched`].
+ pub event: event::Fetched,
+ /// All the subscribers that were interested in the given fetch.
+ pub subscribers: Vec<S>,
+}
+
+/// The result of calling [`FetcherService::cancel`].
+#[must_use]
+#[derive(Debug)]
+pub struct FetchesCancelled<S> {
+ /// The underlying result from calling [`FetcherState::cancel`].
+ pub event: event::Cancel,
+ /// Orphaned subscribers paired with their [`RepoId`].
+ pub orphaned: Vec<(RepoId, S)>,
+}
+
+impl<S> FetcherService<S> {
+ /// Initiate a fetch, optionally registering a subscriber.
+ ///
+ /// Subscribers are coalesced: if the same `(rid, node)` is already being
+ /// fetched or queued, the subscriber joins the existing waiters.
+ ///
+ /// If the fetch could neither be initiated nor queued, the subscriber is
+ /// returned so that the caller can notify it of the rejection.
+ ///
+ /// See [`FetcherState::fetch`].
+ pub fn fetch(&mut self, cmd: command::Fetch, subscriber: Option<S>) -> FetchInitiated<S> {
+ let key = FetchKey::new(cmd.rid, cmd.from);
+ let event = self.state.fetch(cmd);
+
+ let rejected = match &event {
+ event::Fetch::QueueAtCapacity { .. } => subscriber,
+ _ => {
+ if let Some(r) = subscriber {
+ self.subscribers.entry(key).or_default().push(r);
+ }
+ None
+ }
+ };
+
+ FetchInitiated { event, rejected }
+ }
+
+ /// Mark a fetch as completed and retrieve waiting subscribers.
+ ///
+ /// See [`FetcherState::fetched`].
+ pub fn fetched(&mut self, cmd: command::Fetched) -> FetchCompleted<S> {
+ let key = FetchKey::new(cmd.rid, cmd.from);
+ let event = self.state.fetched(cmd);
+ let subscribers = self.subscribers.remove(&key).unwrap_or_default();
+ FetchCompleted { event, subscribers }
+ }
+
+ /// Cancel all fetches for a disconnected peer, returning any orphaned
+ /// subscribers.
+ ///
+ /// See [`FetcherState::cancel`].
+ pub fn cancel(&mut self, cmd: command::Cancel) -> FetchesCancelled<S> {
+ let from = cmd.from;
+ let event = self.state.cancel(cmd);
+
+ let mut orphaned = Vec::new();
+ self.subscribers.retain(|key, subscribers| {
+ if key.node == from {
+ orphaned.extend(subscribers.drain(..).map(|r| (key.rid, r)));
+ false
+ } else {
+ true
+ }
+ });
+
+ FetchesCancelled { event, orphaned }
+ }
+
+ /// Dequeue the next fetch for a node.
+ ///
+ /// See [`FetcherState::dequeue`].
+ pub fn dequeue(&mut self, from: &NodeId) -> Option<QueuedFetch> {
+ self.state.dequeue(from)
+ }
+}
commit d57e6ac0bc7af8653d95839da988b9bf20df0586
Author: Fintan Halpenny <fintan.halpenny@gmail.com>
Date: Fri Aug 8 13:13:04 2025 +0100
protocol: Introduce `FetcherState`
This patch introduces the `FetcherState`, which encapsulates the logic
and state for keeping track of fetches.
It uses a sans-IO approach, where fetches transition based on the
current state and the provided input.
Callers can then decide to perform I/O based on the returned event.
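The sans-IO shape described above can be sketched in miniature: a pure state transition that returns an event, with the caller performing any I/O afterwards. This is an illustrative simplification under assumed types, not the patch's actual `FetcherState` (which also tracks `refs_at`, timeouts, and per-node concurrency limits).

```rust
use std::collections::{BTreeMap, VecDeque};

// Hypothetical simplified ids; the real code uses `RepoId` and `NodeId`.
type Repo = &'static str;
type Node = &'static str;

#[derive(Debug, PartialEq)]
enum FetchEvent {
    Started { rid: Repo, from: Node },
    Queued { rid: Repo, from: Node },
}

#[derive(Default)]
struct Fetcher {
    active: BTreeMap<Repo, Node>,           // one active fetch per repository
    queues: BTreeMap<Node, VecDeque<Repo>>, // per-node queues
}

impl Fetcher {
    // Pure state transition: no I/O here, the caller acts on the event.
    fn fetch(&mut self, rid: Repo, from: Node) -> FetchEvent {
        if self.active.contains_key(rid) {
            // Repository already being fetched: queue for a later attempt.
            self.queues.entry(from).or_default().push_back(rid);
            FetchEvent::Queued { rid, from }
        } else {
            self.active.insert(rid, from);
            FetchEvent::Started { rid, from }
        }
    }

    // Completion removes the active entry, regardless of success/failure.
    fn fetched(&mut self, rid: Repo) -> Option<Node> {
        self.active.remove(rid)
    }
}

fn main() {
    let mut f = Fetcher::default();
    assert_eq!(
        f.fetch("rad:a", "alice"),
        FetchEvent::Started { rid: "rad:a", from: "alice" }
    );
    // A second request for the same repository is queued, not started.
    assert_eq!(
        f.fetch("rad:a", "bob"),
        FetchEvent::Queued { rid: "rad:a", from: "bob" }
    );
    assert_eq!(f.fetched("rad:a"), Some("alice"));
}
```

Because the transitions are pure, they can be unit tested without any network or channel setup, which is the motivation stated in the patch description.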
diff --git a/Cargo.lock b/Cargo.lock
index ebe36335f..1adcab8e1 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -3098,6 +3098,7 @@ dependencies = [
"qcheck",
"qcheck-macros",
"radicle",
+ "radicle-core",
"radicle-crypto",
"radicle-fetch",
"radicle-localtime",
diff --git a/crates/radicle-protocol/Cargo.toml b/crates/radicle-protocol/Cargo.toml
index 40cc183cb..561a7b775 100644
--- a/crates/radicle-protocol/Cargo.toml
+++ b/crates/radicle-protocol/Cargo.toml
@@ -21,6 +21,7 @@ log = { workspace = true, features = ["std"] }
nonempty = { workspace = true, features = ["serialize"] }
qcheck = { workspace = true, optional = true }
radicle = { workspace = true, features = ["logger"] }
+radicle-core = { workspace = true }
radicle-fetch = { workspace = true }
radicle-localtime = { workspace = true }
sqlite = { workspace = true, features = ["bundled"] }
diff --git a/crates/radicle-protocol/src/fetcher.rs b/crates/radicle-protocol/src/fetcher.rs
new file mode 100644
index 000000000..31ba9257f
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher.rs
@@ -0,0 +1,5 @@
+pub mod state;
+pub use state::{ActiveFetch, Config, FetcherState, MaxQueueSize, Queue, QueueIter, QueuedFetch};
+
+#[cfg(test)]
+mod test;
diff --git a/crates/radicle-protocol/src/fetcher/state.rs b/crates/radicle-protocol/src/fetcher/state.rs
new file mode 100644
index 000000000..6f64ff6db
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/state.rs
@@ -0,0 +1,454 @@
+//! Logical state for Git fetches happening in the node.
+//!
+//! See [`FetcherState`] for more information.
+//!
+//! See [`command`] for inputs to [`FetcherState`].
+//! See [`event`] for outputs from [`FetcherState`].
+
+pub mod command;
+pub mod event;
+
+pub use command::Command;
+pub use event::Event;
+
+use std::collections::{BTreeMap, VecDeque};
+use std::num::NonZeroUsize;
+use std::time;
+
+use radicle::storage::refs::RefsAt;
+use radicle_core::{NodeId, RepoId};
+
+/// Default for the maximum items per fetch queue.
+pub const MAX_FETCH_QUEUE_SIZE: usize = 128;
+/// Default for maximum concurrency per node.
+pub const MAX_CONCURRENCY: NonZeroUsize = NonZeroUsize::MIN;
+
+/// Logical state for Git fetches happening in the node.
+///
+/// A fetch can either be:
+/// - [`ActiveFetch`]: meaning it is currently being fetched from another node on the network
+/// - [`QueuedFetch`]: meaning it is expected to be fetched from a given node, but the
+/// repository is already being fetched, or the node is at capacity.
+///
+/// For any given repository, identified by its [`RepoId`], there can only be
+/// one fetch occurring for it at a given time. This prevents any concurrent
+/// fetches from clobbering overlapping references.
+///
+/// If the repository is actively being fetched, then that fetch will be queued
+/// for a later attempt.
+///
+/// For any given node, there is a configurable capacity so that only `N` number
+/// of fetches can happen with it concurrently. This does not guarantee that the
+/// node will actually allow this node to fetch from it – since it will maintain
+/// its own capacity for connections and load.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct FetcherState {
+ /// The active fetches that are occurring, ensuring only one fetch per repository.
+ active: BTreeMap<RepoId, ActiveFetch>,
+ /// The queued fetches, waiting to happen, where each node maintains its own queue.
+ queues: BTreeMap<NodeId, Queue>,
+ /// Configuration for maintaining the fetch state.
+ config: Config,
+}
+
+impl Default for FetcherState {
+ fn default() -> Self {
+ Self::new(Config::default())
+ }
+}
+
+impl FetcherState {
+ /// Initialize the [`FetcherState`] with the given [`Config`].
+ pub fn new(config: Config) -> Self {
+ Self {
+ active: BTreeMap::new(),
+ queues: BTreeMap::new(),
+ config,
+ }
+ }
+}
+
+impl FetcherState {
+ /// Process the handling of a [`Command`], delegating to its corresponding
+ /// method, and returning the corresponding [`Event`].
+ ///
+ /// This method is useful if the [`FetcherState`] is used in batch
+ /// processing and does not need to be explicit about the underlying method.
+ pub fn handle(&mut self, command: Command) -> Event {
+ match command {
+ Command::Fetch(fetch) => self.fetch(fetch).into(),
+ Command::Fetched(fetched) => self.fetched(fetched).into(),
+ Command::Cancel(cancel) => self.cancel(cancel).into(),
+ }
+ }
+
+ /// Process a [`Fetch`] command, which transitions the given fetch to
+ /// active, if possible.
+ ///
+ /// The fetch will only transition to being active if:
+ ///
+ /// - No fetch is already happening for that repository; otherwise it is queued.
+ /// - The node to be fetched from is not at capacity; otherwise it is also queued.
+ ///
+ /// [`Fetch`]: command::Fetch
+ pub fn fetch(
+ &mut self,
+ command::Fetch {
+ from,
+ rid,
+ refs_at,
+ timeout,
+ }: command::Fetch,
+ ) -> event::Fetch {
+ if let Some(active) = self.active.get(&rid) {
+ if active.refs_at == refs_at && active.from == from {
+ return event::Fetch::AlreadyFetching { rid, from };
+ } else {
+ return self.enqueue(rid, from, refs_at, timeout);
+ }
+ }
+
+ if self.is_at_node_capacity(&from) {
+ self.enqueue(rid, from, refs_at, timeout)
+ } else {
+ self.active.insert(
+ rid,
+ ActiveFetch {
+ from,
+ refs_at: refs_at.clone(),
+ },
+ );
+ event::Fetch::Started {
+ rid,
+ from,
+ refs_at,
+ timeout,
+ }
+ }
+ }
+
+ /// Process a [`Fetched`] command, which removes the given fetch from the set of active fetches.
+ /// Note that this is agnostic of whether the fetch succeeded or failed.
+ ///
+ /// The caller will be notified if the completed fetch did not exist in the active set.
+ ///
+ /// [`Fetched`]: command::Fetched
+ pub fn fetched(&mut self, command::Fetched { from, rid }: command::Fetched) -> event::Fetched {
+ match self.active.remove(&rid) {
+ None => event::Fetched::NotFound { from, rid },
+ Some(ActiveFetch { from, refs_at }) => event::Fetched::Completed { from, rid, refs_at },
+ }
+ }
+
+ /// Attempt to dequeue a [`QueuedFetch`] for the given node.
+ ///
+ /// This will only dequeue the fetch if it is not active, and the given node
+ /// is not at capacity.
+ pub fn dequeue(&mut self, from: &NodeId) -> Option<QueuedFetch> {
+ let is_at_capacity = self.is_at_node_capacity(from);
+ let queue = self.queues.get_mut(from)?;
+ let active = &self.active;
+ queue.try_dequeue(|QueuedFetch { rid, .. }| !is_at_capacity && !active.contains_key(rid))
+ }
+
+ /// Process a [`Cancel`] command, which cancels any active and/or queued
+ /// fetches for that given node.
+ ///
+ /// [`Cancel`]: command::Cancel
+ pub fn cancel(&mut self, command::Cancel { from }: command::Cancel) -> event::Cancel {
+ let cancelled: Vec<_> = self
+ .active
+ .iter()
+ .filter_map(|(rid, f)| (f.from == from).then_some(*rid))
+ .collect();
+ let ongoing: BTreeMap<_, _> = cancelled
+ .iter()
+ .filter_map(|rid| self.active.remove(rid).map(|f| (*rid, f)))
+ .collect();
+ let ongoing = (!ongoing.is_empty()).then_some(ongoing);
+ let queued = self.queues.remove(&from).filter(|queue| !queue.is_empty());
+
+ match (ongoing, queued) {
+ (None, None) => event::Cancel::Unexpected { from },
+ (ongoing, queued) => event::Cancel::Canceled {
+ from,
+ active: ongoing.unwrap_or_default(),
+ queued: queued.map(|q| q.queue).unwrap_or_default(),
+ },
+ }
+ }
+
+ fn enqueue(
+ &mut self,
+ rid: RepoId,
+ from: NodeId,
+ refs_at: Vec<RefsAt>,
+ timeout: time::Duration,
+ ) -> event::Fetch {
+ let queue = self
+ .queues
+ .entry(from)
+ .or_insert(Queue::new(self.config.maximum_queue_size));
+ match queue.enqueue(QueuedFetch {
+ rid,
+ from,
+ refs_at,
+ timeout,
+ }) {
+ Enqueue::CapacityReached(QueuedFetch {
+ rid,
+ from,
+ refs_at,
+ timeout,
+ }) => event::Fetch::QueueAtCapacity {
+ rid,
+ from,
+ refs_at,
+ timeout,
+ capacity: queue.len(),
+ },
+ Enqueue::Queued => event::Fetch::Queued { rid, from },
+ Enqueue::Merged => event::Fetch::Queued { rid, from },
+ }
+ }
+}
+
+impl FetcherState {
+ /// Get the set of queued fetches.
+ pub fn queued_fetches(&self) -> &BTreeMap<NodeId, Queue> {
+ &self.queues
+ }
+
+ /// Get the set of active fetches.
+ pub fn active_fetches(&self) -> &BTreeMap<RepoId, ActiveFetch> {
+ &self.active
+ }
+
+ /// Get the [`ActiveFetch`] for the provided [`RepoId`], returning `None` if
+ /// it does not exist.
+ pub fn get_active_fetch(&self, rid: &RepoId) -> Option<&ActiveFetch> {
+ self.active.get(rid)
+ }
+
+ /// Check if the number of fetches exceeds the maximum number of concurrent
+ /// fetches for a given [`NodeId`].
+ ///
+ /// Returns `true` if the fetcher is fetching the maximum number of
+ /// repositories, for that node.
+ fn is_at_node_capacity(&self, node: &NodeId) -> bool {
+ let count = self.active.values().filter(|f| &f.from == node).count();
+ count >= self.config.maximum_concurrency.into()
+ }
+}
+
+/// Configuration for the [`FetcherState`].
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
+pub struct Config {
+ /// Maximum number of concurrent fetches per peer connection.
+ maximum_concurrency: NonZeroUsize,
+ /// Maximum fetching queue size for a single node.
+ maximum_queue_size: MaxQueueSize,
+}
+
+impl Config {
+ pub fn new() -> Self {
+ Self::default()
+ }
+
+ /// Maximum fetching queue size for a single node.
+ pub fn with_max_capacity(mut self, capacity: MaxQueueSize) -> Self {
+ self.maximum_queue_size = capacity;
+ self
+ }
+
+ /// Maximum number of concurrent fetches per peer connection.
+ pub fn with_max_concurrency(mut self, concurrency: NonZeroUsize) -> Self {
+ self.maximum_concurrency = concurrency;
+ self
+ }
+}
+
+impl Default for Config {
+ fn default() -> Self {
+ Self {
+ maximum_concurrency: MAX_CONCURRENCY,
+ maximum_queue_size: MaxQueueSize::default(),
+ }
+ }
+}
+
+/// An active fetch represents a repository being fetched by a particular node.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct ActiveFetch {
+ pub(super) from: NodeId,
+ pub(super) refs_at: Vec<RefsAt>,
+}
+
+impl ActiveFetch {
+ /// The node from which the repository is being fetched.
+ pub fn from(&self) -> &NodeId {
+ &self.from
+ }
+
+ /// The set of references that fetch is being performed for.
+ pub fn refs_at(&self) -> &[RefsAt] {
+ &self.refs_at
+ }
+}
+
+/// A fetch that is waiting to be processed, in the fetch queue.
+#[derive(Debug, Clone, PartialEq, Eq, Hash)]
+pub struct QueuedFetch {
+ /// The repository that will be fetched.
+ pub rid: RepoId,
+ // TODO(finto): this might be redundant, since queues are per node
+ /// The peer from which the repository will be fetched.
+ pub from: NodeId,
+ /// The references that the fetch is being performed for.
+ pub refs_at: Vec<RefsAt>,
+ /// The timeout given for the fetch request.
+ pub timeout: time::Duration,
+}
+
+/// A queue for keeping track of fetches.
+///
+/// It ensures that the queue contains unique items for fetching, and does not
+/// exceed the provided maximum capacity.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct Queue {
+ queue: VecDeque<QueuedFetch>,
+ max_queue_size: MaxQueueSize,
+}
+
+/// The maximum number of fetches that can be queued for a single node.
+#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
+pub struct MaxQueueSize(usize);
+
+impl MaxQueueSize {
+ /// Minimum queue size is `1`.
+ pub const MIN: Self = MaxQueueSize(1);
+
+ /// Create a queue size, which must be larger than `0`.
+ pub fn new(size: NonZeroUsize) -> Self {
+ Self(size.into())
+ }
+
+ pub fn as_usize(&self) -> usize {
+ self.0
+ }
+
+ /// Checks if the provided `n` reaches or exceeds the maximum queue size.
+ fn is_exceeded_by(&self, n: usize) -> bool {
+ n >= self.0
+ }
+}
+
+impl Default for MaxQueueSize {
+ fn default() -> Self {
+ Self(MAX_FETCH_QUEUE_SIZE)
+ }
+}
+
+/// The result of [`Queue::enqueue`].
+#[must_use]
+#[derive(Debug, PartialEq, Eq)]
+pub(super) enum Enqueue {
+ /// The capacity of the queue has been exceeded, and the [`QueuedFetch`] is
+ /// returned.
+ CapacityReached(QueuedFetch),
+ /// The [`QueuedFetch`] was successfully queued.
+ Queued,
+ Merged,
+}
+
+impl Queue {
+ /// Create the [`Queue`] with the given [`MaxQueueSize`].
+ pub(super) fn new(max_queue_size: MaxQueueSize) -> Self {
+ Self {
+ queue: VecDeque::with_capacity(max_queue_size.0),
+ max_queue_size,
+ }
+ }
+
+ /// The current number of items in the queue.
+ pub(super) fn len(&self) -> usize {
+ self.queue.len()
+ }
+
+ /// Returns `true` if the [`Queue`] is empty.
+ pub(super) fn is_empty(&self) -> bool {
+ self.queue.is_empty()
+ }
+
+ /// Enqueues a fetch onto the back of the queue. A fetch for a repository already
+ /// in the queue is merged into the existing entry; otherwise it only succeeds if the queue has not reached capacity.
+ pub(super) fn enqueue(&mut self, fetch: QueuedFetch) -> Enqueue {
+ if let Some(existing) = self.queue.iter_mut().find(|qf| qf.rid == fetch.rid) {
+ if existing.refs_at.is_empty() || fetch.refs_at.is_empty() {
+ // We fetch everything
+ existing.refs_at = vec![]
+ } else {
+ existing.refs_at.extend(fetch.refs_at);
+ }
+ // Take the longer timeout (more generous)
+ existing.timeout = existing.timeout.max(fetch.timeout);
+ return Enqueue::Merged;
+ }
+
+ if self.max_queue_size.is_exceeded_by(self.queue.len()) {
+ Enqueue::CapacityReached(fetch)
+ } else {
+ self.queue.push_back(fetch);
+ Enqueue::Queued
+ }
+ }
+
+ /// Try to dequeue the next [`QueuedFetch`], but only if the `predicate`
+ /// holds; otherwise it is pushed back to the front of the queue.
+ pub(super) fn try_dequeue<P>(&mut self, predicate: P) -> Option<QueuedFetch>
+ where
+ P: FnOnce(&QueuedFetch) -> bool,
+ {
+ let fetch = self.dequeue()?;
+ if predicate(&fetch) {
+ Some(fetch)
+ } else {
+ self.queue.push_front(fetch);
+ None
+ }
+ }
+
+ /// Dequeues a fetch from the front of the queue.
+ pub(super) fn dequeue(&mut self) -> Option<QueuedFetch> {
+ self.queue.pop_front()
+ }
+
+ /// Return an iterator over the queued fetches.
+ pub fn iter<'a>(&'a self) -> QueueIter<'a> {
+ QueueIter {
+ inner: self.queue.iter(),
+ }
+ }
+}
+
+/// Iterator over [`QueuedFetch`] items.
+pub struct QueueIter<'a> {
+ inner: std::collections::vec_deque::Iter<'a, QueuedFetch>,
+}
+
+impl<'a> Iterator for QueueIter<'a> {
+ type Item = &'a QueuedFetch;
+
+ fn next(&mut self) -> Option<Self::Item> {
+ self.inner.next()
+ }
+}
+
+impl<'a> IntoIterator for &'a Queue {
+ type Item = &'a QueuedFetch;
+ type IntoIter = QueueIter<'a>;
+
+ fn into_iter(self) -> Self::IntoIter {
+ self.iter()
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/state/command.rs b/crates/radicle-protocol/src/fetcher/state/command.rs
new file mode 100644
index 000000000..8fa8aeae8
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/state/command.rs
@@ -0,0 +1,80 @@
+use std::time;
+
+use radicle::storage::refs::RefsAt;
+use radicle_core::{NodeId, RepoId};
+
+/// Commands for transitioning the [`FetcherState`].
+///
+/// [`FetcherState`]: super::FetcherState
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum Command {
+ Fetch(Fetch),
+ Fetched(Fetched),
+ Cancel(Cancel),
+}
+
+impl From<Fetch> for Command {
+ fn from(v: Fetch) -> Self {
+ Self::Fetch(v)
+ }
+}
+
+impl From<Fetched> for Command {
+ fn from(v: Fetched) -> Self {
+ Self::Fetched(v)
+ }
+}
+
+impl From<Cancel> for Command {
+ fn from(v: Cancel) -> Self {
+ Self::Cancel(v)
+ }
+}
+
+impl Command {
+ pub fn fetch(from: NodeId, rid: RepoId, refs_at: Vec<RefsAt>, timeout: time::Duration) -> Self {
+ Self::from(Fetch {
+ from,
+ rid,
+ refs_at,
+ timeout,
+ })
+ }
+
+ pub fn fetched(from: NodeId, rid: RepoId) -> Self {
+ Self::from(Fetched { from, rid })
+ }
+
+ pub fn cancel(from: NodeId) -> Self {
+ Self::from(Cancel { from })
+ }
+}
+
+/// Request that a fetch be marked as active.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub struct Fetch {
+ /// The node from which the repository is being fetched.
+ pub from: NodeId,
+ /// The repository to fetch.
+ pub rid: RepoId,
+ /// The references to fetch.
+ pub refs_at: Vec<RefsAt>,
+ /// The timeout for the fetch process.
+ pub timeout: time::Duration,
+}
+
+/// Request that a fetch be marked as completed.
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
+pub struct Fetched {
+ /// The node from which the repository was fetched.
+ pub from: NodeId,
+ /// The repository that was fetched.
+ pub rid: RepoId,
+}
+
+/// Cancel any fetches for the given node.
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
+pub struct Cancel {
+ /// The node for which the fetches should be canceled.
+ pub from: NodeId,
+}
diff --git a/crates/radicle-protocol/src/fetcher/state/event.rs b/crates/radicle-protocol/src/fetcher/state/event.rs
new file mode 100644
index 000000000..0a2c60847
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/state/event.rs
@@ -0,0 +1,112 @@
+use std::collections::{BTreeMap, VecDeque};
+use std::time;
+
+use radicle::storage::refs::RefsAt;
+use radicle_core::{NodeId, RepoId};
+
+use super::{ActiveFetch, QueuedFetch};
+
+/// Event returned from [`FetcherState::handle`].
+///
+/// [`FetcherState::handle`]: super::FetcherState::handle
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum Event {
+ Fetch(Fetch),
+ Fetched(Fetched),
+ Cancel(Cancel),
+}
+
+impl From<Cancel> for Event {
+ fn from(v: Cancel) -> Self {
+ Self::Cancel(v)
+ }
+}
+
+impl From<Fetched> for Event {
+ fn from(v: Fetched) -> Self {
+ Self::Fetched(v)
+ }
+}
+
+impl From<Fetch> for Event {
+ fn from(v: Fetch) -> Self {
+ Self::Fetch(v)
+ }
+}
+
+/// Events that occur when a repository is requested to be fetched.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum Fetch {
+ /// The fetch can be started by the caller.
+ Started {
+ /// The repository to be fetched.
+ rid: RepoId,
+ /// The node to fetch from.
+ from: NodeId,
+ /// The references to be fetched.
+ refs_at: Vec<RefsAt>,
+ /// The timeout for the fetch process.
+ timeout: time::Duration,
+ },
+ /// The repository is already being fetched from the given node.
+ AlreadyFetching {
+ /// The repository being actively fetched.
+ rid: RepoId,
+ /// The node being fetched from.
+ from: NodeId,
+ },
+ /// The queue for the given node is at capacity, and can no longer accept
+ /// any more fetch requests.
+ QueueAtCapacity {
+ /// The rejected repository.
+ rid: RepoId,
+ /// The node whose queue is at capacity.
+ from: NodeId,
+ /// The references expected to be fetched.
+ refs_at: Vec<RefsAt>,
+ /// The timeout for the fetch process.
+ timeout: time::Duration,
+ /// The capacity of the queue.
+ capacity: usize,
+ },
+ /// The fetch was queued for later processing.
+ Queued {
+ /// The repository to be fetched.
+ rid: RepoId,
+ /// The node to fetch from.
+ from: NodeId,
+ },
+}
+
+/// Events that occur after a repository has been fetched.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum Fetched {
+ /// There was no ongoing fetch for the given [`NodeId`] and [`RepoId`].
+ NotFound { from: NodeId, rid: RepoId },
+ /// The active fetch was marked as completed and removed from the active
+ /// set.
+ Completed {
+ /// The node the repository was fetched from.
+ from: NodeId,
+ /// The repository that was fetched.
+ rid: RepoId,
+ /// The references that were fetched.
+ refs_at: Vec<RefsAt>,
+ },
+}
+
+/// Events that occur when fetches are canceled for a given node.
+#[derive(Clone, Debug, PartialEq, Eq)]
+pub enum Cancel {
+ /// There were no active or queued fetches for the given node.
+ Unexpected { from: NodeId },
+ /// There were active or queued fetches that were canceled for the given node.
+ Canceled {
+ /// The node whose fetches were canceled.
+ from: NodeId,
+ /// The active fetches that were canceled.
+ active: BTreeMap<RepoId, ActiveFetch>,
+ /// The queued fetches that were canceled.
+ queued: VecDeque<QueuedFetch>,
+ },
+}
diff --git a/crates/radicle-protocol/src/fetcher/test.rs b/crates/radicle-protocol/src/fetcher/test.rs
new file mode 100644
index 000000000..990936ad6
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test.rs
@@ -0,0 +1,2 @@
+mod queue;
+mod state;
diff --git a/crates/radicle-protocol/src/fetcher/test/arbitrary.rs b/crates/radicle-protocol/src/fetcher/test/arbitrary.rs
new file mode 100644
index 000000000..f41602036
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/arbitrary.rs
@@ -0,0 +1,52 @@
+use std::collections::HashSet;
+
+use radicle::{identity::DocAt, test::arbitrary};
+
+use crate::fetcher::{commands, Command, Fetched};
+
+impl qcheck::Arbitrary for Fetched {
+ fn arbitrary(g: &mut qcheck::Gen) -> Self {
+ Fetched {
+ updated: vec![],
+ namespaces: HashSet::arbitrary(g),
+ clone: bool::arbitrary(g),
+ doc: DocAt::arbitrary(g),
+ }
+ }
+}
+
+impl qcheck::Arbitrary for Command {
+ fn arbitrary(_g: &mut qcheck::Gen) -> Self {
+ todo!()
+ }
+}
+
+impl qcheck::Arbitrary for commands::Fetch {
+ fn arbitrary(_g: &mut qcheck::Gen) -> Self {
+ todo!()
+ }
+}
+
+impl qcheck::Arbitrary for commands::Fetched {
+ fn arbitrary(g: &mut qcheck::Gen) -> Self {
+ g.choose(&[
+ commands::Fetched::DequeueFetches,
+ commands::Fetched::Fetched {
+ from: arbitrary::gen(g.size()),
+ rid: arbitrary::gen(g.size()),
+ },
+ ])
+ .cloned()
+ .unwrap()
+ }
+}
+
+impl qcheck::Arbitrary for commands::Dequeue {
+ fn arbitrary(g: &mut qcheck::Gen) -> Self {
+ g.choose(&[commands::Dequeue::Nodes {
+ nodes: arbitrary::gen(5),
+ }])
+ .cloned()
+ .unwrap()
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue.rs b/crates/radicle-protocol/src/fetcher/test/queue.rs
new file mode 100644
index 000000000..eec89faee
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue.rs
@@ -0,0 +1,35 @@
+mod helpers;
+mod properties;
+mod unit;
+
+use std::num::NonZeroUsize;
+use std::time::Duration;
+
+use qcheck::Arbitrary;
+
+use radicle::storage::refs::RefsAt;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{MaxQueueSize, QueuedFetch};
+
+impl Arbitrary for QueuedFetch {
+ fn arbitrary(g: &mut qcheck::Gen) -> Self {
+ // Limit refs_at size to avoid slow shrinking
+ let refs_at_len = usize::arbitrary(g) % 4;
+ let refs_at: Vec<RefsAt> = (0..refs_at_len).map(|_| RefsAt::arbitrary(g)).collect();
+
+ QueuedFetch {
+ rid: RepoId::arbitrary(g),
+ from: NodeId::arbitrary(g),
+ refs_at,
+ timeout: Duration::from_secs(u64::arbitrary(g) % 3600),
+ }
+ }
+}
+
+impl Arbitrary for MaxQueueSize {
+ fn arbitrary(g: &mut qcheck::Gen) -> Self {
+ let size = NonZeroUsize::MIN.saturating_add(usize::arbitrary(g) % 255);
+ MaxQueueSize::new(size)
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/helpers.rs b/crates/radicle-protocol/src/fetcher/test/queue/helpers.rs
new file mode 100644
index 000000000..701bc320d
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/helpers.rs
@@ -0,0 +1,25 @@
+use std::{num::NonZeroUsize, time::Duration};
+
+use radicle::test::arbitrary;
+
+use crate::fetcher::{MaxQueueSize, Queue, QueuedFetch};
+
+pub fn create_queue(capacity: usize) -> Queue {
+ Queue::new(MaxQueueSize::new(
+ NonZeroUsize::new(capacity).expect("capacity must be non-zero"),
+ ))
+}
+
+pub fn create_fetch() -> QueuedFetch {
+ QueuedFetch {
+ rid: arbitrary::gen(1),
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ }
+}
+
+/// Generate a vector of `QueuedFetch` items that are unique by `rid`.
+pub fn unique_fetches(count: usize) -> Vec<QueuedFetch> {
+ (0..count).map(|_| create_fetch()).collect()
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/properties.rs b/crates/radicle-protocol/src/fetcher/test/queue/properties.rs
new file mode 100644
index 000000000..c0bd67ed6
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/properties.rs
@@ -0,0 +1,5 @@
+mod capacity;
+mod dequeue;
+mod equality;
+mod fifo;
+mod merge;
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/properties/capacity.rs b/crates/radicle-protocol/src/fetcher/test/queue/properties/capacity.rs
new file mode 100644
index 000000000..4ff9b1387
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/properties/capacity.rs
@@ -0,0 +1,74 @@
+use qcheck_macros::quickcheck;
+
+use crate::fetcher::test::queue::helpers::*;
+use crate::fetcher::{state::Enqueue, MaxQueueSize};
+use crate::fetcher::{Queue, QueuedFetch};
+
+#[quickcheck]
+fn bounded(max_size: MaxQueueSize, num_enqueues: u8) -> bool {
+ let mut queue = Queue::new(max_size);
+
+ for _ in 0..num_enqueues {
+ let _ = queue.enqueue(create_fetch());
+
+ // Invariant: length never exceeds capacity
+ if queue.len() > max_size.as_usize() {
+ return false;
+ }
+ }
+ true
+}
+
+#[quickcheck]
+fn rejection(max_size: MaxQueueSize) -> bool {
+ let mut queue = Queue::new(max_size);
+
+ // Fill to capacity with unique items
+ let items = unique_fetches(max_size.as_usize());
+ for item in &items {
+ if queue.enqueue(item.clone()) != Enqueue::Queued {
+ return false;
+ }
+ }
+
+ // Next enqueue of a NEW item must be rejected
+ matches!(queue.enqueue(create_fetch()), Enqueue::CapacityReached(_))
+}
+
+#[quickcheck]
+fn restored_after_dequeue(max_size: MaxQueueSize, dequeue_count: u8) -> bool {
+ let mut queue = Queue::new(max_size);
+
+ // Fill to capacity
+ for _ in 0..max_size.as_usize() {
+ let _ = queue.enqueue(create_fetch());
+ }
+
+ // Dequeue some items
+ let to_dequeue = (dequeue_count as usize).min(max_size.as_usize());
+ for _ in 0..to_dequeue {
+ let _ = queue.dequeue();
+ }
+
+ // Should be able to enqueue exactly that many items again
+ for _ in 0..to_dequeue {
+ if queue.enqueue(create_fetch()) != Enqueue::Queued {
+ return false;
+ }
+ }
+
+ // Next enqueue should fail
+ matches!(queue.enqueue(create_fetch()), Enqueue::CapacityReached(_))
+}
+
+#[quickcheck]
+fn capacity_reached_returns_same_item(item: QueuedFetch) -> bool {
+ let mut queue = create_queue(1);
+ let _ = queue.enqueue(create_fetch()); // Fill the queue
+
+ match queue.enqueue(item.clone()) {
+ Enqueue::CapacityReached(returned) => returned == item,
+ Enqueue::Merged => true, // If same rid, merge takes precedence
+ _ => false,
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/properties/dequeue.rs b/crates/radicle-protocol/src/fetcher/test/queue/properties/dequeue.rs
new file mode 100644
index 000000000..50bc08f3b
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/properties/dequeue.rs
@@ -0,0 +1,56 @@
+use qcheck_macros::quickcheck;
+
+use crate::fetcher::state::Enqueue;
+use crate::fetcher::test::queue::helpers::*;
+use crate::fetcher::{MaxQueueSize, Queue};
+
+#[quickcheck]
+fn enables_reenqueue(count: u8) -> bool {
+ let count = ((count as usize) % 20).max(1);
+ let items = unique_fetches(count);
+
+ let mut queue = create_queue(count); // Exact capacity
+
+ for item in &items {
+ let _ = queue.enqueue(item.clone());
+ }
+
+ // Queue is full, dequeue first item
+ let dequeued = queue.dequeue();
+ if dequeued.is_none() {
+ return false;
+ }
+
+ // Should be able to enqueue a new item now
+ queue.enqueue(create_fetch()) == Enqueue::Queued
+}
+
+#[quickcheck]
+fn empty_queue_returns_none(max_size: MaxQueueSize, dequeue_attempts: u8) -> bool {
+ let mut queue = Queue::new(max_size);
+
+ // Multiple dequeues from empty queue should all return None
+ for _ in 0..dequeue_attempts {
+ if queue.dequeue().is_some() {
+ return false;
+ }
+ }
+ true
+}
+
+#[quickcheck]
+fn drained_queue_returns_none(max_size: MaxQueueSize, fill_count: u8) -> bool {
+ let mut queue = Queue::new(max_size);
+ let fill = (fill_count as usize).min(max_size.as_usize());
+
+ // Fill then drain
+ for _ in 0..fill {
+ let _ = queue.enqueue(create_fetch());
+ }
+ for _ in 0..fill {
+ let _ = queue.dequeue();
+ }
+
+ // Should return None now
+ queue.dequeue().is_none()
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/properties/equality.rs b/crates/radicle-protocol/src/fetcher/test/queue/properties/equality.rs
new file mode 100644
index 000000000..44f199802
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/properties/equality.rs
@@ -0,0 +1,22 @@
+use qcheck_macros::quickcheck;
+
+use crate::fetcher::QueuedFetch;
+
+#[quickcheck]
+fn reflexive(item: QueuedFetch) -> bool {
+ item == item.clone()
+}
+
+#[quickcheck]
+fn symmetric(a: QueuedFetch, b: QueuedFetch) -> bool {
+ (a == b) == (b == a)
+}
+
+#[quickcheck]
+fn transitive(a: QueuedFetch, b: QueuedFetch, c: QueuedFetch) -> bool {
+ if a == b && b == c {
+ a == c
+ } else {
+ true
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/properties/fifo.rs b/crates/radicle-protocol/src/fetcher/test/queue/properties/fifo.rs
new file mode 100644
index 000000000..7742241a8
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/properties/fifo.rs
@@ -0,0 +1,75 @@
+use qcheck_macros::quickcheck;
+
+use crate::fetcher::state::Enqueue;
+use crate::fetcher::test::queue::helpers::*;
+use crate::fetcher::QueuedFetch;
+
+#[quickcheck]
+fn ordering(count: u8) -> bool {
+ let count = (count as usize) % 50; // Reasonable upper bound
+ if count == 0 {
+ return true;
+ }
+
+ let items = unique_fetches(count);
+ let mut queue = create_queue(count);
+
+ // Enqueue all items
+ for item in &items {
+ if queue.enqueue(item.clone()) != Enqueue::Queued {
+ return false;
+ }
+ }
+
+ // Dequeue and verify order
+ for expected in items {
+ match queue.dequeue() {
+ Some(actual) if actual.rid == expected.rid => continue,
+ _ => return false,
+ }
+ }
+
+ queue.is_empty()
+}
+
+#[quickcheck]
+fn interleaved_operations(ops: Vec<bool>) -> bool {
+ // Limit operations to avoid slow tests
+ let ops: Vec<_> = ops.into_iter().take(100).collect();
+ let capacity = ops.len().max(1);
+
+ let mut queue = create_queue(capacity);
+ let mut expected_order: Vec<QueuedFetch> = Vec::new();
+ let mut dequeue_index = 0;
+
+ for op in ops {
+ if op {
+ // Enqueue
+ let item = create_fetch();
+ match queue.enqueue(item.clone()) {
+ Enqueue::Queued => expected_order.push(item),
+ Enqueue::CapacityReached(_) => {} // Expected when full
+ Enqueue::Merged => {} // Can happen if same rid generated
+ }
+ } else {
+ // Dequeue
+ match queue.dequeue() {
+ Some(item) => {
+ if dequeue_index >= expected_order.len()
+ || item.rid != expected_order[dequeue_index].rid
+ {
+ return false;
+ }
+ dequeue_index += 1;
+ }
+ None => {
+ // Should only happen if we've dequeued everything we enqueued
+ if dequeue_index != expected_order.len() {
+ return false;
+ }
+ }
+ }
+ }
+ }
+ true
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/properties/merge.rs b/crates/radicle-protocol/src/fetcher/test/queue/properties/merge.rs
new file mode 100644
index 000000000..cdb426fb9
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/properties/merge.rs
@@ -0,0 +1,221 @@
+use std::time::Duration;
+
+use qcheck_macros::quickcheck;
+use radicle::storage::refs::RefsAt;
+use radicle::test::arbitrary;
+use radicle_core::RepoId;
+
+use crate::fetcher::state::Enqueue;
+use crate::fetcher::test::queue::helpers::*;
+use crate::fetcher::{MaxQueueSize, Queue, QueuedFetch};
+
+#[quickcheck]
+fn same_rid_merges_anywhere_in_queue(max_size: MaxQueueSize, merge_index: usize) -> bool {
+ if max_size.as_usize() < 2 {
+ return true; // Need at least 2 slots to test properly
+ }
+
+ let mut queue = Queue::new(max_size);
+ let items = unique_fetches(max_size.as_usize() - 1); // Leave room for potential new item
+
+ for item in &items {
+ let _ = queue.enqueue(item.clone());
+ }
+
+ if items.is_empty() {
+ return true;
+ }
+
+ // Try to enqueue an item with same rid as one already in queue
+ let target_index = merge_index % items.len();
+ let same_rid_item = QueuedFetch {
+ rid: items[target_index].rid,
+ from: arbitrary::gen(1), // Different from
+ refs_at: vec![arbitrary::gen(1)],
+ timeout: Duration::from_secs(60),
+ };
+
+ matches!(queue.enqueue(same_rid_item), Enqueue::Merged)
+}
+
+#[quickcheck]
+fn combines_refs(base_refs_count: u8, merge_refs_count: u8) -> bool {
+ let base_refs_count = (base_refs_count as usize) % 5;
+ let merge_refs_count = (merge_refs_count as usize) % 5;
+
+ let mut queue = create_queue(10);
+
+ let rid: RepoId = arbitrary::gen(1);
+ let base_refs: Vec<RefsAt> = (0..base_refs_count).map(|_| arbitrary::gen(1)).collect();
+ let merge_refs: Vec<RefsAt> = (0..merge_refs_count).map(|_| arbitrary::gen(1)).collect();
+
+ let base_item = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: base_refs.clone(),
+ timeout: Duration::from_secs(30),
+ };
+
+ let merge_item = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: merge_refs.clone(),
+ timeout: Duration::from_secs(30),
+ };
+
+ let _ = queue.enqueue(base_item);
+ let result = queue.enqueue(merge_item);
+
+ if result != Enqueue::Merged {
+ return false;
+ }
+
+ let dequeued = queue.dequeue().unwrap();
+
+ // If either was empty, result should be empty (fetch everything)
+ if base_refs.is_empty() || merge_refs.is_empty() {
+ dequeued.refs_at.is_empty()
+ } else {
+ // Otherwise refs should be combined
+ dequeued.refs_at.len() == base_refs_count + merge_refs_count
+ }
+}
+
+#[quickcheck]
+fn empty_refs_fetches_all() -> bool {
+ let mut queue = create_queue(10);
+ let rid: RepoId = arbitrary::gen(1);
+
+ // First enqueue with specific refs
+ let item_with_refs = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![arbitrary::gen(1), arbitrary::gen(1)],
+ timeout: Duration::from_secs(30),
+ };
+
+ // Second enqueue with empty refs (fetch everything)
+ let item_empty_refs = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ };
+
+ let _ = queue.enqueue(item_with_refs);
+ let _ = queue.enqueue(item_empty_refs);
+
+ let dequeued = queue.dequeue().unwrap();
+ dequeued.refs_at.is_empty() // Should fetch everything
+}
+
+#[quickcheck]
+fn longer_timeout_preserved(short_secs: u16, long_secs: u16) -> bool {
+ let short = Duration::from_secs(short_secs.min(long_secs) as u64);
+ let long = Duration::from_secs(short_secs.max(long_secs) as u64);
+
+ let mut queue = create_queue(10);
+ let rid: RepoId = arbitrary::gen(1);
+
+ let item_short = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: short,
+ };
+
+ let item_long = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: long,
+ };
+
+ // Test both orderings
+ let _ = queue.enqueue(item_short.clone());
+ let _ = queue.enqueue(item_long.clone());
+ let dequeued1 = queue.dequeue().unwrap();
+
+ let mut queue2 = create_queue(10);
+ let _ = queue2.enqueue(item_long);
+ let _ = queue2.enqueue(item_short);
+ let dequeued2 = queue2.dequeue().unwrap();
+
+ dequeued1.timeout == long && dequeued2.timeout == long
+}
+
+#[quickcheck]
+fn does_not_increase_queue_length() -> bool {
+ let mut queue = create_queue(10);
+ let rid: RepoId = arbitrary::gen(1);
+
+ let item1 = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![arbitrary::gen(1)],
+ timeout: Duration::from_secs(30),
+ };
+
+ let item2 = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![arbitrary::gen(1)],
+ timeout: Duration::from_secs(60),
+ };
+
+ let _ = queue.enqueue(item1);
+ let len_after_first = queue.len();
+
+ let _ = queue.enqueue(item2);
+ let len_after_merge = queue.len();
+
+ len_after_first == 1 && len_after_merge == 1
+}
+
+#[quickcheck]
+fn different_rid_accepted(base_item: QueuedFetch) -> bool {
+ let mut queue = create_queue(10);
+ let _ = queue.enqueue(base_item.clone());
+
+ // Item with different rid should be queued (not merged)
+ let different_rid = QueuedFetch {
+ rid: arbitrary::gen(1),
+ ..base_item
+ };
+
+ queue.enqueue(different_rid) == Enqueue::Queued
+}
+
+#[quickcheck]
+fn succeed_when_at_capacity() -> bool {
+ // When queue is at capacity, merging with existing item should still work
+ let mut queue = create_queue(2);
+ let rid: RepoId = arbitrary::gen(1);
+
+ let item1 = QueuedFetch {
+ rid,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ };
+
+ let item2 = QueuedFetch {
+ rid: arbitrary::gen(1), // Different rid
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ };
+
+ let merge_item = QueuedFetch {
+ rid, // Same as item1
+ from: arbitrary::gen(1),
+ refs_at: vec![arbitrary::gen(1)],
+ timeout: Duration::from_secs(60),
+ };
+
+ let _ = queue.enqueue(item1);
+ let _ = queue.enqueue(item2);
+
+ // Queue is now at capacity, but merge should still work
+ queue.enqueue(merge_item) == Enqueue::Merged
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/queue/unit.rs b/crates/radicle-protocol/src/fetcher/test/queue/unit.rs
new file mode 100644
index 000000000..e0065425d
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/queue/unit.rs
@@ -0,0 +1,113 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::Enqueue;
+use crate::fetcher::test::queue::helpers::*;
+use crate::fetcher::QueuedFetch;
+
+#[test]
+fn zero_timeout_accepted() {
+ let mut queue = create_queue(10);
+ let item = QueuedFetch {
+ rid: arbitrary::gen(1),
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::ZERO,
+ };
+ assert_eq!(queue.enqueue(item), Enqueue::Queued);
+}
+
+#[test]
+fn max_timeout_accepted() {
+ let mut queue = create_queue(10);
+ let item = QueuedFetch {
+ rid: arbitrary::gen(1),
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::MAX,
+ };
+ assert_eq!(queue.enqueue(item), Enqueue::Queued);
+}
+
+#[test]
+fn empty_refs_at_items_can_be_equal() {
+ let rid: RepoId = arbitrary::gen(1);
+ let from: NodeId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ let item1 = QueuedFetch {
+ rid,
+ from,
+ refs_at: vec![],
+ timeout,
+ };
+ let item2 = QueuedFetch {
+ rid,
+ from,
+ refs_at: vec![],
+ timeout,
+ };
+
+ assert_eq!(item1, item2);
+}
+
+#[test]
+fn merge_preserves_position_in_queue() {
+ let mut queue = create_queue(10);
+
+ let rid_first: RepoId = arbitrary::gen(1);
+ let rid_second: RepoId = arbitrary::gen(2);
+ let rid_third: RepoId = arbitrary::gen(3);
+
+ // Enqueue three items
+ let _ = queue.enqueue(QueuedFetch {
+ rid: rid_first,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ });
+ let _ = queue.enqueue(QueuedFetch {
+ rid: rid_second,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ });
+ let _ = queue.enqueue(QueuedFetch {
+ rid: rid_third,
+ from: arbitrary::gen(1),
+ refs_at: vec![],
+ timeout: Duration::from_secs(30),
+ });
+
+ // Merge into the second item
+ let result = queue.enqueue(QueuedFetch {
+ rid: rid_second,
+ from: arbitrary::gen(1),
+ refs_at: vec![arbitrary::gen(1)],
+ timeout: Duration::from_secs(60),
+ });
+ assert_eq!(result, Enqueue::Merged);
+
+ // Order should be preserved: first, second (merged), third
+ assert_eq!(queue.dequeue().unwrap().rid, rid_first);
+ assert_eq!(queue.dequeue().unwrap().rid, rid_second);
+ assert_eq!(queue.dequeue().unwrap().rid, rid_third);
+}
+
+#[test]
+fn capacity_takes_precedence_over_merge_for_new_items() {
+ let mut queue = create_queue(2);
+
+ // Fill to capacity with unique items
+ let _ = queue.enqueue(create_fetch());
+ let _ = queue.enqueue(create_fetch());
+
+ // New item (different rid) should be rejected
+ let new_item = create_fetch();
+ match queue.enqueue(new_item.clone()) {
+ Enqueue::CapacityReached(returned) => assert_eq!(returned, new_item),
+ _ => panic!("Expected CapacityReached"),
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state.rs b/crates/radicle-protocol/src/fetcher/test/state.rs
new file mode 100644
index 000000000..423c15556
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state.rs
@@ -0,0 +1,7 @@
+mod command;
+mod concurrent;
+mod config;
+mod dequeue;
+mod helpers;
+mod invariant;
+mod multinode;
diff --git a/crates/radicle-protocol/src/fetcher/test/state/command.rs b/crates/radicle-protocol/src/fetcher/test/state/command.rs
new file mode 100644
index 000000000..e5b2e3f8e
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/command.rs
@@ -0,0 +1,3 @@
+mod cancel;
+mod fetch;
+mod fetched;
diff --git a/crates/radicle-protocol/src/fetcher/test/state/command/cancel.rs b/crates/radicle-protocol/src/fetcher/test/state/command/cancel.rs
new file mode 100644
index 000000000..7fafba6b3
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/command/cancel.rs
@@ -0,0 +1,129 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event};
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::{ActiveFetch, FetcherState};
+
+#[test]
+fn single_ongoing() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ let event = state.cancel(command::Cancel { from: node_a });
+
+ match event {
+ event::Cancel::Canceled {
+ from,
+ active: ongoing,
+ queued,
+ } => {
+ assert_eq!(from, node_a);
+ assert_eq!(ongoing.len(), 1);
+ assert_eq!(
+ ongoing.get(&repo_1),
+ Some(&ActiveFetch {
+ from: node_a,
+ refs_at: refs_at_1,
+ })
+ );
+ assert!(queued.is_empty());
+ }
+ _ => panic!("Expected Canceled event"),
+ }
+ assert!(state.get_active_fetch(&repo_1).is_none());
+}
+
+#[test]
+fn ongoing_and_queued() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ let event = state.cancel(command::Cancel { from: node_a });
+
+ match event {
+ event::Cancel::Canceled {
+ active: ongoing,
+ queued,
+ ..
+ } => {
+ assert_eq!(ongoing.len(), 1);
+ assert!(ongoing.contains_key(&repo_1));
+ assert_eq!(queued.len(), 2);
+ }
+ _ => panic!("Expected Canceled event"),
+ }
+}
+
+#[test]
+fn non_existent_returns_unexpected() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_unknown: NodeId = arbitrary::gen(1);
+
+ let event = state.cancel(command::Cancel { from: node_unknown });
+
+ assert_eq!(event, event::Cancel::Unexpected { from: node_unknown });
+}
+
+#[test]
+fn cancellation_is_isolated() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let node_b: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_b,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ state.cancel(command::Cancel { from: node_a });
+
+ assert!(state.get_active_fetch(&repo_1).is_none());
+ assert!(state.get_active_fetch(&repo_2).is_some());
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/command/fetch.rs b/crates/radicle-protocol/src/fetcher/test/state/command/fetch.rs
new file mode 100644
index 000000000..303a3ec79
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/command/fetch.rs
@@ -0,0 +1,416 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event};
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::{ActiveFetch, FetcherState};
+
+#[test]
+fn fetch_start_first_fetch_for_node() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(2);
+ let timeout = Duration::from_secs(30);
+
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetch::Started {
+ rid: repo_1,
+ from: node_a,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ }
+ );
+ assert_eq!(
+ state.get_active_fetch(&repo_1),
+ Some(&ActiveFetch {
+ from: node_a,
+ refs_at: refs_at_1,
+ })
+ );
+}
+
+#[test]
+fn fetch_different_repo_same_node_within_capacity() {
+ let mut state = FetcherState::new(helpers::config(2, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ let event1 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(event1, event::Fetch::Started { .. }));
+
+ let event2 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ assert!(matches!(event2, event::Fetch::Started { rid, .. } if rid == repo_2));
+ assert!(state.get_active_fetch(&repo_1).is_some());
+ assert!(state.get_active_fetch(&repo_2).is_some());
+}
+
+#[test]
+fn fetch_same_repo_different_nodes_queues_second() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let node_b: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(1);
+ let timeout = Duration::from_secs(30);
+
+ let event1 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+ assert!(matches!(event1, event::Fetch::Started { .. }));
+
+ // Same repo from different node - gets queued since repo_1 is already active
+ let event2 = state.fetch(command::Fetch {
+ from: node_b,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ assert!(
+ matches!(event2, event::Fetch::Queued { rid, from } if rid == repo_1 && from == node_b)
+ );
+ // Only node_a's fetch is active
+ let active = state.get_active_fetch(&repo_1);
+ assert!(active.is_some());
+ assert_eq!(*active.unwrap().from(), node_a);
+}
+
+#[test]
+fn fetch_duplicate_returns_already_fetching() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(2);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetch::AlreadyFetching {
+ rid: repo_1,
+ from: node_a,
+ }
+ );
+}
+
+#[test]
+fn fetch_same_repo_different_refs_enqueues() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(1);
+ let refs_at_2 = helpers::gen_refs_at(2);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_2.clone(),
+ timeout,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetch::Queued {
+ rid: repo_1,
+ from: node_a,
+ }
+ );
+}
+
+#[test]
+fn fetch_at_capacity_enqueues() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetch::Queued {
+ rid: repo_2,
+ from: node_a,
+ }
+ );
+ assert!(state.get_active_fetch(&repo_1).is_some());
+ assert!(state.get_active_fetch(&repo_2).is_none());
+}
+
+#[test]
+fn fetch_queue_rejected_capacity_reached() {
+ let mut state = FetcherState::new(helpers::config(1, 2));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let repo_4: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ // Fill concurrency
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Fill queue (capacity 2)
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Exceed queue capacity
+ let refs_at_4 = helpers::gen_refs_at(1);
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_4,
+ refs_at: refs_at_4.clone(),
+ timeout,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetch::QueueAtCapacity {
+ rid: repo_4,
+ from: node_a,
+ refs_at: refs_at_4,
+ timeout,
+ capacity: 2,
+ }
+ );
+}
+
+#[test]
+fn fetch_queue_merges_already_queued() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let refs_at_2a = helpers::gen_refs_at(1);
+ let refs_at_2b = helpers::gen_refs_at(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2a.clone(),
+ timeout,
+ });
+
+ // Second fetch for same queued repo - should merge refs
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2b.clone(),
+ timeout,
+ });
+
+ // Returns Queued (merged)
+ assert_eq!(
+ event,
+ event::Fetch::Queued {
+ rid: repo_2,
+ from: node_a,
+ }
+ );
+
+ // Dequeue and verify refs were merged
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ let queued = state.dequeue(&node_a).unwrap();
+ assert_eq!(queued.rid, repo_2);
+ // refs_at should contain both sets of refs
+ assert_eq!(queued.refs_at.len(), refs_at_2a.len() + refs_at_2b.len());
+}
+
+#[test]
+fn fetch_queue_merge_empty_refs_fetches_all() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let refs_at_2 = helpers::gen_refs_at(2);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Queue with specific refs
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2.clone(),
+ timeout,
+ });
+
+ // Queue again with empty refs (fetch everything)
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: vec![],
+ timeout,
+ });
+
+ // Dequeue and verify refs became empty (fetch all)
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ let queued = state.dequeue(&node_a).unwrap();
+ assert_eq!(queued.rid, repo_2);
+ assert!(queued.refs_at.is_empty());
+}
+
+#[test]
+fn fetch_queue_merge_takes_longer_timeout() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let short_timeout = Duration::from_secs(10);
+ let long_timeout = Duration::from_secs(60);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout: short_timeout,
+ });
+
+ // Queue with short timeout
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout: short_timeout,
+ });
+
+ // Queue again with longer timeout
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout: long_timeout,
+ });
+
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ // Dequeue and verify timeout is the longer one
+ let queued = state.dequeue(&node_a).unwrap();
+ assert_eq!(queued.timeout, long_timeout);
+}
+
+#[test]
+fn fetch_after_previous_completed() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ assert!(matches!(event, event::Fetch::Started { .. }));
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/command/fetched.rs b/crates/radicle-protocol/src/fetcher/test/state/command/fetched.rs
new file mode 100644
index 000000000..2c7564a34
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/command/fetched.rs
@@ -0,0 +1,145 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event};
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::FetcherState;
+
+#[test]
+fn complete_single_ongoing() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let refs_at_1 = helpers::gen_refs_at(2);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1.clone(),
+ timeout,
+ });
+
+ let event = state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetched::Completed {
+ from: node_a,
+ rid: repo_1,
+ refs_at: refs_at_1,
+ }
+ );
+ assert!(state.get_active_fetch(&repo_1).is_none());
+}
+
+#[test]
+fn complete_then_dequeue_fifo() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let refs_at_2 = helpers::gen_refs_at(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Queue repo_2 first, then repo_3
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2.clone(),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ let event = state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+
+ assert!(matches!(event, event::Fetched::Completed { .. }));
+
+ // Dequeue next - FIFO: repo_2 was queued first
+ let queued = state.dequeue(&node_a);
+ assert!(queued.is_some());
+ let queued = queued.unwrap();
+ assert_eq!(queued.rid, repo_2);
+ assert_eq!(queued.from, node_a);
+ assert_eq!(queued.refs_at, refs_at_2);
+}
+
+#[test]
+fn complete_one_of_multiple() {
+ let mut state = FetcherState::new(helpers::config(3, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ let event = state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_2,
+ });
+
+ assert!(matches!(event, event::Fetched::Completed { rid, .. } if rid == repo_2));
+ assert!(state.get_active_fetch(&repo_1).is_some());
+ assert!(state.get_active_fetch(&repo_2).is_none());
+ assert!(state.get_active_fetch(&repo_3).is_some());
+}
+
+#[test]
+fn non_existent_returns_not_found() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+
+ let event = state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+
+ assert_eq!(
+ event,
+ event::Fetched::NotFound {
+ from: node_a,
+ rid: repo_1,
+ }
+ );
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/concurrent.rs b/crates/radicle-protocol/src/fetcher/test/state/concurrent.rs
new file mode 100644
index 000000000..d4b59bded
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/concurrent.rs
@@ -0,0 +1,106 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event};
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::FetcherState;
+
+#[test]
+fn interleaved_operations() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let node_b: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ // fetch(A, r1)
+ let e1 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(e1, event::Fetch::Started { .. }));
+
+ // fetch(B, r2)
+ let e2 = state.fetch(command::Fetch {
+ from: node_b,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(e2, event::Fetch::Started { .. }));
+
+ // fetched(A, r1)
+ let e3 = state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ assert!(matches!(e3, event::Fetched::Completed { .. }));
+
+ // fetch(A, r3)
+ let e4 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(e4, event::Fetch::Started { .. }));
+
+ // fetched(B, r2)
+ let e5 = state.fetched(command::Fetched {
+ from: node_b,
+ rid: repo_2,
+ });
+ assert!(matches!(e5, event::Fetched::Completed { .. }));
+
+ // Final state: only r3 from A ongoing
+ assert!(state.get_active_fetch(&repo_1).is_none());
+ assert!(state.get_active_fetch(&repo_2).is_none());
+ assert!(state.get_active_fetch(&repo_3).is_some());
+}
+
+#[test]
+fn fetched_then_cancel() {
+ let mut state = FetcherState::new(helpers::config(2, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Complete repo_1
+ let e1 = state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ assert!(matches!(e1, event::Fetched::Completed { .. }));
+
+ // Cancel remaining
+ let e2 = state.cancel(command::Cancel { from: node_a });
+ match e2 {
+ event::Cancel::Canceled {
+ active: ongoing, ..
+ } => {
+ assert_eq!(ongoing.len(), 1);
+ assert!(ongoing.contains_key(&repo_2));
+ }
+ _ => panic!("Expected Canceled"),
+ }
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/config.rs b/crates/radicle-protocol/src/fetcher/test/state/config.rs
new file mode 100644
index 000000000..ce148eb20
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/config.rs
@@ -0,0 +1,72 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event};
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::FetcherState;
+
+#[test]
+fn high_concurrency() {
+ let mut state = FetcherState::new(helpers::config(100, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ for i in 0..100 {
+ let repo: RepoId = arbitrary::gen(i + 1);
+ let event = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(
+ matches!(event, event::Fetch::Started { .. }),
+ "Fetch {} should start",
+ i
+ );
+ }
+
+ assert_eq!(
+ state
+ .active_fetches()
+ .iter()
+ .filter(|(_, f)| *f.from() == node_a)
+ .count(),
+ 100
+ );
+}
+
+#[test]
+fn min_queue_size() {
+ let mut state = FetcherState::new(helpers::config(1, 1));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ let event1 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(event1, event::Fetch::Queued { .. }));
+
+ let event2 = state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(event2, event::Fetch::QueueAtCapacity { .. }));
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/dequeue.rs b/crates/radicle-protocol/src/fetcher/test/state/dequeue.rs
new file mode 100644
index 000000000..21b3892ad
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/dequeue.rs
@@ -0,0 +1,112 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::command;
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::FetcherState;
+
+#[test]
+fn cannot_dequeue_while_node_at_capacity() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let refs_at_2 = helpers::gen_refs_at(3);
+ let timeout_2 = Duration::from_secs(42);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout: Duration::from_secs(10),
+ });
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2.clone(),
+ timeout: timeout_2,
+ });
+
+ let result = state.dequeue(&node_a);
+ assert!(result.is_none());
+
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+
+ let result = state.dequeue(&node_a);
+ let queued = result.unwrap();
+ assert_eq!(queued.rid, repo_2);
+ assert_eq!(queued.from, node_a);
+ assert_eq!(queued.refs_at, refs_at_2);
+ assert_eq!(queued.timeout, timeout_2);
+}
+
+#[test]
+fn maintains_fifo_order() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let repo_3: RepoId = arbitrary::gen(1);
+ let repo_4: RepoId = arbitrary::gen(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Queue in order: repo_2, repo_3, repo_4
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_3,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_4,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ assert_eq!(state.dequeue(&node_a).unwrap().rid, repo_2);
+
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_2,
+ });
+ assert_eq!(state.dequeue(&node_a).unwrap().rid, repo_3);
+
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_3,
+ });
+ assert_eq!(state.dequeue(&node_a).unwrap().rid, repo_4);
+ assert!(state.dequeue(&node_a).is_none());
+}
+
+#[test]
+fn empty_queue_returns_none() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+
+ assert!(state.dequeue(&node_a).is_none());
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/helpers.rs b/crates/radicle-protocol/src/fetcher/test/state/helpers.rs
new file mode 100644
index 000000000..2200316d8
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/helpers.rs
@@ -0,0 +1,17 @@
+use std::num::NonZeroUsize;
+
+use radicle::{storage::refs::RefsAt, test::arbitrary};
+
+use crate::fetcher::{Config, MaxQueueSize};
+
+pub fn config(max_concurrency: usize, max_queue_size: usize) -> Config {
+ Config::new()
+ .with_max_concurrency(NonZeroUsize::new(max_concurrency).unwrap())
+ .with_max_capacity(MaxQueueSize::new(
+ NonZeroUsize::new(max_queue_size).unwrap(),
+ ))
+}
+
+pub fn gen_refs_at(count: usize) -> Vec<RefsAt> {
+ (0..count).map(|_| arbitrary::gen(1)).collect()
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/invariant.rs b/crates/radicle-protocol/src/fetcher/test/state/invariant.rs
new file mode 100644
index 000000000..f587d6120
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/invariant.rs
@@ -0,0 +1,53 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::command;
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::FetcherState;
+
+#[test]
+fn queue_integrity_after_merge() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let repo_1: RepoId = arbitrary::gen(1);
+ let repo_2: RepoId = arbitrary::gen(1);
+ let refs_at_2a = helpers::gen_refs_at(1);
+ let refs_at_2b = helpers::gen_refs_at(1);
+ let timeout = Duration::from_secs(30);
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_1,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2a.clone(),
+ timeout,
+ });
+
+ // Second fetch for same repo - should merge
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_2,
+ refs_at: refs_at_2b.clone(),
+ timeout,
+ });
+
+ // Queue should have exactly one repo_2 entry (merged)
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_1,
+ });
+ let first = state.dequeue(&node_a);
+ assert!(first.is_some());
+ assert_eq!(first.unwrap().rid, repo_2);
+
+ let second = state.dequeue(&node_a);
+ assert!(second.is_none());
+}
diff --git a/crates/radicle-protocol/src/fetcher/test/state/multinode.rs b/crates/radicle-protocol/src/fetcher/test/state/multinode.rs
new file mode 100644
index 000000000..c8d2a2d27
--- /dev/null
+++ b/crates/radicle-protocol/src/fetcher/test/state/multinode.rs
@@ -0,0 +1,83 @@
+use std::time::Duration;
+
+use radicle::test::arbitrary;
+use radicle_core::{NodeId, RepoId};
+
+use crate::fetcher::state::{command, event};
+use crate::fetcher::test::state::helpers;
+use crate::fetcher::FetcherState;
+
+#[test]
+fn independent_queues() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let node_a: NodeId = arbitrary::gen(1);
+ let node_b: NodeId = arbitrary::gen(1);
+ let repo_a_active: RepoId = arbitrary::gen(1);
+ let repo_b_active: RepoId = arbitrary::gen(2);
+ let repo_a_queued: RepoId = arbitrary::gen(10);
+ let repo_b_queued: RepoId = arbitrary::gen(20);
+ let timeout = Duration::from_secs(30);
+
+ // Fill capacity for both nodes
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_a_active,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_b,
+ rid: repo_b_active,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Queue for both
+ state.fetch(command::Fetch {
+ from: node_a,
+ rid: repo_a_queued,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ state.fetch(command::Fetch {
+ from: node_b,
+ rid: repo_b_queued,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+
+ // Dequeue from A doesn't affect B
+ state.fetched(command::Fetched {
+ from: node_a,
+ rid: repo_a_active,
+ });
+ let a_item = state.dequeue(&node_a);
+ assert_eq!(a_item.unwrap().rid, repo_a_queued);
+
+ state.fetched(command::Fetched {
+ from: node_b,
+ rid: repo_b_active,
+ });
+ let b_item = state.dequeue(&node_b);
+ assert_eq!(b_item.unwrap().rid, repo_b_queued);
+}
+
+#[test]
+fn high_count() {
+ let mut state = FetcherState::new(helpers::config(1, 10));
+ let timeout = Duration::from_secs(30);
+
+ for i in 0..100 {
+ let node: NodeId = arbitrary::gen(i + 1);
+ let repo: RepoId = arbitrary::gen(i + 1);
+ let event = state.fetch(command::Fetch {
+ from: node,
+ rid: repo,
+ refs_at: helpers::gen_refs_at(1),
+ timeout,
+ });
+ assert!(matches!(event, event::Fetch::Started { .. }));
+ }
+
+ assert_eq!(state.active_fetches().len(), 100);
+}
diff --git a/crates/radicle-protocol/src/lib.rs b/crates/radicle-protocol/src/lib.rs
index 9619806b0..98fe9c0f0 100644
--- a/crates/radicle-protocol/src/lib.rs
+++ b/crates/radicle-protocol/src/lib.rs
@@ -1,5 +1,6 @@
pub mod bounded;
pub mod deserializer;
+pub mod fetcher;
pub mod service;
pub mod wire;
pub mod worker;
commit ae8d9a10ab224d11898af29d7e7436c9d3fae293
Author: Fintan Halpenny <fintan.halpenny@gmail.com>
Date: Sat Dec 13 09:52:27 2025 +0000
radicle(storage/refs): derive Hash for RefsAt
diff --git a/crates/radicle/src/storage/refs.rs b/crates/radicle/src/storage/refs.rs
index b6bbd3578..6ab4626ce 100644
--- a/crates/radicle/src/storage/refs.rs
+++ b/crates/radicle/src/storage/refs.rs
@@ -371,7 +371,7 @@ impl<V> Deref for SignedRefs<V> {
///
/// `RefsAt` can also be used for communicating announcements of updates
/// references to other nodes.
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
pub struct RefsAt {
Exit code: 0
shell: |
  export RUSTDOCFLAGS='-D warnings'
  cargo --version
  rustc --version
  cargo fmt --check
  cargo clippy --all-targets --workspace -- --deny warnings
  cargo build --all-targets --workspace
  cargo doc --workspace --no-deps --all-features
  cargo test --workspace --no-fail-fast
Commands:
$ podman run --name 4abdf646-ecbb-43f1-9982-11fb433438f7 -v /opt/radcis/ci.rad.levitte.org/cci/state/4abdf646-ecbb-43f1-9982-11fb433438f7/s:/4abdf646-ecbb-43f1-9982-11fb433438f7/s:ro -v /opt/radcis/ci.rad.levitte.org/cci/state/4abdf646-ecbb-43f1-9982-11fb433438f7/w:/4abdf646-ecbb-43f1-9982-11fb433438f7/w -w /4abdf646-ecbb-43f1-9982-11fb433438f7/w -v /opt/radcis/ci.rad.levitte.org/.radicle:/${id}/.radicle:ro -e RAD_HOME=/${id}/.radicle rust:trixie bash /4abdf646-ecbb-43f1-9982-11fb433438f7/s/script.sh
+ export 'RUSTDOCFLAGS=-D warnings'
+ RUSTDOCFLAGS='-D warnings'
+ cargo --version
info: syncing channel updates for '1.90-x86_64-unknown-linux-gnu'
info: latest update on 2025-09-18, rust version 1.90.0 (1159e78c4 2025-09-14)
cargo 1.90.0 (840b83a10 2025-07-30)
+ rustc --version
rustc 1.90.0 (1159e78c4 2025-09-14)
+ cargo fmt --check
+ cargo clippy --all-targets --workspace -- --deny warnings
Updating crates.io index
Downloading crates ...
Downloaded anstyle-parse v0.2.3
Downloaded anstyle v1.0.11
Downloaded addr2line v0.24.2
Downloaded linux-raw-sys v0.4.13
Downloaded libgit2-sys v0.17.0+1.8.1
Downloaded linux-raw-sys v0.9.4
Downloaded sqlite3-src v0.5.1
Downloaded libz-sys v1.1.16
Compiling libc v0.2.174
Compiling proc-macro2 v1.0.101
Compiling quote v1.0.41
Compiling unicode-ident v1.0.12
Checking cfg-if v1.0.0
Compiling shlex v1.3.0
Compiling version_check v0.9.4
Checking memchr v2.7.2
Compiling typenum v1.17.0
Checking getrandom v0.2.15
Compiling syn v2.0.106
Compiling jobserver v0.1.31
Compiling generic-array v0.14.7
Checking rand_core v0.6.4
Compiling cc v1.2.2
Compiling serde v1.0.219
Checking regex-syntax v0.8.5
Checking crypto-common v0.1.6
Checking aho-corasick v1.1.3
Checking smallvec v1.15.1
Compiling thiserror v2.0.17
Checking subtle v2.5.0
Checking once_cell v1.21.3
Checking regex-automata v0.4.9
Checking stable_deref_trait v1.2.0
Checking fastrand v2.3.0
Checking cpufeatures v0.2.12
Compiling parking_lot_core v0.9.12
Checking scopeguard v1.2.0
Checking lock_api v0.4.14
Checking block-buffer v0.10.4
Checking parking_lot v0.12.5
Checking digest v0.10.7
Compiling crc32fast v1.5.0
Checking bitflags v2.9.1
Checking byteorder v1.5.0
Checking tinyvec_macros v0.1.1
Checking tinyvec v1.6.0
Compiling typeid v1.0.3
Checking gix-trace v0.1.13
Checking home v0.5.9
Checking zlib-rs v0.5.2
Checking unicode-normalization v0.1.23
Checking gix-utils v0.3.0
Compiling synstructure v0.13.1
Checking libz-rs-sys v0.5.2
Checking bstr v1.12.0
Checking same-file v1.0.6
Checking walkdir v2.5.0
Checking flate2 v1.1.1
Checking prodash v30.0.1
Checking itoa v1.0.11
Compiling getrandom v0.3.3
Compiling heapless v0.8.0
Checking hash32 v0.3.1
Compiling serde_derive v1.0.219
Compiling thiserror-impl v2.0.17
Compiling zerofrom-derive v0.1.6
Compiling yoke-derive v0.7.5
Compiling zerovec-derive v0.10.3
Checking zerofrom v0.1.6
Checking yoke v0.7.5
Checking gix-validate v0.10.0
Compiling displaydoc v0.2.5
Checking gix-path v0.10.20
Checking gix-features v0.43.1
Checking zeroize v1.7.0
Checking zerovec v0.10.4
Compiling pkg-config v0.3.30
Checking litemap v0.7.5
Compiling icu_locid_transform_data v1.5.1
Compiling rustix v1.0.7
Checking writeable v0.5.5
Checking tinystr v0.7.6
Checking faster-hex v0.10.0
Checking icu_locid v1.5.0
Compiling icu_provider_macros v1.5.0
Compiling icu_properties_data v1.5.1
Checking linux-raw-sys v0.9.4
Checking sha1 v0.10.6
Checking icu_provider v1.5.0
Checking block-padding v0.3.3
Compiling icu_normalizer_data v1.5.1
Checking icu_locid_transform v1.5.0
Checking inout v0.1.3
Checking sha1-checked v0.10.0
Checking icu_collections v1.5.0
Compiling syn v1.0.109
Checking gix-hash v0.19.0
Checking icu_properties v1.5.1
Checking cipher v0.4.4
Checking utf16_iter v1.0.5
Checking utf8_iter v1.0.4
Checking write16 v1.0.0
Checking erased-serde v0.4.6
Checking serde_fmt v1.0.3
Checking value-bag-serde1 v1.11.1
Checking value-bag v1.11.1
Checking log v0.4.27
Checking percent-encoding v2.3.1
Compiling thiserror v1.0.69
Checking icu_normalizer v1.5.0
Compiling thiserror-impl v1.0.69
Checking idna_adapter v1.2.0
Checking idna v1.0.3
Checking form_urlencoded v1.2.1
Checking sha2 v0.10.8
Compiling vcpkg v0.2.15
Checking url v2.5.4
Checking tempfile v3.23.0
Compiling libz-sys v1.1.16
Checking universal-hash v0.5.1
Checking hashbrown v0.14.3
Compiling serde_json v1.0.140
Checking opaque-debug v0.3.1
Checking equivalent v1.0.1
Compiling autocfg v1.2.0
Compiling ref-cast v1.0.24
Checking ryu v1.0.17
Checking indexmap v2.2.6
Compiling num-traits v0.2.19
Compiling amplify_syn v2.0.1
Compiling libgit2-sys v0.17.0+1.8.1
Compiling ref-cast-impl v1.0.24
Checking signature v1.6.4
Checking ed25519 v1.5.3
Compiling amplify_derive v4.0.0
Checking aead v0.5.2
Checking ascii v1.1.0
Checking amplify_num v0.5.2
Checking dyn-clone v1.0.17
Checking ct-codecs v1.1.1
Checking ec25519 v0.1.0
Checking git-ref-format-core v0.6.0
Checking poly1305 v0.8.0
Checking chacha20 v0.9.1
Checking polyval v0.6.2
Checking amplify v4.6.0
Compiling sqlite3-src v0.5.1
Compiling serde_derive_internals v0.29.1
Checking hmac v0.12.1
Checking keccak v0.1.5
Checking cyphergraphy v0.3.0
Checking base64ct v1.6.0
Checking sha3 v0.10.8
Checking pem-rfc7468 v0.7.0
Checking pbkdf2 v0.12.2
Checking ghash v0.5.1
Checking ctr v0.9.2
Compiling schemars_derive v1.0.4
Checking aes v0.8.4
Checking rand v0.8.5
Compiling data-encoding v2.5.0
Checking base32 v0.4.0
Checking schemars v1.0.4
Checking cypheraddr v0.4.0
Compiling data-encoding-macro-internal v0.1.12
Checking qcheck v1.0.0
Checking aes-gcm v0.10.3
Checking ssh-encoding v0.2.0
Checking chacha20poly1305 v0.10.1
Checking blowfish v0.9.1
Checking cbc v0.1.2
Checking ssh-cipher v0.2.0
Checking bcrypt-pbkdf v0.10.0
Checking data-encoding-macro v0.1.14
Checking noise-framework v0.4.0
Checking socks5-client v0.4.1
Checking signature v2.2.0
Compiling crossbeam-utils v0.8.19
Checking base-x v0.2.11
Checking multibase v0.9.1
Checking ssh-key v0.6.6
Checking cyphernet v0.5.2
Checking radicle-ssh v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-ssh)
Checking crossbeam-channel v0.5.15
Checking lazy_static v1.5.0
Checking jiff v0.2.15
Checking nonempty v0.9.0
Checking siphasher v1.0.1
Checking anstyle-query v1.0.2
Checking radicle-git-metadata v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-metadata)
Checking radicle-dag v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-dag)
Checking winnow v0.7.13
Checking hashbrown v0.15.5
Checking utf8parse v0.2.1
Checking anstyle-parse v0.2.3
Checking gix-hashtable v0.9.0
Checking radicle-git-ref-format v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-ref-format)
Checking colorchoice v1.0.0
Checking iana-time-zone v0.1.60
Checking gix-date v0.10.5
Checking base64 v0.21.7
Checking gix-actor v0.35.4
Checking anstyle v1.0.11
Checking gix-object v0.50.2
Checking anstream v0.6.13
Checking chrono v0.4.38
Checking colored v2.1.0
Checking radicle-localtime v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-localtime)
Checking serde-untagged v0.1.7
Checking bytesize v2.0.1
Checking memmap2 v0.9.8
Checking fast-glob v0.3.3
Checking dunce v1.0.5
Checking tree-sitter-language v0.1.2
Checking gix-chunk v0.4.11
Checking gix-fs v0.16.1
Checking gix-commitgraph v0.29.0
Checking gix-tempfile v18.0.0
Checking mio v1.0.4
Checking gix-revwalk v0.21.0
Checking gix-quote v0.6.0
Checking sem_safe v0.2.0
Checking errno v0.3.13
Checking either v1.11.0
Checking shell-words v1.1.0
Checking gix-command v0.6.2
Checking signals_receipts v0.2.0
Compiling object v0.36.7
Compiling signal-hook v0.3.18
Checking gix-lock v18.0.0
Checking gix-url v0.32.0
Checking gix-config-value v0.15.1
Checking gix-sec v0.12.0
Checking signal-hook-registry v1.4.5
Checking adler2 v2.0.0
Checking gimli v0.31.1
Compiling rustix v0.38.34
Checking miniz_oxide v0.8.8
Checking gix-prompt v0.11.1
Checking addr2line v0.24.2
Checking radicle-signals v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-signals)
Checking gix-revision v0.35.0
Checking gix-traverse v0.47.0
Checking gix-diff v0.53.0
Checking mio v0.8.11
Checking gix-packetline v0.19.1
Compiling tree-sitter v0.24.4
Compiling anyhow v1.0.82
Checking rustc-demangle v0.1.26
Compiling unicode-segmentation v1.11.0
Compiling linux-raw-sys v0.4.13
Compiling convert_case v0.7.1
Checking backtrace v0.3.75
Checking gix-transport v0.48.0
Checking signal-hook-mio v0.2.4
Checking gix-pack v0.60.0
Checking gix-refspec v0.31.0
Checking gix-credentials v0.30.0
Checking gix-ref v0.53.1
Checking gix-shallow v0.5.0
Checking gix-negotiate v0.21.0
Compiling maybe-async v0.2.10
Checking regex v1.11.1
Compiling proc-macro-error-attr2 v2.0.0
Compiling portable-atomic v1.11.0
Checking arc-swap v1.7.1
Checking gix-odb v0.70.0
Compiling proc-macro-error2 v2.0.1
Checking gix-protocol v0.51.0
Compiling xattr v1.3.1
Compiling derive_more-impl v2.0.1
Compiling filetime v0.2.23
Checking uuid v1.16.0
Checking bitflags v1.3.2
Checking unicode-width v0.2.1
Checking bytes v1.10.1
Compiling litrs v0.4.1
Compiling document-features v0.2.11
Checking console v0.16.0
Checking crossterm v0.25.0
Checking derive_more v2.0.1
Compiling tar v0.4.40
Compiling git-ref-format-macro v0.6.0
Checking newline-converter v0.3.0
Checking snapbox-macros v0.3.8
Checking salsa20 v0.10.2
Checking fxhash v0.2.1
Checking normalize-line-endings v0.3.0
Checking unicode-width v0.1.11
Checking clap_lex v0.7.5
Checking strsim v0.11.1
Checking similar v2.5.0
Checking streaming-iterator v0.1.9
Checking sqlite3-sys v0.15.2
Compiling heck v0.5.0
Checking unit-prefix v0.5.1
Checking sqlite v0.32.0
Checking siphasher v0.3.11
Compiling clap_derive v4.5.41
Checking bloomy v1.2.0
Checking radicle-crypto v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-crypto)
Checking indicatif v0.18.0
Checking snapbox v0.4.17
Checking clap_builder v4.5.44
Checking inquire v0.7.5
Compiling radicle-surf v0.26.0
Checking scrypt v0.11.0
Checking git-ref-format v0.6.0
Checking crossterm v0.29.0
Checking unicode-display-width v0.3.0
Checking systemd-journal-logger v2.2.2
Checking serde_spanned v1.0.0
Checking toml_datetime v0.7.0
Compiling tree-sitter-c v0.23.2
Compiling tree-sitter-css v0.23.1
Compiling tree-sitter-rust v0.23.2
Compiling tree-sitter-typescript v0.23.2
Compiling tree-sitter-json v0.24.8
Compiling tree-sitter-toml-ng v0.6.0
Compiling tree-sitter-python v0.23.4
Compiling tree-sitter-html v0.23.2
Compiling tree-sitter-ruby v0.23.1
Compiling tree-sitter-go v0.23.4
Compiling tree-sitter-md v0.3.2
Compiling tree-sitter-bash v0.23.3
Checking toml_writer v1.0.2
Checking pin-project-lite v0.2.16
Checking radicle-std-ext v0.2.0
Checking tokio v1.47.1
Checking toml v0.9.5
Checking clap v4.5.44
Checking os_info v3.12.0
Checking yansi v0.5.1
Compiling radicle-cli v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli)
Compiling radicle-node v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-node)
Checking diff v0.1.13
Checking pretty_assertions v1.4.0
Checking human-panic v2.0.3
Checking clap_complete v4.5.60
Checking structured-logger v1.0.4
Checking radicle-systemd v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-systemd)
Checking tree-sitter-highlight v0.24.4
Checking itertools v0.14.0
Checking num-integer v0.1.46
Compiling qcheck-macros v1.0.0
Checking socket2 v0.5.7
Checking lexopt v0.3.0
Compiling escargot v0.5.10
Checking timeago v0.4.2
Checking bit-vec v0.8.0
Checking bit-set v0.8.0
Checking num-bigint v0.4.6
Checking rand_core v0.9.3
Compiling ahash v0.8.11
Checking num-iter v0.1.45
Checking num-complex v0.4.6
Checking env_filter v0.1.3
Checking zerocopy v0.7.35
Checking borrow-or-share v0.2.2
Checking fluent-uri v0.3.2
Checking num-rational v0.4.2
Checking env_logger v0.11.8
Checking num v0.4.3
Checking phf_shared v0.11.3
Compiling test-log-macros v0.2.18
Checking wait-timeout v0.2.1
Checking outref v0.5.2
Compiling radicle-remote-helper v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-remote-helper)
Checking ppv-lite86 v0.2.17
Checking quick-error v1.2.3
Checking fnv v1.0.7
Checking vsimd v0.8.0
Compiling paste v1.0.15
Checking rand_chacha v0.9.0
Checking rusty-fork v0.3.1
Checking test-log v0.2.18
Checking phf v0.11.3
Checking uuid-simd v0.8.0
Checking referencing v0.30.0
Checking fraction v0.15.3
Checking rand_xorshift v0.4.0
Checking rand v0.9.2
Checking fancy-regex v0.14.0
Checking email_address v0.2.9
Checking num-cmp v0.1.0
Checking base64 v0.22.1
Checking bytecount v0.6.8
Checking unarray v0.1.4
Checking emojis v0.6.4
Checking jsonschema v0.30.0
Checking proptest v1.9.0
Checking git2 v0.19.0
Checking radicle-oid v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-oid)
Checking radicle-git-ext v0.11.0
Checking radicle-term v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-term)
Checking radicle-cob v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cob)
Checking radicle-core v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-core)
Checking radicle v0.20.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle)
Checking radicle-fetch v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-fetch)
Checking radicle-cli-test v0.13.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli-test)
Checking radicle-schemars v0.6.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-schemars)
Checking radicle-protocol v0.4.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-protocol)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 44.33s
+ cargo build --all-targets --workspace
Compiling libc v0.2.174
Compiling cfg-if v1.0.0
Compiling shlex v1.3.0
Compiling memchr v2.7.2
Compiling typenum v1.17.0
Compiling serde v1.0.219
Compiling generic-array v0.14.7
Compiling jobserver v0.1.31
Compiling getrandom v0.2.15
Compiling cc v1.2.2
Compiling rand_core v0.6.4
Compiling crypto-common v0.1.6
Compiling regex-syntax v0.8.5
Compiling aho-corasick v1.1.3
Compiling smallvec v1.15.1
Compiling thiserror v2.0.17
Compiling subtle v2.5.0
Compiling stable_deref_trait v1.2.0
Compiling once_cell v1.21.3
Compiling cpufeatures v0.2.12
Compiling fastrand v2.3.0
Compiling regex-automata v0.4.9
Compiling scopeguard v1.2.0
Compiling lock_api v0.4.14
Compiling parking_lot_core v0.9.12
Compiling block-buffer v0.10.4
Compiling bitflags v2.9.1
Compiling digest v0.10.7
Compiling parking_lot v0.12.5
Compiling byteorder v1.5.0
Compiling tinyvec_macros v0.1.1
Compiling tinyvec v1.6.0
Compiling crc32fast v1.5.0
Compiling gix-trace v0.1.13
Compiling typeid v1.0.3
Compiling home v0.5.9
Compiling erased-serde v0.4.6
Compiling serde_fmt v1.0.3
Compiling zlib-rs v0.5.2
Compiling unicode-normalization v0.1.23
Compiling value-bag-serde1 v1.11.1
Compiling same-file v1.0.6
Compiling walkdir v2.5.0
Compiling gix-utils v0.3.0
Compiling zerofrom v0.1.6
Compiling value-bag v1.11.1
Compiling prodash v30.0.1
Compiling bstr v1.12.0
Compiling itoa v1.0.11
Compiling log v0.4.27
Compiling gix-validate v0.10.0
Compiling yoke v0.7.5
Compiling gix-path v0.10.20
Compiling zerovec v0.10.4
Compiling hash32 v0.3.1
Compiling zeroize v1.7.0
Compiling heapless v0.8.0
Compiling tinystr v0.7.6
Compiling getrandom v0.3.3
Compiling writeable v0.5.5
Compiling litemap v0.7.5
Compiling libz-rs-sys v0.5.2
Compiling flate2 v1.1.1
Compiling icu_locid v1.5.0
Compiling gix-features v0.43.1
Compiling faster-hex v0.10.0
Compiling linux-raw-sys v0.9.4
Compiling icu_provider v1.5.0
Compiling icu_locid_transform_data v1.5.1
Compiling sha1 v0.10.6
Compiling block-padding v0.3.3
Compiling inout v0.1.3
Compiling icu_locid_transform v1.5.0
Compiling sha1-checked v0.10.0
Compiling icu_properties_data v1.5.1
Compiling icu_collections v1.5.0
Compiling gix-hash v0.19.0
Compiling rustix v1.0.7
Compiling icu_properties v1.5.1
Compiling cipher v0.4.4
Compiling icu_normalizer_data v1.5.1
Compiling utf8_iter v1.0.4
Compiling write16 v1.0.0
Compiling utf16_iter v1.0.5
Compiling percent-encoding v2.3.1
Compiling form_urlencoded v1.2.1
Compiling sha2 v0.10.8
Compiling libz-sys v1.1.16
Compiling thiserror v1.0.69
Compiling universal-hash v0.5.1
Compiling equivalent v1.0.1
Compiling opaque-debug v0.3.1
Compiling hashbrown v0.14.3
Compiling ryu v1.0.17
Compiling libgit2-sys v0.17.0+1.8.1
Compiling indexmap v2.2.6
Compiling signature v1.6.4
Compiling icu_normalizer v1.5.0
Compiling serde_json v1.0.140
Compiling idna_adapter v1.2.0
Compiling idna v1.0.3
Compiling url v2.5.4
Compiling tempfile v3.23.0
Compiling ed25519 v1.5.3
Compiling ref-cast v1.0.24
Compiling aead v0.5.2
Compiling ct-codecs v1.1.1
Compiling ascii v1.1.0
Compiling amplify_num v0.5.2
Compiling dyn-clone v1.0.17
Compiling ec25519 v0.1.0
Compiling num-traits v0.2.19
Compiling poly1305 v0.8.0
Compiling git-ref-format-core v0.6.0
Compiling chacha20 v0.9.1
Compiling amplify v4.6.0
Compiling polyval v0.6.2
Compiling cyphergraphy v0.3.0
Compiling sqlite3-src v0.5.1
Compiling hmac v0.12.1
Compiling base64ct v1.6.0
Compiling keccak v0.1.5
Compiling pbkdf2 v0.12.2
Compiling pem-rfc7468 v0.7.0
Compiling sha3 v0.10.8
Compiling ghash v0.5.1
Compiling ctr v0.9.2
Compiling aes v0.8.4
Compiling rand v0.8.5
Compiling base32 v0.4.0
Compiling qcheck v1.0.0
Compiling cypheraddr v0.4.0
Compiling aes-gcm v0.10.3
Compiling ssh-encoding v0.2.0
Compiling schemars v1.0.4
Compiling chacha20poly1305 v0.10.1
Compiling cbc v0.1.2
Compiling blowfish v0.9.1
Compiling data-encoding v2.5.0
Compiling data-encoding-macro v0.1.14
Compiling bcrypt-pbkdf v0.10.0
Compiling ssh-cipher v0.2.0
Compiling noise-framework v0.4.0
Compiling socks5-client v0.4.1
Compiling base-x v0.2.11
Compiling signature v2.2.0
Compiling multibase v0.9.1
Compiling ssh-key v0.6.6
Compiling cyphernet v0.5.2
Compiling radicle-ssh v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-ssh)
Compiling crossbeam-utils v0.8.19
Compiling crossbeam-channel v0.5.15
Compiling lazy_static v1.5.0
Compiling jiff v0.2.15
Compiling nonempty v0.9.0
Compiling anstyle-query v1.0.2
Compiling siphasher v1.0.1
Compiling radicle-dag v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-dag)
Compiling radicle-git-metadata v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-metadata)
Compiling winnow v0.7.13
Compiling utf8parse v0.2.1
Compiling hashbrown v0.15.5
Compiling gix-hashtable v0.9.0
Compiling anstyle-parse v0.2.3
Compiling radicle-git-ref-format v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-ref-format)
Compiling base64 v0.21.7
Compiling gix-date v0.10.5
Compiling gix-actor v0.35.4
Compiling anstyle v1.0.11
Compiling iana-time-zone v0.1.60
Compiling colorchoice v1.0.0
Compiling anstream v0.6.13
Compiling chrono v0.4.38
Compiling gix-object v0.50.2
Compiling colored v2.1.0
Compiling radicle-localtime v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-localtime)
Compiling serde-untagged v0.1.7
Compiling bytesize v2.0.1
Compiling memmap2 v0.9.8
Compiling tree-sitter-language v0.1.2
Compiling dunce v1.0.5
Compiling fast-glob v0.3.3
Compiling gix-chunk v0.4.11
Compiling adler2 v2.0.0
Compiling gix-commitgraph v0.29.0
Compiling gix-fs v0.16.1
Compiling gix-revwalk v0.21.0
Compiling gix-tempfile v18.0.0
Compiling mio v1.0.4
Compiling gix-quote v0.6.0
Compiling sem_safe v0.2.0
Compiling errno v0.3.13
Compiling unicode-segmentation v1.11.0
Compiling shell-words v1.1.0
Compiling either v1.11.0
Compiling gix-command v0.6.2
Compiling signals_receipts v0.2.0
Compiling gix-lock v18.0.0
Compiling gix-url v0.32.0
Compiling gix-config-value v0.15.1
Compiling gix-sec v0.12.0
Compiling signal-hook-registry v1.4.5
Compiling gimli v0.31.1
Compiling signal-hook v0.3.18
Compiling gix-prompt v0.11.1
Compiling object v0.36.7
Compiling addr2line v0.24.2
Compiling radicle-signals v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-signals)
Compiling gix-revision v0.35.0
Compiling gix-traverse v0.47.0
Compiling miniz_oxide v0.8.8
Compiling gix-diff v0.53.0
Compiling gix-packetline v0.19.1
Compiling mio v0.8.11
Compiling tree-sitter v0.24.4
Compiling rustc-demangle v0.1.26
Compiling backtrace v0.3.75
Compiling signal-hook-mio v0.2.4
Compiling rustix v0.38.34
Compiling sqlite3-sys v0.15.2
Compiling sqlite v0.32.0
Compiling radicle-crypto v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-crypto)
Compiling gix-transport v0.48.0
Compiling gix-pack v0.60.0
Compiling gix-refspec v0.31.0
Compiling gix-credentials v0.30.0
Compiling gix-ref v0.53.1
Compiling gix-shallow v0.5.0
Compiling convert_case v0.7.1
Compiling gix-negotiate v0.21.0
Compiling regex v1.11.1
Compiling arc-swap v1.7.1
Compiling gix-odb v0.70.0
Compiling gix-protocol v0.51.0
Compiling derive_more-impl v2.0.1
Compiling xattr v1.3.1
Compiling uuid v1.16.0
Compiling filetime v0.2.23
Compiling unicode-width v0.2.1
Compiling bytes v1.10.1
Compiling bitflags v1.3.2
Compiling console v0.16.0
Compiling crossterm v0.25.0
Compiling tar v0.4.40
Compiling git-ref-format-macro v0.6.0
Compiling derive_more v2.0.1
Compiling anyhow v1.0.82
Compiling portable-atomic v1.11.0
Compiling newline-converter v0.3.0
Compiling snapbox-macros v0.3.8
Compiling salsa20 v0.10.2
Compiling fxhash v0.2.1
Compiling siphasher v0.3.11
Compiling unit-prefix v0.5.1
Compiling normalize-line-endings v0.3.0
Compiling unicode-width v0.1.11
Compiling similar v2.5.0
Compiling streaming-iterator v0.1.9
Compiling strsim v0.11.1
Compiling clap_lex v0.7.5
Compiling clap_builder v4.5.44
Compiling snapbox v0.4.17
Compiling inquire v0.7.5
Compiling bloomy v1.2.0
Compiling indicatif v0.18.0
Compiling scrypt v0.11.0
Compiling radicle-surf v0.26.0
Compiling crossterm v0.29.0
Compiling git-ref-format v0.6.0
Compiling unicode-display-width v0.3.0
Compiling systemd-journal-logger v2.2.2
Compiling toml_datetime v0.7.0
Compiling serde_spanned v1.0.0
Compiling tree-sitter-bash v0.23.3
Compiling tree-sitter-css v0.23.1
Compiling tree-sitter-python v0.23.4
Compiling tree-sitter-ruby v0.23.1
Compiling tree-sitter-toml-ng v0.6.0
Compiling tree-sitter-rust v0.23.2
Compiling tree-sitter-json v0.24.8
Compiling tree-sitter-go v0.23.4
Compiling tree-sitter-html v0.23.2
Compiling tree-sitter-c v0.23.2
Compiling tree-sitter-typescript v0.23.2
Compiling tree-sitter-md v0.3.2
Compiling radicle-std-ext v0.2.0
Compiling pin-project-lite v0.2.16
Compiling toml_writer v1.0.2
Compiling tokio v1.47.1
Compiling toml v0.9.5
Compiling clap v4.5.44
Compiling os_info v3.12.0
Compiling diff v0.1.13
Compiling yansi v0.5.1
Compiling radicle-node v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-node)
Compiling radicle-cli v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli)
Compiling pretty_assertions v1.4.0
Compiling human-panic v2.0.3
Compiling clap_complete v4.5.60
Compiling structured-logger v1.0.4
Compiling radicle-systemd v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-systemd)
Compiling tree-sitter-highlight v0.24.4
Compiling itertools v0.14.0
Compiling num-integer v0.1.46
Compiling socket2 v0.5.7
Compiling lexopt v0.3.0
Compiling timeago v0.4.2
Compiling bit-vec v0.8.0
Compiling escargot v0.5.10
Compiling bit-set v0.8.0
Compiling num-bigint v0.4.6
Compiling rand_core v0.9.3
Compiling num-iter v0.1.45
Compiling num-complex v0.4.6
Compiling env_filter v0.1.3
Compiling zerocopy v0.7.35
Compiling num-rational v0.4.2
Compiling borrow-or-share v0.2.2
Compiling fluent-uri v0.3.2
Compiling num v0.4.3
Compiling ahash v0.8.11
Compiling env_logger v0.11.8
Compiling phf_shared v0.11.3
Compiling wait-timeout v0.2.1
Compiling quick-error v1.2.3
Compiling fnv v1.0.7
Compiling ppv-lite86 v0.2.17
Compiling radicle-remote-helper v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-remote-helper)
Compiling vsimd v0.8.0
Compiling outref v0.5.2
Compiling rand_chacha v0.9.0
Compiling rusty-fork v0.3.1
Compiling uuid-simd v0.8.0
Compiling test-log v0.2.18
Compiling phf v0.11.3
Compiling referencing v0.30.0
Compiling fraction v0.15.3
Compiling rand v0.9.2
Compiling rand_xorshift v0.4.0
Compiling fancy-regex v0.14.0
Compiling email_address v0.2.9
Compiling num-cmp v0.1.0
Compiling git2 v0.19.0
Compiling unarray v0.1.4
Compiling base64 v0.22.1
Compiling bytecount v0.6.8
Compiling jsonschema v0.30.0
Compiling proptest v1.9.0
Compiling emojis v0.6.4
Compiling radicle-oid v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-oid)
Compiling radicle-core v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-core)
Compiling radicle-cob v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cob)
Compiling radicle-git-ext v0.11.0
Compiling radicle-term v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-term)
Compiling radicle v0.20.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle)
Compiling radicle-fetch v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-fetch)
Compiling radicle-protocol v0.4.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-protocol)
Compiling radicle-cli-test v0.13.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli-test)
Compiling radicle-schemars v0.6.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-schemars)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 54.71s
+ cargo doc --workspace --no-deps --all-features
Checking regex-automata v0.4.9
Compiling syn v1.0.109
Checking idna v1.0.3
Compiling num-traits v0.2.19
Checking radicle-ssh v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-ssh)
Checking url v2.5.4
Checking git2 v0.19.0
Checking proptest v1.9.0
Checking bstr v1.12.0
Compiling amplify_syn v2.0.1
Checking gix-validate v0.10.0
Checking gix-path v0.10.20
Checking gix-features v0.43.1
Checking git-ref-format-core v0.6.0
Compiling data-encoding-macro-internal v0.1.12
Checking gix-hash v0.19.0
Compiling amplify_derive v4.0.0
Checking radicle-git-ref-format v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-ref-format)
Checking gix-date v0.10.5
Checking data-encoding-macro v0.1.14
Checking radicle-oid v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-oid)
Checking multibase v0.9.1
Checking gix-actor v0.35.4
Checking gix-hashtable v0.9.0
Checking radicle-dag v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-dag)
Checking radicle-git-metadata v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-metadata)
Checking gix-object v0.50.2
Checking chrono v0.4.38
Checking gix-commitgraph v0.29.0
Checking radicle-localtime v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-localtime)
Checking gix-fs v0.16.1
Checking gix-tempfile v18.0.0
Checking gix-revwalk v0.21.0
Checking gix-quote v0.6.0
Checking gix-command v0.6.2
Checking gix-lock v18.0.0
Checking gix-url v0.32.0
Checking gix-config-value v0.15.1
Checking gix-traverse v0.47.0
Checking gix-revision v0.35.0
Checking gix-prompt v0.11.1
Checking radicle-signals v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-signals)
Checking gix-diff v0.53.0
Checking amplify v4.6.0
Checking gix-packetline v0.19.1
Checking regex v1.11.1
Checking gix-pack v0.60.0
Checking gix-transport v0.48.0
Checking cyphergraphy v0.3.0
Checking tree-sitter v0.24.4
Checking cypheraddr v0.4.0
Checking noise-framework v0.4.0
Checking gix-refspec v0.31.0
Checking socks5-client v0.4.1
Checking git-ref-format v0.6.0
Checking gix-credentials v0.30.0
Checking cyphernet v0.5.2
Checking gix-ref v0.53.1
Checking radicle-crypto v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-crypto)
Checking gix-shallow v0.5.0
Checking gix-negotiate v0.21.0
Checking radicle-core v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-core)
Checking radicle-cob v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cob)
Checking radicle-git-ext v0.11.0
Checking gix-odb v0.70.0
Checking uuid v1.16.0
Checking gix-protocol v0.51.0
Compiling radicle-cli v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli)
Checking human-panic v2.0.3
Checking radicle v0.20.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle)
Checking radicle-surf v0.26.0
Checking tree-sitter-toml-ng v0.6.0
Checking radicle-term v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-term)
Checking tree-sitter-highlight v0.24.4
Checking radicle-systemd v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-systemd)
Documenting radicle-systemd v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-systemd)
Documenting radicle v0.20.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle)
Documenting radicle-term v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-term)
Documenting radicle-cob v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cob)
Documenting radicle-core v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-core)
Documenting radicle-crypto v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-crypto)
Documenting radicle-signals v0.11.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-signals)
Documenting radicle-oid v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-oid)
Documenting radicle-git-ref-format v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-ref-format)
Documenting radicle-localtime v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-localtime)
Documenting radicle-ssh v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-ssh)
Documenting radicle-dag v0.10.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-dag)
Documenting radicle-git-metadata v0.1.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-git-metadata)
Checking radicle-fetch v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-fetch)
Documenting radicle-cli v0.17.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli)
Documenting radicle-cli-test v0.13.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli-test)
Checking radicle-protocol v0.4.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-protocol)
Documenting radicle-protocol v0.4.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-protocol)
Documenting radicle-node v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-node)
Documenting radicle-fetch v0.16.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-fetch)
Documenting radicle-schemars v0.6.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-schemars)
Documenting radicle-remote-helper v0.14.0 (/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-remote-helper)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 9.66s
Generated /4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/doc/radicle/index.html and 20 other files
+ cargo test --workspace --no-fail-fast
Finished `test` profile [unoptimized + debuginfo] target(s) in 0.17s
Running unittests src/lib.rs (target/debug/deps/radicle-17353fb59e43dcb4)
running 236 tests
test canonical::formatter::test::ascii_control_characters ... ok
test canonical::formatter::test::securesystemslib_asserts ... ok
test canonical::formatter::test::ordered_nested_object ... ok
test cob::cache::migrations::_2::tests::test_patch_json_deserialization ... ok
test cob::common::test::test_color ... ok
test cob::common::test::test_emojis ... ok
test cob::common::test::test_title ... ok
test cob::cache::tests::test_migrate_to ... ok
test cob::cache::tests::test_check_version ... ok
test cob::cache::migrations::_2::tests::test_migration_2 ... ok
test cob::identity::test::prop_json_eq_str ... ok
test cob::identity::test::test_identity_redact_revision ... ok
test cob::identity::test::test_identity_remove_delegate_concurrent ... ok
test cob::identity::test::test_identity_reject_concurrent ... ok
test cob::identity::test::test_identity_update_rejected ... ok
test cob::identity::test::test_identity_updates ... ok
test cob::issue::cache::tests::test_counts ... ok
test cob::issue::cache::tests::test_get ... ok
test cob::issue::cache::tests::test_is_empty ... ok
test cob::identity::test::test_valid_identity ... ok
test cob::issue::cache::tests::test_list ... ok
test cob::issue::cache::tests::test_list_by_status ... ok
test cob::issue::cache::tests::test_remove ... ok
test cob::identity::test::test_identity_updates_concurrent ... ok
test cob::issue::test::test_embeds ... ok
test cob::issue::test::test_embeds_edit ... ok
test cob::issue::test::test_invalid_actions ... ok
test cob::identity::test::test_identity_updates_concurrent_outdated ... ok
test cob::issue::test::test_invalid_cob ... ok
test cob::issue::test::test_invalid_tx_reference ... ok
test cob::issue::test::test_invalid_tx ... ok
test cob::issue::test::test_concurrency ... ok
test cob::issue::test::test_issue_all ... ok
test cob::issue::test::test_issue_comment ... ok
test cob::issue::test::test_issue_comment_redact ... ok
test cob::issue::test::test_issue_create_and_assign ... ok
test cob::issue::test::test_issue_create_and_get ... ok
test cob::issue::test::test_issue_create_and_change_state ... ok
test cob::issue::test::test_issue_create_and_reassign ... ok
test cob::issue::test::test_issue_create_and_unassign ... ok
test cob::issue::test::test_issue_edit_description ... ok
test cob::issue::test::test_issue_edit ... ok
test cob::issue::test::test_issue_multilines ... ok
test cob::issue::test::test_issue_state_serde ... ok
test cob::issue::test::test_ordering ... ok
test cob::patch::actions::test::test_review_edit ... ok
test cob::issue::test::test_issue_label ... ok
test cob::issue::test::test_issue_react ... ok
test cob::patch::cache::tests::test_get ... ok
test cob::patch::cache::tests::test_is_empty ... ok
test cob::issue::test::test_issue_reply ... ok
test cob::patch::cache::tests::test_list ... ok
test cob::patch::cache::tests::test_list_by_status ... ok
test cob::patch::encoding::review::test::test_review_deserialize_summary_migration_null_summary ... ok
test cob::patch::encoding::review::test::test_review_deserialize_summary_migration_with_summary ... ok
test cob::patch::encoding::review::test::test_review_deserialize_summary_migration_without_summary ... ok
test cob::patch::encoding::review::test::test_review_deserialize_summary_v2 ... ok
test cob::patch::encoding::review::test::test_review_summary ... ok
test cob::patch::test::test_json ... ok
test cob::patch::test::test_json_serialization ... ok
test cob::patch::cache::tests::test_remove ... ok
test cob::patch::test::test_patch_create_and_get ... ok
test cob::patch::cache::tests::test_counts ... ok
test cob::patch::test::test_patch_discussion ... ok
test cob::patch::test::test_patch_merge ... ok
test cob::patch::test::test_patch_redact ... ok
test cob::patch::test::test_patch_review_comment ... ok
test cob::patch::test::test_patch_review ... ok
test cob::patch::test::test_patch_review_duplicate ... ok
test cob::patch::test::test_patch_review_edit ... ok
test cob::patch::test::test_patch_review_edit_comment ... ok
test cob::patch::test::test_patch_review_remove_summary ... ok
test cob::patch::test::test_reactions_json_serialization ... ok
test cob::patch::test::test_revision_edit_redact ... ok
test cob::patch::test::test_revision_reaction ... ok
test cob::patch::test::test_revision_review_merge_redacted ... ok
test cob::stream::tests::test_all_from ... ok
test cob::patch::test::test_patch_review_revision_redact ... ok
test cob::stream::tests::test_all_from_until ... ok
test cob::stream::tests::test_all_until ... ok
test cob::stream::tests::test_regression_from_until ... ok
test cob::stream::tests::test_from_until ... ok
test cob::thread::tests::test_comment_edit_missing ... ok
test cob::thread::tests::test_comment_edit_redacted ... ok
test cob::thread::tests::test_comment_redact_missing ... ok
test cob::thread::tests::test_duplicate_comments ... ok
test cob::thread::tests::test_edit_comment ... ok
test cob::thread::tests::test_redact_comment ... ok
test cob::thread::tests::test_timeline ... ok
test git::canonical::quorum::test::merge_base_commutative ... ok
test git::canonical::quorum::test::test_merge_bases ... ok
test cob::patch::test::test_patch_update ... ok
test git::canonical::rules::tests::test_deserialization ... ok
test git::canonical::rules::tests::test_deserialize_extensions ... ok
test git::canonical::rules::tests::test_order ... ok
test git::canonical::rules::tests::test_roundtrip ... ok
test git::canonical::rules::tests::test_canonical ... ok
test git::canonical::rules::tests::test_rule_validate_success ... ok
test git::canonical::rules::tests::test_special_branches ... ok
test git::canonical::tests::test_commit_quorum_fork_of_a_fork ... ok
test git::canonical::tests::test_commit_quorum_forked_merge_commits ... ok
test git::canonical::tests::test_commit_quorum_groups ... ok
test git::canonical::tests::test_commit_quorum_linear ... ok
test git::canonical::tests::test_commit_quorum_merges ... ok
test git::canonical::tests::test_commit_quorum_single ... ok
test git::canonical::tests::test_commit_quorum_three_way_fork ... ok
test git::canonical::tests::test_commit_quorum_two_way_fork ... ok
test git::canonical::tests::test_quorum_different_types ... ok
test git::canonical::rules::tests::test_rule_validate_failures ... ok
test git::canonical::tests::test_tag_quorum ... ok
test git::test::test_version_from_str ... ok
test git::test::test_version_ord ... ok
test identity::did::test::test_did_encode_decode ... ok
test identity::did::test::test_did_vectors ... ok
test git::canonical::tests::test_quorum_properties ... ok
test cob::thread::tests::prop_ordering ... ok
test identity::doc::test::test_canonical_doc ... ok
test identity::doc::test::test_duplicate_dids ... ok
test identity::doc::test::test_future_version_error ... ok
test identity::doc::test::test_is_valid_version ... ok
test identity::doc::test::test_canonical_example ... ok
test identity::doc::test::test_not_found ... ok
test identity::doc::test::test_parse_version ... ok
test identity::doc::test::test_visibility_json ... ok
test identity::doc::update::test::test_can_update_crefs ... ok
test identity::doc::update::test::test_cannot_include_default_branch_rule ... ok
test identity::doc::update::test::test_default_branch_rule_exists_after_verification ... ok
test identity::project::test::test_project_name ... ok
test node::address::store::test::test_alias ... ok
test node::address::store::test::test_disconnected ... ok
test node::address::store::test::test_disconnected_ban ... ok
test node::address::store::test::test_empty ... ok
test node::address::store::test::test_entries ... ok
test node::address::store::test::test_get_none ... ok
test node::address::store::test::test_insert_and_get ... ok
test node::address::store::test::test_insert_and_remove ... ok
test node::address::store::test::test_insert_and_update ... ok
test node::address::store::test::test_insert_duplicate ... ok
test node::address::store::test::test_node_aliases ... ok
test node::address::store::test::test_remove_nothing ... ok
test node::command::test::command_result ... ok
test node::config::test::partial ... ok
test node::db::test::test_version ... ok
test node::features::test::test_operations ... ok
test node::notifications::store::test::test_branch_notifications ... ok
test node::notifications::store::test::test_clear ... ok
test node::notifications::store::test::test_cob_notifications ... ok
test node::notifications::store::test::test_counts_by_repo ... ok
test node::notifications::store::test::test_duplicate_notifications ... ok
test node::notifications::store::test::test_notification_status ... ok
test node::policy::store::test::test_follow_and_unfollow_node ... ok
test node::policy::store::test::test_node_aliases ... ok
test node::policy::store::test::test_node_policies ... ok
test node::policy::store::test::test_node_policy ... ok
test node::policy::store::test::test_repo_policies ... ok
test node::policy::store::test::test_repo_policy ... ok
test node::policy::store::test::test_seed_and_unseed_repo ... ok
test node::policy::store::test::test_update_alias ... ok
test node::policy::store::test::test_update_scope ... ok
test node::refs::store::test::test_count ... ok
test node::refs::store::test::test_set_and_delete ... ok
test node::refs::store::test::test_set_and_get ... ok
test node::routing::test::test_count ... ok
test node::routing::test::test_entries ... ok
test node::routing::test::test_insert_and_get ... ok
test identity::doc::test::test_max_delegates ... ok
test node::routing::test::test_insert_and_get_resources ... ok
test node::routing::test::test_insert_duplicate ... ok
test node::routing::test::test_insert_existing_updated_time ... ok
test node::routing::test::test_len ... ok
test node::routing::test::test_insert_and_remove ... ok
test node::routing::test::test_remove_many ... ok
test node::routing::test::test_remove_redundant ... ok
test node::routing::test::test_update_existing_multi ... ok
test node::sync::announce::test::all_synced_nodes_are_preferred_seeds ... ok
test node::sync::announce::test::announcer_adapts_target_to_reach ... ok
test node::routing::test::test_prune ... ok
test node::sync::announce::test::announcer_preferred_seeds_or_replica_factor ... ok
test node::sync::announce::test::announcer_reached_max_replication_target ... ok
test node::sync::announce::test::announcer_reached_min_replication_target ... ok
test node::sync::announce::test::announcer_synced_with_unknown_node ... ok
test node::sync::announce::test::announcer_reached_preferred_seeds ... ok
test node::sync::announce::test::announcer_with_replication_factor_zero_and_preferred_seeds ... ok
test node::sync::announce::test::announcer_timed_out ... ok
test node::sync::announce::test::construct_node_appears_in_multiple_input_sets ... ok
test node::sync::announce::test::construct_only_preferred_seeds_provided ... ok
test node::sync::announce::test::cannot_construct_announcer ... ok
test node::sync::announce::test::local_node_in_multiple_sets ... ok
test node::sync::announce::test::invariant_progress_should_match_state ... ok
test node::sync::announce::test::local_node_in_preferred_seeds ... ok
test node::sync::announce::test::local_node_in_synced_set ... ok
test node::sync::announce::test::local_node_only_in_all_sets_results_in_no_seeds_error ... ok
test node::sync::announce::test::local_node_in_unsynced_set ... ok
test node::sync::announce::test::preferred_seeds_already_synced ... ok
test node::sync::announce::test::synced_with_local_node_is_ignored ... ok
test node::sync::announce::test::synced_with_same_node_multiple_times ... ok
test node::sync::announce::test::timed_out_after_reaching_success ... ok
test node::sync::fetch::test::all_nodes_are_candidates ... ok
test node::sync::fetch::test::could_not_reach_target ... ok
test node::sync::fetch::test::ignores_duplicates_and_local_node ... ok
test node::sync::fetch::test::all_nodes_are_fetchable ... ok
test node::sync::fetch::test::preferred_seeds_target_returned_over_replicas ... ok
test node::sync::fetch::test::reaches_target_of_max_replicas ... ok
test node::sync::fetch::test::reaches_target_of_preferred_seeds ... ok
test node::sync::test::ensure_replicas_construction ... ok
test node::sync::test::replicas_constrain_to ... ok
test node::test::test_alias ... ok
test node::test::test_command_result ... ok
test node::test::test_user_agent ... ok
test node::sync::fetch::test::reaches_target_of_replicas ... ok
test node::timestamp::tests::test_timestamp_max ... ok
test profile::test::canonicalize_home ... ok
test profile::test::test_config ... ok
test rad::tests::test_checkout ... ok
test rad::tests::test_fork ... ok
test rad::tests::test_init ... ok
test profile::config::test::schema ... ok
test storage::git::tests::test_references_of ... ok
test storage::git::tests::test_sign_refs ... ok
test storage::git::transport::local::url::test::test_url_parse ... ok
test storage::git::transport::local::url::test::test_url_to_string ... ok
test storage::git::transport::remote::url::test::test_url_parse ... ok
test storage::refs::tests::prop_canonical_roundtrip ... ok
test storage::git::tests::test_remote_refs ... ok
test storage::tests::test_storage ... ok
test test::assert::test::assert_with_message ... ok
test test::assert::test::test_assert_no_move ... ok
test test::assert::test::test_assert_panic_0 - should panic ... ok
test test::assert::test::test_assert_panic_1 - should panic ... ok
test test::assert::test::test_assert_panic_2 - should panic ... ok
test test::assert::test::test_assert_succeed ... ok
test test::assert::test::test_panic_message ... ok
test version::test::test_version ... ok
test storage::refs::tests::test_rid_verification ... ok
test identity::doc::test::prop_encode_decode ... ok
test cob::patch::cache::tests::test_find_by_revision ... ok
test result: ok. 236 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 2.56s
Running unittests src/lib.rs (target/debug/deps/radicle_cli-edbd69a9593a9f0b)
running 46 tests
test commands::block::args::test::should_not_parse ... ok
test commands::block::args::test::should_parse_nid ... ok
test commands::block::args::test::should_parse_rid ... ok
test commands::clone::args::test::should_parse_rid_urn ... ok
test commands::clone::args::test::should_parse_rid_non_urn ... ok
test commands::clone::args::test::should_parse_rid_url ... ok
test commands::cob::args::test::should_allow_log_json_format ... ok
test commands::cob::args::test::should_allow_log_pretty_format ... ok
test commands::cob::args::test::should_allow_update_json_format ... ok
test commands::fork::args::test::should_not_parse_rid_url ... ok
test commands::cob::args::test::should_allow_show_json_format ... ok
test commands::cob::args::test::should_not_allow_show_pretty_format ... ok
test commands::cob::args::test::should_not_allow_update_pretty_format ... ok
test commands::fork::args::test::should_parse_rid_urn ... ok
test commands::id::args::test::should_not_parse_into_payload - should panic ... ok
test commands::fork::args::test::should_parse_rid_non_urn ... ok
test commands::id::args::test::should_not_clobber_payload_args ... ok
test commands::id::args::test::should_parse_into_payload ... ok
test commands::id::args::test::should_parse_multiple_payloads ... ok
test commands::id::args::test::should_not_parse_single_payload ... ok
test commands::id::args::test::should_not_parse_single_payloads ... ok
test commands::init::args::test::should_parse_rid_non_urn ... ok
test commands::id::args::test::should_parse_single_payload ... ok
test commands::init::args::test::should_not_parse_rid_url ... ok
test commands::inspect::test::test_tree ... ok
test commands::init::args::test::should_parse_rid_urn ... ok
test commands::patch::review::builder::tests::test_review_comments_basic ... ok
test commands::patch::review::builder::tests::test_review_comments_before ... ok
test commands::patch::review::builder::tests::test_review_comments_split_hunk ... ok
test commands::publish::args::test::should_not_parse_rid_url ... ok
test commands::patch::review::builder::tests::test_review_comments_multiline ... ok
test commands::publish::args::test::should_parse_rid_non_urn ... ok
test git::pretty_diff::test::test_pretty ... ignored
test commands::publish::args::test::should_parse_rid_urn ... ok
test git::unified_diff::test::test_diff_content_encode_decode_content ... ok
test commands::watch::args::test::should_parse_ref_str ... ok
test git::ddiff::tests::diff_encode_decode_ddiff_hunk ... ok
test git::unified_diff::test::test_diff_encode_decode_diff ... ok
test terminal::args::test::should_not_parse ... ok
test terminal::args::test::should_parse_rid ... ok
test terminal::format::test::test_bytes ... ok
test terminal::format::test::test_strip_comments ... ok
test terminal::args::test::should_parse_nid ... ok
test terminal::patch::test::test_edit_display_message ... ok
test terminal::patch::test::test_create_display_message ... ok
test terminal::patch::test::test_update_display_message ... ok
test result: ok. 45 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.01s
Running unittests src/main.rs (target/debug/deps/rad-4ed1fbcef7a99fe7)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running tests/commands.rs (target/debug/deps/commands-2321cc2144c5d04a)
running 111 tests
test framework_home ... ok
test git_push_and_fetch ... ok
test git_push_amend ... ok
test git_push_canonical_annotated_tags ... ok
test git_push_canonical_lightweight_tags ... ok
test git_push_force_with_lease ... ok
test git_push_diverge ... ok
test rad_auth ... ok
test rad_auth_errors ... ok
test rad_block ... ok
test rad_checkout ... ok
test git_tag ... ok
test git_push_rollback ... ok
test git_push_converge ... ok
test rad_clone ... ok
test rad_clone_bare ... ok
test rad_clean ... ok
test rad_clone_all ... ok
test rad_clone_connect ... ok
test rad_clone_unknown ... ok
test rad_clone_directory ... ok
test rad_cob_multiset ... ok
test rad_cob_log ... ok
test rad_cob_migrate ... ok
test rad_clone_partial_fail ... ok
test rad_cob_operations ... ok
test rad_cob_show ... ok
test rad_cob_update_identity ... ok
test rad_config ... ok
test rad_cob_update ... ok
test rad_diff ... ok
test rad_help ... ok
test rad_id_collaboration ... ignored, slow
test rad_id ... ok
test rad_id_conflict ... ok
test rad_id_private ... ok
test rad_id_multi_delegate ... ok
test rad_id_threshold ... ok
test rad_id_threshold_soft_fork ... ok
test rad_id_unauthorized_delegate ... ok
test rad_id_unknown_field ... ok
test rad_id_update_delete_field ... ok
test rad_init ... ignored, part of many other tests
test rad_init_bare ... ok
test rad_init_detached_head ... ok
test rad_init_existing ... ok
test rad_init_existing_bare ... ok
test rad_init_no_git ... ok
test rad_init_no_seed ... ok
test rad_init_private ... ok
test rad_inbox ... ok
test rad_fetch ... ok
test rad_init_private_no_seed ... ok
test rad_fork ... ok
test rad_init_private_clone ... ok
test rad_init_sync_not_connected ... ok
test rad_init_private_clone_seed ... ok
test rad_init_private_seed ... ok
test rad_init_sync_preferred ... ok
test rad_init_with_existing_remote ... ok
test rad_inspect ... ok
test rad_issue ... ok
test rad_jj_bare ... ok
test rad_jj_colocated_patch ... ok
test rad_key_mismatch ... ok
test rad_issue_list ... ok
test rad_merge_after_update ... ok
test rad_merge_no_ff ... ok
test rad_merge_via_push ... ok
test rad_node_connect ... ok
test rad_node_connect_without_address ... ok
test rad_patch ... ok
test rad_node ... ok
test rad_patch_ahead_behind ... ok
test rad_patch_change_base ... ok
test rad_patch_checkout ... ok
test rad_patch_checkout_revision ... ok
test rad_patch_checkout_force ... ok
test rad_patch_detached_head ... ok
test rad_patch_diff ... ok
test rad_init_sync_and_clone ... ok
test rad_init_sync_timeout ... ok
test rad_patch_draft ... ok
test rad_patch_edit ... ok
test rad_patch_fetch_2 ... FAILED
test rad_patch_merge_draft ... ok
test rad_patch_fetch_1 ... ok
test rad_patch_delete ... ok
test rad_patch_revert_merge ... ok
test rad_patch_update ... ok
test rad_patch_open_explore ... ok
test rad_patch_via_push ... FAILED
test rad_publish ... ok
test rad_review_by_hunk ... ok
test rad_seed_and_follow ... ok
test rad_seed_many ... ok
test rad_remote ... ok
test rad_self ... ok
test rad_sync_without_node ... ok
test rad_unseed ... ok
test rad_push_and_pull_patches ... ok
test rad_warn_old_nodes ... ok
test rad_unseed_many ... ok
test rad_watch ... ok
test rad_sync ... ok
test test_clone_without_seeds ... ok
test test_cob_deletion ... ok
test test_cob_replication ... ok
test rad_workflow ... ok
test rad_patch_pull_update ... ok
test test_replication_via_seed ... ok
failures:
---- rad_patch_fetch_2 stdout ----
1769468195 test: Using PATH ["/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli/target/debug", "/usr/local/cargo/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/tmp/.tmpen9tAV/alice/work"]
1769468195 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["init", "--name", "heartwood", "--description", "Radicle Heartwood Protocol & Stack", "--no-confirm", "--public", "-v"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["init"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["ls"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["node", "inventory"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: Using PATH ["/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli/target/debug", "/usr/local/cargo/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/tmp/.tmpen9tAV/alice/work"]
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["checkout", "-b", "alice/1", "-q"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["commit", "--allow-empty", "-m", "Changes #1", "-q"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["push", "rad", "-o", "patch.message=Changes", "HEAD:refs/patches"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["checkout", "master", "-q"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["branch", "-D", "alice/1", "-q"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["update-ref", "-d", "refs/remotes/rad/alice/1"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["update-ref", "-d", "refs/remotes/rad/patches/5e2dedcc5d515fcbc1cca483d3376609fe889bfb"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["gc", "--prune=now"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["branch", "-r"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["pull"] in `/tmp/.tmpen9tAV/alice/work`..
1769468196 test: rad-patch-fetch-2.md: Running `git` with ["branch", "-r"] in `/tmp/.tmpen9tAV/alice/work`..
thread 'rad_patch_fetch_2' panicked at crates/radicle-cli-test/src/lib.rs:503:36:
--- Expected
++++ actual: stdout
1 - rad/HEAD -> rad/master
2 1 | rad/master
3 2 | rad/patches/5e2dedcc5d515fcbc1cca483d3376609fe889bfb
Exit status: 0
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
---- rad_patch_via_push stdout ----
1769468197 test: Using PATH ["/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli/target/debug", "/usr/local/cargo/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/tmp/.tmpquCxFv/alice/work"]
1769468197 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["init", "--name", "heartwood", "--description", "Radicle Heartwood Protocol & Stack", "--no-confirm", "--public", "-v"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["init"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["ls"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-init.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["node", "inventory"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: Using PATH ["/4abdf646-ecbb-43f1-9982-11fb433438f7/w/crates/radicle-cli/target/debug", "/usr/local/cargo/bin", "/usr/local/sbin", "/usr/local/bin", "/usr/sbin", "/usr/bin", "/sbin", "/bin", "/tmp/.tmpquCxFv/alice/work"]
1769468197 test: rad-patch-via-push.md: Running `git` with ["checkout", "-b", "feature/1"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["commit", "-a", "-m", "Add things", "-q", "--allow-empty"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["push", "-o", "patch.message=Add things #1", "-o", "patch.message=See commits for details.", "rad", "HEAD:refs/patches"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `/4abdf646-ecbb-43f1-9982-11fb433438f7/w/target/debug/rad` with ["patch", "show", "6035d2f582afbe01ff23ea87528ae523d76875b6"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["branch", "-vv"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["status", "--short", "--branch"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["fetch"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["push"] in `/tmp/.tmpquCxFv/alice/work`..
1769468197 test: rad-patch-via-push.md: Running `git` with ["show-ref"] in `/tmp/.tmpquCxFv/alice/work`..
thread 'rad_patch_via_push' panicked at crates/radicle-cli-test/src/lib.rs:503:36:
--- Expected
++++ actual: stdout
1 1 | 42d894a83c9c356552a57af09ccdbd5587a99045 refs/heads/feature/1
2 2 | f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354 refs/heads/master
3 - f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354 refs/remotes/rad/HEAD
4 3 | f2de534b5e81d7c6e2dcaf58c3dd91573c0a0354 refs/remotes/rad/master
5 4 | 42d894a83c9c356552a57af09ccdbd5587a99045 refs/remotes/rad/patches/6035d2f582afbe01ff23ea87528ae523d76875b6
Exit status: 0
failures:
rad_patch_fetch_2
rad_patch_via_push
test result: FAILED. 107 passed; 2 failed; 2 ignored; 0 measured; 0 filtered out; finished in 69.17s
error: test failed, to rerun pass `-p radicle-cli --test commands`
Running unittests src/lib.rs (target/debug/deps/radicle_cli_test-271eee1f80c41c0d)
running 3 tests
test tests::test_parse ... ok
test tests::test_run ... ok
test tests::test_example_spaced_brackets ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_cob-84c4e41a69174119)
running 8 tests
test object::tests::test_serde ... ok
test tests::git::list_cobs ... ok
test tests::git::update_cob ... ok
test tests::git::roundtrip ... ok
test type_name::test::valid_typenames ... ok
test tests::invalid_parse_refstr ... ok
test tests::parse_refstr ... ok
test tests::git::traverse_cobs ... ok
test result: ok. 8 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s
Running unittests src/lib.rs (target/debug/deps/radicle_core-ef9dfc09b2e6d61d)
running 2 tests
test repo::test::assert_prop_roundtrip_parse ... ok
test repo::serde_impls::test::assert_prop_roundtrip_serde_json ... ok
test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_crypto-2cdf91f3179b3527)
running 12 tests
test ssh::fmt::test::test_key ... ok
test ssh::fmt::test::test_fingerprint ... ok
test ssh::keystore::tests::test_init_no_passphrase ... ok
test ssh::test::prop_encode_decode_sk ... ok
test ssh::test::test_agent_encoding_remove ... ok
test ssh::test::test_agent_encoding_sign ... ok
test tests::prop_encode_decode ... ok
test tests::test_e25519_dh ... ok
test tests::test_encode_decode ... ok
test tests::prop_key_equality ... ok
test ssh::keystore::tests::test_signer ... ok
test ssh::keystore::tests::test_init_passphrase ... ok
test result: ok. 12 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.94s
Running unittests src/lib.rs (target/debug/deps/radicle_dag-05f7457e7194a495)
running 20 tests
test tests::test_complex ... ok
test tests::test_diamond ... ok
test tests::test_contains ... ok
test tests::test_cycle ... ok
test tests::test_dependencies ... ok
test tests::test_fold_diamond ... ok
test tests::test_fold_multiple_roots ... ok
test tests::test_fold_reject ... ok
test tests::test_fold_sorting_1 ... ok
test tests::test_fold_sorting_2 ... ok
test tests::test_is_empty ... ok
test tests::test_get ... ok
test tests::test_prune_1 ... ok
test tests::test_prune_2 ... ok
test tests::test_prune_by_sorting ... ok
test tests::test_remove ... ok
test tests::test_len ... ok
test tests::test_merge_1 ... ok
test tests::test_merge_2 ... ok
test tests::test_siblings ... ok
test result: ok. 20 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_fetch-55beca16093fba7c)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_git_metadata-ccdb6d430095148e)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_git_ref_format-b808ac22aa3ddce7)
running 9 tests
test test::pattern ... ok
test test::qualified ... ok
test test::component_invalid - should panic ... ok
test test::component ... ok
test test::refname ... ok
test test::qualified_invalid - should panic ... ok
test test::refname_invalid - should panic ... ok
test test::qualified_pattern ... ok
test test::qualified_pattern_invalid - should panic ... ok
test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_localtime-b324a722f6463902)
running 1 test
test serde_impls::test::test_localtime ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_node-9731215c7afa8b2d)
running 74 tests
test reactor::timer::tests::test_next ... ok
test reactor::timer::tests::test_wake ... ok
test reactor::timer::tests::test_wake_exact ... ok
test control::tests::test_control_socket ... ok
test fingerprint::tests::matching ... ok
test control::tests::test_seed_unseed ... ok
test tests::e2e::missing_default_branch ... ok
test tests::e2e::test_catchup_on_refs_announcements ... ok
test tests::e2e::missing_delegate_default_branch ... ok
test tests::e2e::test_background_foreground_fetch ... ok
test tests::e2e::test_channel_reader_limit ... ok
test tests::e2e::test_clone ... ok
test tests::e2e::test_dont_fetch_owned_refs ... ok
test tests::e2e::test_connection_crossing ... ok
test tests::e2e::test_fetch_followed_remotes ... ok
test tests::e2e::test_concurrent_fetches ... ok
test tests::e2e::test_fetch_preserve_owned_refs ... ok
test tests::e2e::test_fetch_unseeded ... ok
test tests::e2e::test_fetch_up_to_date ... ok
test tests::e2e::test_inventory_sync_basic ... ok
test tests::e2e::test_fetch_emits_canonical_ref_update ... ok
test tests::e2e::test_large_fetch ... ok
test tests::e2e::test_migrated_clone ... ok
test tests::e2e::test_missing_remote ... ok
test tests::e2e::test_multiple_offline_inits ... ok
test tests::e2e::test_non_fastforward_sigrefs ... ok
test tests::e2e::test_outdated_delegate_sigrefs ... ok
test tests::e2e::test_outdated_sigrefs ... ok
test tests::e2e::test_replication ... ok
test tests::e2e::test_replication_invalid ... ok
test tests::e2e::test_inventory_sync_bridge ... ok
test tests::e2e::test_inventory_sync_ring ... ok
test tests::e2e::test_replication_ref_in_sigrefs ... ok
test tests::e2e::test_inventory_sync_star ... ok
test tests::test_announcement_rebroadcast ... ok
test tests::test_announcement_rebroadcast_duplicates ... ok
test tests::test_announcement_rebroadcast_timestamp_filtered ... ok
test tests::test_announcement_relay ... ok
test tests::test_connection_kept_alive ... ok
test tests::test_disconnecting_unresponsive_peer ... ok
test tests::test_fetch_missing_inventory_on_gossip ... ok
test tests::test_fetch_missing_inventory_on_schedule ... ok
test tests::test_inbound_connection ... ok
test tests::test_inventory_decode ... ok
test tests::test_init_and_seed ... ok
test tests::test_inventory_relay ... ok
test tests::test_inventory_relay_bad_timestamp ... ok
test tests::test_inventory_sync ... ok
test tests::test_maintain_connections ... ok
test tests::test_maintain_connections_failed_attempt ... ok
test tests::test_maintain_connections_transient ... ok
test tests::test_outbound_connection ... ok
test tests::test_persistent_peer_connect ... ok
test tests::test_inventory_pruning ... ok
test tests::test_persistent_peer_reconnect_success ... ok
test tests::test_persistent_peer_reconnect_attempt ... ok
test tests::test_ping_response ... ok
test tests::test_queued_fetch_from_ann_same_rid ... ok
test tests::test_queued_fetch_from_command_same_rid ... ok
test tests::test_queued_fetch_max_capacity ... ok
test tests::test_redundant_connect ... ok
test tests::test_refs_announcement_followed ... ok
test tests::test_refs_announcement_fetch_trusted_no_inventory ... ok
test tests::test_refs_announcement_no_subscribe ... ok
test tests::test_refs_announcement_offline ... ok
test tests::test_refs_announcement_relay_private ... ok
test tests::test_refs_announcement_relay_public ... ok
test tests::test_refs_synced_event ... ok
test tests::test_seed_repo_subscribe ... ok
test wire::test::test_inventory_ann_with_extension ... ok
test wire::test::test_pong_message_with_extension ... ok
test tests::test_seeding ... ok
test tests::prop_inventory_exchange_dense ... ok
test tests::test_announcement_message_amplification ... ok
test result: ok. 74 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 17.23s
Running unittests src/main.rs (target/debug/deps/radicle_node-ccb4efa548845ab3)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_oid-d6586171c5e5bfc1)
running 10 tests
test fmt::test::fixture ... ok
test fmt::test::zero ... ok
test fmt::test::git2 ... ok
test git2::test::zero ... ok
test gix::test::zero ... ok
test str::test::fixture ... ok
test fmt::test::gix ... ok
test str::test::git2_roundtrip ... ok
test str::test::gix_roundrip ... ok
test str::test::zero ... ok
test result: ok. 10 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_protocol-2eb9deac47943445)
running 99 tests
test deserializer::test::test_unparsed ... ok
test deserializer::test::prop_decode_next ... ok
test deserializer::test::test_decode_next ... ok
test fetcher::test::queue::properties::capacity::capacity_reached_returns_same_item ... ok
test fetcher::test::queue::properties::dequeue::drained_queue_returns_none ... ok
test fetcher::test::queue::properties::dequeue::empty_queue_returns_none ... ok
test fetcher::test::queue::properties::dequeue::enables_reenqueue ... ok
test fetcher::test::queue::properties::equality::reflexive ... ok
test fetcher::test::queue::properties::equality::symmetric ... ok
test fetcher::test::queue::properties::equality::transitive ... ok
test fetcher::test::queue::properties::capacity::rejection ... ok
test fetcher::test::queue::properties::fifo::interleaved_operations ... ok
test fetcher::test::queue::properties::merge::combines_refs ... ok
test fetcher::test::queue::properties::merge::different_rid_accepted ... ok
test fetcher::test::queue::properties::capacity::bounded ... ok
test fetcher::test::queue::properties::fifo::ordering ... ok
test fetcher::test::queue::properties::merge::does_not_increase_queue_length ... ok
test fetcher::test::queue::properties::merge::longer_timeout_preserved ... ok
test fetcher::test::queue::properties::merge::empty_refs_fetches_all ... ok
test fetcher::test::queue::unit::capacity_takes_precedence_over_merge_for_new_items ... ok
test fetcher::test::queue::unit::empty_refs_at_items_can_be_equal ... ok
test fetcher::test::queue::unit::max_timeout_accepted ... ok
test fetcher::test::queue::unit::merge_preserves_position_in_queue ... ok
test fetcher::test::queue::unit::zero_timeout_accepted ... ok
test fetcher::test::state::command::cancel::cancellation_is_isolated ... ok
test fetcher::test::state::command::cancel::non_existent_returns_unexpected ... ok
test fetcher::test::state::command::cancel::ongoing_and_queued ... ok
test fetcher::test::state::command::cancel::single_ongoing ... ok
test fetcher::test::state::command::fetch::fetch_after_previous_completed ... ok
test fetcher::test::state::command::fetch::fetch_at_capacity_enqueues ... ok
test fetcher::test::state::command::fetch::fetch_different_repo_same_node_within_capacity ... ok
test fetcher::test::state::command::fetch::fetch_duplicate_returns_already_fetching ... ok
test fetcher::test::state::command::fetch::fetch_queue_merge_empty_refs_fetches_all ... ok
test fetcher::test::state::command::fetch::fetch_queue_merge_takes_longer_timeout ... ok
test fetcher::test::state::command::fetch::fetch_queue_merges_already_queued ... ok
test fetcher::test::state::command::fetch::fetch_queue_rejected_capacity_reached ... ok
test fetcher::test::state::command::fetch::fetch_same_repo_different_nodes_queues_second ... ok
test fetcher::test::state::command::fetch::fetch_same_repo_different_refs_enqueues ... ok
test fetcher::test::state::command::fetch::fetch_start_first_fetch_for_node ... ok
test fetcher::test::state::command::fetched::complete_one_of_multiple ... ok
test fetcher::test::state::command::fetched::complete_single_ongoing ... ok
test fetcher::test::state::command::fetched::complete_then_dequeue_fifo ... ok
test fetcher::test::state::command::fetched::non_existent_returns_not_found ... ok
test fetcher::test::state::concurrent::fetched_then_cancel ... ok
test fetcher::test::state::concurrent::interleaved_operations ... ok
test fetcher::test::state::config::high_concurrency ... ok
test fetcher::test::state::config::min_queue_size ... ok
test fetcher::test::state::dequeue::cannot_dequeue_while_node_at_capacity ... ok
test fetcher::test::state::dequeue::empty_queue_returns_none ... ok
test fetcher::test::state::dequeue::maintains_fifo_order ... ok
test fetcher::test::state::invariant::queue_integrity_after_merge ... ok
test fetcher::test::queue::properties::merge::succeed_when_at_capacity ... ok
test fetcher::test::state::multinode::independent_queues ... ok
test service::filter::test::compatible ... ok
test service::filter::test::test_parameters ... ok
test service::filter::test::test_sizes ... ok
test service::gossip::store::test::test_announced ... ok
test service::limiter::test::test_limitter_different_rates ... ok
test service::limiter::test::test_limitter_multi ... ok
test service::limiter::test::test_limitter_refill ... ok
test fetcher::test::state::multinode::high_count ... ok
test service::message::tests::test_inventory_limit ... ok
test service::message::tests::prop_refs_announcement_signing ... ok
test service::message::tests::test_ref_remote_limit ... ok
test wire::frame::test::test_encode_git_large ... ok
test wire::frame::test::test_stream_id ... ok
test wire::message::tests::prop_message_decoder ... ok
test wire::message::tests::prop_roundtrip_address ... ok
test wire::message::tests::prop_roundtrip_message ... ok
test wire::message::tests::prop_zero_bytes_encode_decode ... ok
test wire::message::tests::test_inv_ann_max_size ... ok
test wire::message::tests::test_node_ann_max_size ... ok
test wire::message::tests::test_ping_encode_size_overflow - should panic ... ok
test wire::message::tests::test_pingpong_encode_max_size ... ok
test wire::message::tests::test_pong_encode_size_overflow - should panic ... ok
test service::message::tests::test_node_announcement_validate ... ok
test wire::tests::prop_oid ... ok
test wire::tests::prop_roundtrip_filter ... ok
test wire::tests::prop_roundtrip_publickey ... ok
test wire::tests::prop_roundtrip_refs ... ok
test wire::tests::prop_roundtrip_repoid ... ok
test wire::tests::prop_roundtrip_signed_refs ... ok
test wire::tests::prop_roundtrip_tuple ... ok
test wire::tests::prop_roundtrip_u16 ... ok
test wire::tests::prop_roundtrip_u32 ... ok
test wire::tests::prop_roundtrip_u64 ... ok
test wire::tests::prop_roundtrip_vec ... ok
test wire::tests::prop_signature ... ok
test wire::tests::prop_string ... ok
test wire::tests::test_alias ... ok
test wire::tests::test_bounded_vec_limit ... ok
test wire::tests::test_filter_invalid ... ok
test wire::tests::test_string ... ok
test wire::varint::test::prop_roundtrip_varint ... ok
test wire::varint::test::test_encode_overflow - should panic ... ok
test wire::varint::test::test_encoding ... ok
test wire::message::tests::test_refs_ann_max_size ... ok
test fetcher::test::queue::properties::capacity::restored_after_dequeue ... ok
test fetcher::test::queue::properties::merge::same_rid_merges_anywhere_in_queue ... ok
test result: ok. 99 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 7.76s
Running unittests src/main.rs (target/debug/deps/git_remote_rad-403bd130a4109572)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/main.rs (target/debug/deps/radicle_schemars-3814598870a0c326)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_signals-bb66f1d798396f61)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_ssh-ced840136c32d8e4)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_systemd-36e3b0b1253d2879)
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Running unittests src/lib.rs (target/debug/deps/radicle_term-fac874e5460320c3)
running 21 tests
test ansi::tests::colors_disabled ... ok
test cell::test::test_width ... ok
test ansi::tests::wrapping ... ok
test element::test::test_spaced ... ok
test ansi::tests::colors_enabled ... ok
test element::test::test_width ... ok
test element::test::test_truncate ... ok
test table::test::test_table ... ok
test table::test::test_table_border_truncated ... ok
test table::test::test_table_border_maximized ... ok
test table::test::test_table_truncate ... ok
test table::test::test_table_border ... ok
test table::test::test_table_unicode ... ok
test table::test::test_table_unicode_truncate ... ok
test table::test::test_truncate ... ok
test textarea::test::test_wrapping ... ok
test textarea::test::test_wrapping_code_block ... ok
test vstack::test::test_vstack ... ok
test textarea::test::test_wrapping_fenced_block ... ok
test vstack::test::test_vstack_maximize ... ok
test textarea::test::test_wrapping_paragraphs ... ok
test result: ok. 21 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle
running 1 test
test crates/radicle/src/cob/patch/encoding/review.rs - cob::patch::encoding::review::Review (line 23) ... ignored
test result: ok. 0 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_cli
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_cli_test
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_cob
running 1 test
test crates/radicle-cob/src/backend/stable.rs - backend::stable::with_advanced_timestamp (line 56) ... ignored
test result: ok. 0 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_core
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_crypto
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_dag
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_fetch
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_git_metadata
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_git_ref_format
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_localtime
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_node
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_oid
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_protocol
running 6 tests
test crates/radicle-protocol/src/bounded.rs - bounded::BoundedVec<T,N>::max (line 96) ... ok
test crates/radicle-protocol/src/bounded.rs - bounded::BoundedVec<T,N>::truncate (line 50) ... ok
test crates/radicle-protocol/src/bounded.rs - bounded::BoundedVec<T,N>::push (line 122) ... ok
test crates/radicle-protocol/src/bounded.rs - bounded::BoundedVec<T,N>::collect_from (line 30) ... ok
test crates/radicle-protocol/src/bounded.rs - bounded::BoundedVec<T,N>::unbound (line 149) ... ok
test crates/radicle-protocol/src/bounded.rs - bounded::BoundedVec<T,N>::with_capacity (line 66) ... ok
test result: ok. 6 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.33s
Doc-tests radicle_signals
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_ssh
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_systemd
running 0 tests
test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
Doc-tests radicle_term
running 1 test
test crates/radicle-term/src/table.rs - table (line 4) ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.16s
error: 1 target failed:
`-p radicle-cli --test commands`
Exit code: 101
{
"response": "finished",
"result": "failure"
}