Compare commits

..

208 Commits

Author SHA1 Message Date
Karolin Varner
6b3bebb9b9 fix: CI issues under Darwin 2024-12-08 13:57:43 +01:00
Jacek Galowicz
9948169127 rp systemd unit file: introduce and test 2024-12-08 09:43:35 +01:00
Jacek Galowicz
eced56bd70 rp: Add exchange-config command
This is similar to `rosenpass exchange`/`rosenpass exchange-config`.
It is, however, slightly different in that the configuration file models the `rp exchange` command line.
2024-12-08 09:43:35 +01:00
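As a rough illustration of the idea that a configuration file can model the `rp exchange` command line, here is a hedged sketch; the struct and field names are assumptions for illustration, not the actual `rp` schema:

```rust
// Hypothetical sketch: a config struct that mirrors the `rp exchange`
// command line, so an `exchange-config <file>` subcommand can drive the
// same code path. Field names are illustrative only.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ExchangeConfig {
    private_keys_dir: String,   // corresponds to the positional key directory
    dev: Option<String>,        // `dev <interface>` on the command line
    listen: Option<String>,     // `listen <addr>:<port>`
    peers: Vec<PeerConfig>,
}

#[derive(Debug, Deserialize)]
struct PeerConfig {
    public_keys_dir: String,
    endpoint: Option<String>,
    allowed_ips: Vec<String>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Parse a config file whose shape mirrors the CLI arguments.
    let raw = std::fs::read_to_string("exchange.toml")?;
    let cfg: ExchangeConfig = toml::from_str(&raw)?;
    println!("{cfg:?}");
    Ok(())
}
```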
Jacek Galowicz
df1e195b5d rp: set allowed-ips as routes
Prepare the `rp` app for a systemd unit file that sets up WireGuard connections.
2024-12-08 09:43:35 +01:00
Jacek Galowicz
e1e280c4c5 rp: Add ip parameter to exchange command
Prepare the `rp` app for a systemd unit that sets up a WireGuard connection.
2024-12-08 09:43:35 +01:00
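A minimal sketch of what an `--ip`-style parameter could look like with clap's derive API (clap is a dependency seen in the bumps below); the option and field names are illustrative, not taken from the `rp` source:

```rust
// Hypothetical sketch of adding an `--ip` parameter to an `exchange`-style
// subcommand so a systemd unit can configure the interface address.
use clap::Parser;

#[derive(Parser, Debug)]
struct ExchangeArgs {
    /// Address to assign to the WireGuard interface (illustrative).
    #[arg(long)]
    ip: Option<String>,

    /// Remaining `exchange` arguments (placeholder for the real grammar).
    rest: Vec<String>,
}

fn main() {
    let args = ExchangeArgs::parse();
    if let Some(ip) = &args.ip {
        // The real implementation would assign this address to the interface.
        println!("would run: ip address add {ip} dev <wg-interface>");
    }
    println!("rest: {:?}", args.rest);
}
```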
Jacek Galowicz
06cd178977 rosenpass systemd unit file: introduce and test 2024-12-08 09:43:35 +01:00
dependabot[bot]
e2c46f1ff0 build(deps): bump clap from 4.5.21 to 4.5.22
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.21 to 4.5.22.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.21...clap_complete-v4.5.22)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-04 11:40:25 +01:00
Karolin Varner
c8b804b39d build(deps): bump tokio from 1.41.1 to 1.42.0 (#517) 2024-12-04 11:40:14 +01:00
dependabot[bot]
e56798b04c build(deps): bump tokio from 1.41.1 to 1.42.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.1 to 1.42.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.1...tokio-1.42.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-04 11:40:04 +01:00
Karolin Varner
b76d18e3c8 build(deps): bump anyhow from 1.0.93 to 1.0.94 (#516) 2024-12-04 11:39:54 +01:00
dependabot[bot]
a9792c3143 build(deps): bump anyhow from 1.0.93 to 1.0.94
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.93 to 1.0.94.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.93...1.0.94)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-03 23:26:40 +00:00
Karolin Varner
cb2c1c12ee Dev/karo/docs and unit tests (#512)
2024-11-30 16:30:48 +01:00
Karolin Varner
08514d69e5 feat: Expand Rosenpass unix socket API documentation 2024-11-30 16:17:56 +01:00
Karolin Varner
baf2d68070 build(deps): bump mio from 1.0.2 to 1.0.3 (#511) 2024-11-30 14:34:43 +01:00
dependabot[bot]
cc7f7a4b4d build(deps): bump mio from 1.0.2 to 1.0.3
Bumps [mio](https://github.com/tokio-rs/mio) from 1.0.2 to 1.0.3.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v1.0.2...v1.0.3)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-30 14:34:20 +01:00
Karolin Varner
5b701631b5 build(deps): bump libc from 0.2.166 to 0.2.167 (#510) 2024-11-30 14:34:12 +01:00
dependabot[bot]
402158b706 build(deps): bump libc from 0.2.166 to 0.2.167
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.166 to 0.2.167.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.167/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.166...0.2.167)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-30 14:34:05 +01:00
Karolin Varner
e95636bf70 build(deps): bump postcard from 1.1.0 to 1.1.1 (#509) 2024-11-30 14:33:55 +01:00
dependabot[bot]
744e2bcf3e build(deps): bump postcard from 1.1.0 to 1.1.1
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.1.0 to 1.1.1.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/postcard/v1.1.0...postcard/v1.1.1)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-29 23:31:34 +00:00
Karolin Varner
8c82ca18fb fix: API should be disabled by default
2024-11-29 18:42:15 +01:00
Karolin Varner
208e79c3a7 build(deps): bump postcard from 1.0.10 to 1.1.0 (#507)
2024-11-29 08:50:55 +01:00
dependabot[bot]
6ee023c9e9 build(deps): bump postcard from 1.0.10 to 1.1.0
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.0.10 to 1.1.0.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/v1.0.10...postcard/v1.1.0)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 23:12:57 +00:00
Karolin Varner
6f75d34934 build(deps): bump tempfile from 3.13.0 to 3.14.0 (#489)
2024-11-28 21:13:59 +01:00
dependabot[bot]
6b364a35dc build(deps): bump tempfile from 3.13.0 to 3.14.0
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.13.0 to 3.14.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.13.0...v3.14.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:13:46 +01:00
Karolin Varner
2b6d10f0aa build(deps): bump clap from 4.5.20 to 4.5.21 (#494) 2024-11-28 21:13:37 +01:00
dependabot[bot]
cb380b89d1 build(deps): bump clap from 4.5.20 to 4.5.21
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.20 to 4.5.21.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.20...clap_complete-v4.5.21)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:13:12 +01:00
Karolin Varner
f703933e7f build(deps): bump clap_complete from 4.5.37 to 4.5.38 (#495) 2024-11-28 21:13:05 +01:00
dependabot[bot]
d02a5d2eb7 build(deps): bump clap_complete from 4.5.37 to 4.5.38
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.37 to 4.5.38.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.37...clap_complete-v4.5.38)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:12:49 +01:00
Karolin Varner
c7273e6f88 build(deps): bump codecov/codecov-action from 4 to 5 (#497) 2024-11-28 21:12:21 +01:00
dependabot[bot]
85eca49a5b build(deps): bump codecov/codecov-action from 4 to 5
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:11:19 +01:00
dependabot[bot]
9943f1336b build(deps): bump rustix from 0.38.40 to 0.38.41
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.40 to 0.38.41.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.40...v0.38.41)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-28 21:11:08 +01:00
Karolin Varner
bb2a0732cc refactor: replace rustix with std where possible
Merge pull request #490 from mkroening/rustix
2024-11-28 21:01:34 +01:00
Karolin Varner
1275b992a0 Merge branch 'main' into rustix 2024-11-28 21:01:07 +01:00
Karolin Varner
196767964f Fix docstring warnings
Merge pull request #479 from aparcar/docstrings
2024-11-28 20:59:53 +01:00
Karolin Varner
d4e9770fe6 Merge branch 'main' into docstrings 2024-11-28 20:59:31 +01:00
Karolin Varner
8e2f6991d1 Rename mio.connection.shoud_close (typo in function name)
Merge pull request #501 from PD3P/mio-connection-typo
2024-11-28 20:58:07 +01:00
dependabot[bot]
af0db88939 build(deps): bump libc from 0.2.165 to 0.2.166 (#505)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.165 to 0.2.166.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.166/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.165...0.2.166)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-28 19:00:49 +01:00
dependabot[bot]
6601742903 build(deps): bump libc from 0.2.162 to 0.2.165 (#503)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.162 to 0.2.165.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.165/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.162...0.2.165)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-26 20:54:02 +01:00
Philipp Dresselmann
9436281350 Docs: Add cargo test arguments in CONTRIBUTING.md (#502)
Presumably, this should match the command used in the CI workflow and not skip any features?
2024-11-25 14:52:51 +01:00
Philipp Dresselmann
f3399907b9 chore(API): Rename mio.connection.shoud_close
Technically a breaking change... Hopefully that's not a problem here?
2024-11-22 09:43:33 +01:00
dependabot[bot]
0cea8c5eff build(deps): bump rustix from 0.38.39 to 0.38.40
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.39 to 0.38.40.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.39...v0.38.40)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 11:33:23 +01:00
dependabot[bot]
5b3f4da23e build(deps): bump serde from 1.0.214 to 1.0.215
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.214 to 1.0.215.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.214...v1.0.215)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 11:33:08 +01:00
dependabot[bot]
c13badb697 build(deps): bump thiserror from 1.0.68 to 1.0.69
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.68 to 1.0.69.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.68...1.0.69)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-13 14:06:25 +01:00
dependabot[bot]
cc7757a0db build(deps): bump serial_test from 3.1.1 to 3.2.0
Bumps [serial_test](https://github.com/palfrey/serial_test) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v3.1.1...v3.2.0)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-13 11:48:07 +01:00
Paul Spooren
d267916445 docs(cli): Improve help text
This commit does multiple things at once to improve the user experience:
* Always start with an upper case letter, no mixing
* Hide deprecated `keygen` command, it still works if called
* Extend and rework some documentation texts
* Drop false `log_level` text, it contains a logic error
* Wrap all documentation at 80 chars

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-13 11:36:16 +01:00
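A hedged sketch of the help-text conventions this commit describes (capitalized summaries, a hidden but still working deprecated `keygen` alias), using clap's derive API; the subcommand shapes are illustrative only:

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
#[command(about = "Post-quantum key exchange tooling (illustrative)")]
struct Cli {
    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand)]
enum Commands {
    /// Generate a new keypair and write it to the given paths.
    GenKeys { public_key: String, secret_key: String },

    /// Deprecated alias; hidden from `--help`, but still callable.
    #[command(hide = true)]
    Keygen { public_key: String, secret_key: String },
}

fn main() {
    let _cli = Cli::parse();
}
```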
Martin Kröning
03bc89a582 build(rosenpass): only enable rustix for experimental API
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-11-11 11:14:33 +01:00
Martin Kröning
19b31bcdf0 refactor(mio): close FDs via std instead of rustix
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-11-11 11:14:33 +01:00
Martin Kröning
939d216027 refactor: import FD traits from std instead of rustix
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-11-11 11:14:33 +01:00
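A minimal sketch of the refactor direction described in these three commits, assuming a Unix target: rely on std's FD traits and RAII instead of rustix calls.

```rust
// Use std's FD traits and ownership types; dropping an OwnedFd closes the
// descriptor, so no explicit rustix close call is needed.
use std::fs::File;
use std::os::fd::{AsRawFd, OwnedFd};

fn main() -> std::io::Result<()> {
    let file = File::open("/dev/null")?;
    println!("raw fd: {}", file.as_raw_fd());

    // Transfer ownership of the descriptor to std; the FD is closed on drop.
    let owned: OwnedFd = file.into();
    drop(owned);
    Ok(())
}
```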
Paul Spooren
05fbaff2dc docs(to): fix docstrings and add examples
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 12:49:10 +01:00
Paul Spooren
1d1c0e9da7 chore(examples): add examples to docs
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
e19b724673 docs(typenum): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
f879ad5020 docs(result): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
29e7087cb5 docs(mem): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
637a08d222 docs(io): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
6416c247f4 docs(fd): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
4b3b7e41e4 docs(util): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
325fb915f0 docs(result): add docstring and examples
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
43cb0c09c5 docs(length_prefix_encoding): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
0836a2eb28 docs(zerocopy): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
ca7df013d5 docs(option): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
1209d68718 docs(zeroize): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
Paul Spooren
8806494899 docs(mio): fix docstring warnings
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-08 11:17:43 +01:00
dependabot[bot]
582d27351a build(deps): bump libfuzzer-sys from 0.4.7 to 0.4.8
Bumps [libfuzzer-sys](https://github.com/rust-fuzz/libfuzzer) from 0.4.7 to 0.4.8.
- [Changelog](https://github.com/rust-fuzz/libfuzzer/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-fuzz/libfuzzer/compare/0.4.7...0.4.8)

---
updated-dependencies:
- dependency-name: libfuzzer-sys
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 10:50:37 +01:00
dependabot[bot]
61136d79eb build(deps): bump tokio from 1.41.0 to 1.41.1
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.41.0 to 1.41.1.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.41.0...tokio-1.41.1)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 10:50:27 +01:00
dependabot[bot]
71bd406201 build(deps): bump libc from 0.2.161 to 0.2.162
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.161 to 0.2.162.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.162/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.161...0.2.162)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-08 10:50:17 +01:00
Paul Spooren
ce63cf534a Merge pull request #485 from rosenpass/dependabot/github_actions/actions/checkout-4
build(deps): bump actions/checkout from 3 to 4
2024-11-08 10:47:58 +01:00
dependabot[bot]
d3ff19bdb9 build(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-07 23:45:49 +00:00
Paul Spooren
3b6d0822d6 Merge pull request #468 from aparcar/hello-config
2024-11-07 15:14:00 +01:00
Paul Spooren
533afea129 Merge pull request #453 from aparcar/boot_race 2024-11-07 15:13:38 +01:00
Paul Spooren
da5b281b96 ci: add regression test for boot race condition
If two instances start up at the same time, they end up with different
keys on both ends. Test this with different delays of 2 (working), 1
(flaky) and 0 (broken) seconds.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:38:31 +01:00
Paul Spooren
b9e873e534 feat(config): Implement todos from validate function
Readability of public/secret keys can be checked by simply loading the
key and thereby also checking that it's actually valid.

A user should either define `key_out` or a valid WireGuard peer (made of
`device` and `peer`). If neither is defined, let the user know that this
function will never do any good.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:34:57 +01:00
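A hedged sketch of the validation rules this commit message describes; the types, field names, and error strings are illustrative, not the actual rosenpass config API:

```rust
use std::path::PathBuf;

// Illustrative peer configuration, not the real rosenpass schema.
struct Peer {
    key_out: Option<PathBuf>, // write the exchanged key here, or ...
    device: Option<String>,   // ... hand it to this WireGuard device
    peer: Option<String>,     //     for this WireGuard peer
    public_key: PathBuf,
}

fn validate_peer(p: &Peer) -> Result<(), String> {
    // Check readability by simply loading the key file.
    std::fs::read(&p.public_key)
        .map_err(|e| format!("cannot read public key {:?}: {e}", p.public_key))?;

    // Either `key_out` or a full WireGuard peer (`device` + `peer`) must be
    // set, otherwise the exchanged key would go nowhere.
    let has_wg_peer = p.device.is_some() && p.peer.is_some();
    if p.key_out.is_none() && !has_wg_peer {
        return Err("peer defines neither key_out nor a WireGuard device/peer".to_string());
    }
    Ok(())
}

fn main() {
    let peer = Peer {
        key_out: Some(PathBuf::from("/tmp/out.key")),
        device: None,
        peer: None,
        public_key: PathBuf::from("/etc/rosenpass/peer.pub"),
    };
    match validate_peer(&peer) {
        Ok(()) => println!("peer config looks sane"),
        Err(e) => eprintln!("invalid peer config: {e}"),
    }
}
```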
dependabot[bot]
a3b339b180 build(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-07 14:33:23 +01:00
Paul Spooren
b4347c1382 feat(cli): Print downstream error of config validation
The incredibly helpful error message would never reach the end user.
Attach it to the upper-layer print to help users fix the issues.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:23:46 +01:00
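A minimal sketch of attaching the downstream validation error to the user-facing message with anyhow's context chain; `load_config` and `validate` here are placeholders, not the real functions:

```rust
use anyhow::{bail, Context, Result};

// Placeholder for the downstream validation step.
fn validate(listen: &str) -> Result<()> {
    if listen.is_empty() {
        bail!("`listen` must contain at least one address");
    }
    Ok(())
}

// Placeholder for the upper layer that reports errors to the user.
fn load_config(path: &str) -> Result<()> {
    // Without `.with_context`, only a generic message would surface; with it,
    // the downstream error is printed as part of the cause chain.
    validate("").with_context(|| format!("configuration file {path} failed validation"))?;
    Ok(())
}

fn main() {
    if let Err(e) = load_config("rosenpass.toml") {
        // `{:#}` prints the whole context chain in one line.
        eprintln!("error: {e:#}");
    }
}
```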
Paul Spooren
0745019e10 docs(cli): Create commented config file
The previous `gen-config` output contained no comments and was partly
misleading, i.e. the `pre_shared_key` is actually a path and not the
key itself. Mark things that are optional.

To keep things in sync, add a test that verifies that the configuration
is actually valid.

While at it, use 127.0.0.1 as the peer address instead of a fictitious domain,
which would break the tests.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-07 14:23:46 +01:00
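A hedged sketch of the kind of round-trip check the commit mentions: parse a commented example config and make sure it stays valid. The field names are assumptions, not necessarily the real schema:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ExampleConfig {
    /// Optional path to a pre-shared key file (a path, not the key itself).
    pre_shared_key: Option<String>,
    listen: Vec<String>,
}

fn main() {
    // A commented example in the spirit the commit describes.
    let example = r#"
        # pre_shared_key = "/etc/rosenpass/preshared.key"   # optional
        listen = ["127.0.0.1:9999"]
    "#;
    let cfg: ExampleConfig = toml::from_str(example).expect("example config must stay valid");
    println!("{cfg:?}");
}
```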
dependabot[bot]
2369006342 build(deps): bump actionsx/prettier from 2 to 3
Bumps [actionsx/prettier](https://github.com/actionsx/prettier) from 2 to 3.
- [Release notes](https://github.com/actionsx/prettier/releases)
- [Commits](https://github.com/actionsx/prettier/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actionsx/prettier
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-07 14:17:32 +01:00
dependabot[bot]
0fa6176d06 build(deps): bump arbitrary from 1.3.2 to 1.4.1
Bumps [arbitrary](https://github.com/rust-fuzz/arbitrary) from 1.3.2 to 1.4.1.
- [Changelog](https://github.com/rust-fuzz/arbitrary/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-fuzz/arbitrary/compare/v1.3.2...v1.4.1)

---
updated-dependencies:
- dependency-name: arbitrary
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 16:17:02 +01:00
dependabot[bot]
22bdeaf8f1 build(deps): bump anyhow from 1.0.91 to 1.0.93
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.91 to 1.0.93.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.91...1.0.93)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:44:55 +01:00
dependabot[bot]
5731272844 build(deps): bump actions/cache from 3 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:13:43 +01:00
dependabot[bot]
bc7cef9de0 build(deps): bump peaceiris/actions-gh-pages from 3 to 4
Bumps [peaceiris/actions-gh-pages](https://github.com/peaceiris/actions-gh-pages) from 3 to 4.
- [Release notes](https://github.com/peaceiris/actions-gh-pages/releases)
- [Changelog](https://github.com/peaceiris/actions-gh-pages/blob/main/CHANGELOG.md)
- [Commits](https://github.com/peaceiris/actions-gh-pages/compare/v3...v4)

---
updated-dependencies:
- dependency-name: peaceiris/actions-gh-pages
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:13:22 +01:00
dependabot[bot]
4cdcc35c3e build(deps): bump cachix/install-nix-action from 21 to 30
Bumps [cachix/install-nix-action](https://github.com/cachix/install-nix-action) from 21 to 30.
- [Release notes](https://github.com/cachix/install-nix-action/releases)
- [Commits](https://github.com/cachix/install-nix-action/compare/v21...v30)

---
updated-dependencies:
- dependency-name: cachix/install-nix-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:12:58 +01:00
dependabot[bot]
a8f1292cbf build(deps): bump cachix/cachix-action from 12 to 15
Bumps [cachix/cachix-action](https://github.com/cachix/cachix-action) from 12 to 15.
- [Release notes](https://github.com/cachix/cachix-action/releases)
- [Commits](https://github.com/cachix/cachix-action/compare/v12...v15)

---
updated-dependencies:
- dependency-name: cachix/cachix-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:12:38 +01:00
dependabot[bot]
ae5c5ed2b4 build(deps): bump softprops/action-gh-release from 1 to 2
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 1 to 2.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](https://github.com/softprops/action-gh-release/compare/v1...v2)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 15:12:11 +01:00
Paul Spooren
c483452a6a ci(dependabot): check for GitHub action updates
We already use Dependabot for cargo updates; use it for GitHub Actions
updates, too. Right now we see warnings every now and then because Node
wants another upgrade or some checkout stuff is about to be deprecated.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-11-06 13:43:09 +01:00
dependabot[bot]
4ce331d299 build(deps): bump serde from 1.0.213 to 1.0.214
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.213 to 1.0.214.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.213...v1.0.214)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 13:29:18 +01:00
dependabot[bot]
d81eb7e2ed build(deps): bump thiserror from 1.0.65 to 1.0.68
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.65 to 1.0.68.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.65...1.0.68)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 13:22:56 +01:00
dependabot[bot]
61043500ba build(deps): bump rustix from 0.38.37 to 0.38.39
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.37 to 0.38.39.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.37...v0.38.39)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 13:22:15 +01:00
dependabot[bot]
9c4752559d build(deps): bump clap_complete from 4.5.35 to 4.5.37
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.35 to 4.5.37.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.35...clap_complete-v4.5.37)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-06 10:39:38 +01:00
dependabot[bot]
6aec7acdb8 build(deps): bump clap_complete from 4.5.29 to 4.5.35
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.29 to 4.5.35.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.29...clap_complete-v4.5.35)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 12:22:13 +01:00
dependabot[bot]
337cc1b4b4 build(deps): bump clap_mangen from 0.2.23 to 0.2.24
Bumps [clap_mangen](https://github.com/clap-rs/clap) from 0.2.23 to 0.2.24.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_mangen-v0.2.23...clap_mangen-v0.2.24)

---
updated-dependencies:
- dependency-name: clap_mangen
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-28 11:18:35 +01:00
Karolin Varner
387a266a49 chore: Dependency updates
Merge branch 'dev/karo/updates'
2024-10-24 17:30:52 +02:00
dependabot[bot]
179970b905 build(deps): bump thiserror from 1.0.64 to 1.0.65
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.64 to 1.0.65.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.64...1.0.65)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:30:32 +02:00
dependabot[bot]
8b769e04c1 build(deps): bump anyhow from 1.0.89 to 1.0.91
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.89 to 1.0.91.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.89...1.0.91)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:29:48 +02:00
dependabot[bot]
810bdf5519 build(deps): bump tokio from 1.40.0 to 1.41.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.40.0 to 1.41.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.40.0...tokio-1.41.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:29:23 +02:00
dependabot[bot]
d3a666bea0 build(deps): bump serde from 1.0.210 to 1.0.213
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.210 to 1.0.213.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.210...v1.0.213)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-24 17:28:47 +02:00
Karolin Varner
2b8f780584 Unit tests & api doc
Merge pull request #458 from rosenpass/dev/karo/docs_and_unit_tests
2024-10-24 17:25:56 +02:00
Karolin Varner
6aea3c0c1f chore: Documentation and unit tests for rosenpass_util::io 2024-10-24 14:01:20 +02:00
Karolin Varner
e4fdfcae08 chore: Documentation and unit tests for rosenpass_util::functional 2024-10-24 14:01:20 +02:00
Karolin Varner
48e629fff7 feat: sideffect/mutating should take FnMut over Fn 2024-10-24 14:01:20 +02:00
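For illustration of the FnMut-over-Fn point in the commit title above, here is a minimal hedged sketch; `sideeffect` is a hypothetical stand-in, not the actual rosenpass_util::functional API.

```rust
// Hedged sketch: a side-effect helper is more flexible when it accepts FnMut
// rather than Fn, because every Fn closure is also FnMut, but only FnMut lets
// the callback mutate captured state. `sideeffect` is illustrative only.
fn sideeffect<T, F: FnMut(&T)>(value: T, mut f: F) -> T {
    f(&value); // run the side effect, then hand the value back unchanged
    value
}

fn main() {
    let mut seen = Vec::new();
    // A mutating closure like this would be rejected by an `F: Fn(&T)` bound.
    let x = sideeffect(42, |v| seen.push(*v));
    assert_eq!(x, 42);
    assert_eq!(seen, vec![42]);
}
```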
Karolin Varner
6321bb36fc chore: Formatting 2024-10-24 14:01:20 +02:00
Karolin Varner
2f9ff487ba chore: Unused import 2024-10-24 14:01:20 +02:00
Karolin Varner
c0c06cd1dc chore: Wrong formatting for module doc 2024-10-24 14:01:20 +02:00
Karolin Varner
e9772effa6 chore: Documentation and unit tests for rosenpass_util::file 2024-10-24 14:01:20 +02:00
Karolin Varner
cf68f15674 chore: Documentation and unit tests for rosenpass_util::fd 2024-10-24 14:01:20 +02:00
Karolin Varner
dd5d45cdc9 chore: Documentation and unit tests for rosenpass_util::controlflow 2024-10-24 14:01:20 +02:00
Karolin Varner
17a6aed8a6 feat(cli): Automatically generate man page
Merge pull request #434 from aparcar/lil-cli-ng
2024-10-24 13:59:31 +02:00
Paul Spooren
3f9926e353 feat(cli): Automatically generate man page
Instead of using a static one, generate it via clap_mangen. To generate
the manpage run `rosenpass --generate-manpage <folder>`.

Right now clap does not support flattening of generated manpages,
meaning that each subcommand is explained in its own file. To add extra
sections to the main file `rosenpass.1`, it's rewritten after the
initial creation.

Once clap supports flattened man pages, the `generate_to` call can be
removed and all subcommands added to the `rosenpass.1` file.

This implementation allows downstream manpage generation to stay
unchanged even after switching from multiple manpages to a flattened
one.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-22 10:06:47 +02:00
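For context on the clap_mangen approach mentioned above, a minimal hedged sketch of generating a man page from a clap-derived CLI follows. The `Cli` struct, the `--generate-manpage` flag wiring, and the output file name are illustrative assumptions; the actual rosenpass implementation additionally uses `generate_to` for subcommands and post-processes `rosenpass.1`, as described in the commit message.

```rust
// Hedged sketch, assuming the `clap` (derive feature) and `clap_mangen` crates.
// Not the rosenpass implementation; names and paths are illustrative.
use clap::{CommandFactory, Parser};
use std::fs;
use std::path::Path;

#[derive(Parser)]
#[command(name = "example-tool", about = "Demo CLI")]
struct Cli {
    /// Where to write the generated man page (illustrative flag)
    #[arg(long)]
    generate_manpage: Option<std::path::PathBuf>,
}

fn write_man_page(out_dir: &Path) -> std::io::Result<()> {
    fs::create_dir_all(out_dir)?;
    // Render the top-level command; subcommands would each get their own file,
    // which is the "flattening" limitation the commit message refers to.
    let cmd = Cli::command();
    let mut buf: Vec<u8> = Vec::new();
    clap_mangen::Man::new(cmd).render(&mut buf)?;
    fs::write(out_dir.join("example-tool.1"), buf)
}

fn main() -> std::io::Result<()> {
    let cli = Cli::parse();
    if let Some(dir) = cli.generate_manpage.as_deref() {
        write_man_page(dir)?;
    }
    Ok(())
}
```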
dependabot[bot]
f4ab2ac891 build(deps): bump libc from 0.2.159 to 0.2.161 (#449)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.159 to 0.2.161.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.161/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.159...0.2.161)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-18 19:09:05 +02:00
Karolin Varner
de51c1005f Merge pull request #447 from rosenpass/dev/update-cargolock
chore: update Cargo.lock
2024-10-14 15:43:35 +02:00
wucke13
1e2cd589b1 chore: update Cargo.lock 2024-10-14 15:42:40 +02:00
Karolin Varner
02bc485d97 Merge pull request #446 from rosenpass/dev/karo/docs_and_unit_tests
Documentation and unit tests
2024-10-14 15:42:13 +02:00
Karolin Varner
3ae52b9824 chore: Documentation and unit tests for crate rosenpass-util::build 2024-10-13 19:22:14 +02:00
Karolin Varner
cbf361206b chore: Documentation and unit tests for crate rosenpass-util::b64 2024-10-13 17:21:30 +02:00
Karolin Varner
398da99df2 chore: Documentation and unit tests for crate rosenpass-constant-time 2024-10-13 16:58:20 +02:00
Karolin Varner
acfbb67abe chore: Documentation and unit tests for crate rosenpass-oqs 2024-10-13 16:34:50 +02:00
Karolin Varner
c407b8b006 chore(rosenpass): Set version to 0.3.0-dev
Merge pull request #436 from aparcar/0.2.2-dev
2024-10-10 11:36:14 +02:00
Paul Spooren
bc7213d8c0 chore(rosenpass): Set version to 0.3.0-dev
The latest release left the `main` branch on 0.2.1

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-10 11:35:33 +02:00
dependabot[bot]
69e68aad2c build(deps): bump clap from 4.5.19 to 4.5.20
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.19 to 4.5.20.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.19...clap_complete-v4.5.20)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-10 11:34:22 +02:00
Karolin Varner
9b07f5803b fix(rosenpass): fix compilation without API
Merge pull request #443 from mkroening/no-api
2024-10-10 11:33:33 +02:00
Martin Kröning
5ce572b739 fix(rosenpass): fix compilation without API
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
2024-10-09 12:52:18 +02:00
wucke13
d9f8fa0092 refactor(flake.nix): externalize pkgs, add overlay
This splits the complexity of the `flake.nix` into multiple files. At the
same time, naersk is removed, causing much slower builds for cross-compiled
packages and static builds, at the benefit of simpler nix expressions and
generally better cross compilation compatibility.

This partially addresses the points mentioned in #412.
2024-10-08 17:30:08 +02:00
dependabot[bot]
a5208795f6 build(deps): bump futures from 0.3.30 to 0.3.31
Bumps [futures](https://github.com/rust-lang/futures-rs) from 0.3.30 to 0.3.31.
- [Release notes](https://github.com/rust-lang/futures-rs/releases)
- [Changelog](https://github.com/rust-lang/futures-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/futures-rs/compare/0.3.30...0.3.31)

---
updated-dependencies:
- dependency-name: futures
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-08 14:27:46 +02:00
Karolin Varner
0959148305 ci: add concurrency option to skip in progress
Merge pull request #432 from aparcar/con
2024-10-03 16:48:02 +02:00
Paul Spooren
f2bc3a8b64 ci: Rename regression workflow to "Regression"
No magic here, this is likely a copy&paste error. Problem is that one
workflow being called "QC" (regressions.yml) cancels out the other "QC"
(qc.yaml).

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-03 16:47:49 +02:00
Paul Spooren
06529df2c0 ci: add concurrency option to skip in progress
Instead of running outdated CI jobs, skip them automatically.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-03 16:47:49 +02:00
Karolin Varner
128c77f77a ci: Skip Nix build of aarch64 since it takes forever
Merge pull request #433 from aparcar/no-arm-ci
2024-10-03 16:47:09 +02:00
Karolin Varner
501cc9bb05 Merge branch 'main' into no-arm-ci 2024-10-03 16:46:36 +02:00
dependabot[bot]
9ad5277a90 build(deps): bump clap from 4.5.18 to 4.5.19
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.18 to 4.5.19.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.18...clap_complete-v4.5.19)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-02 18:45:26 +02:00
Paul Spooren
0cbcaeaf98 ci: Skip Nix build of aarch64 since it takes forever
It takes more than 6 hours, which means the CI fails. Drop it for now and
hope to have it enabled again later.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-01 14:18:50 +02:00
Paul Spooren
687ef3f6f8 docs: Correct protocol retransmission unit/vars
Those are seconds not ms, also it's BEGIN not BEG.

While at it, drop the variable `RETRANSMIT_ABORT`, which was never used
anywhere in the code, and drop an outdated TODO comment.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-01 14:08:44 +02:00
Paul Spooren
b0706354d3 chore: Format all Cargo.toml files
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-10-01 11:22:45 +01:00
dependabot[bot]
c1e86daec8 build(deps): bump libc from 0.2.158 to 0.2.159 (#429)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.158 to 0.2.159.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.159/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.158...0.2.159)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-25 20:02:31 +02:00
dependabot[bot]
18a286e688 build(deps): bump thiserror from 1.0.63 to 1.0.64 (#428)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.63 to 1.0.64.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.63...1.0.64)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-24 17:28:42 +02:00
dependabot[bot]
cb92313391 build(deps): bump clap from 4.5.17 to 4.5.18 (#427)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.17 to 4.5.18.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.17...clap_complete-v4.5.18)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-21 09:26:54 +02:00
dependabot[bot]
5cd30b4c13 build(deps): bump anyhow from 1.0.88 to 1.0.89 (#425)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.88 to 1.0.89.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.88...1.0.89)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-17 17:41:27 +02:00
dependabot[bot]
76d8d38744 build(deps): bump rustix from 0.38.36 to 0.38.37
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.36 to 0.38.37.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Changelog](https://github.com/bytecodealliance/rustix/blob/main/CHANGELOG.md)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.36...v0.38.37)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 19:01:42 +02:00
dependabot[bot]
f63f0bbc2e build(deps): bump anyhow from 1.0.87 to 1.0.88
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.87 to 1.0.88.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.87...1.0.88)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-12 19:01:31 +02:00
Karolin Varner
4a449e6502 chore: drop copy & paste doc error in protocol.rs
Merge pull request #422 from aparcar/cos1
2024-09-10 18:02:49 +02:00
Karolin Varner
1e6d2df004 Merge branch 'main' into cos1 2024-09-10 18:02:25 +02:00
Paul Spooren
3fa9aadda2 chore: drop copy & paste doc error in protocol.rs
There seems to be a paste typo in the docs; drop it to reduce confusion.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-09-10 12:39:57 +02:00
dependabot[bot]
0c79a4ce95 build(deps): bump serde from 1.0.209 to 1.0.210 (#420)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.209 to 1.0.210.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.209...v1.0.210)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 16:22:00 +02:00
dependabot[bot]
036960b5b1 build(deps): bump anyhow from 1.0.86 to 1.0.87 (#421)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.86 to 1.0.87.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.86...1.0.87)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-09 15:01:01 +02:00
dependabot[bot]
e7258849cb build(deps): bump rustix from 0.38.35 to 0.38.36 (#419)
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.35 to 0.38.36.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.35...v0.38.36)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-06 09:53:51 +02:00
dependabot[bot]
8c88f68990 build(deps): bump clap from 4.5.16 to 4.5.17
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.16 to 4.5.17.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.16...clap_complete-v4.5.17)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-05 10:24:56 +02:00
dependabot[bot]
cf20536576 build(deps): bump tokio from 1.39.3 to 1.40.0
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.3 to 1.40.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.3...tokio-1.40.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-31 11:30:00 +02:00
dependabot[bot]
72e18e3ec2 build(deps): bump derive_builder from 0.20.0 to 0.20.1
Bumps [derive_builder](https://github.com/colin-kiegel/rust-derive-builder) from 0.20.0 to 0.20.1.
- [Release notes](https://github.com/colin-kiegel/rust-derive-builder/releases)
- [Commits](https://github.com/colin-kiegel/rust-derive-builder/compare/v0.20.0...v0.20.1)

---
updated-dependencies:
- dependency-name: derive_builder
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-30 08:56:28 +02:00
dependabot[bot]
6040156a0e build(deps): bump rustix from 0.38.34 to 0.38.35 (#414)
Bumps [rustix](https://github.com/bytecodealliance/rustix) from 0.38.34 to 0.38.35.
- [Release notes](https://github.com/bytecodealliance/rustix/releases)
- [Commits](https://github.com/bytecodealliance/rustix/compare/v0.38.34...v0.38.35)

---
updated-dependencies:
- dependency-name: rustix
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-28 20:39:53 +02:00
dependabot[bot]
d3b318b413 build(deps): bump stacker from 0.1.16 to 0.1.17 (#415)
Bumps [stacker](https://github.com/rust-lang/stacker) from 0.1.16 to 0.1.17.
- [Commits](https://github.com/rust-lang/stacker/compare/stacker-0.1.16...stacker-0.1.17)

---
updated-dependencies:
- dependency-name: stacker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-28 20:39:25 +02:00
dependabot[bot]
3a49345138 build(deps): bump serde from 1.0.208 to 1.0.209 (#413)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.208 to 1.0.209.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.208...v1.0.209)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-27 20:02:59 +02:00
dependabot[bot]
4ec7813259 build(deps): bump stacker from 0.1.15 to 0.1.16
Bumps [stacker](https://github.com/rust-lang/stacker) from 0.1.15 to 0.1.16.
- [Commits](https://github.com/rust-lang/stacker/compare/stacker-0.1.15...stacker-0.1.16)

---
updated-dependencies:
- dependency-name: stacker
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-22 16:54:13 +02:00
dependabot[bot]
db31da14d3 build(deps): bump postcard from 1.0.9 to 1.0.10
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.0.9 to 1.0.10.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/v1.0.9...v1.0.10)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-22 16:54:00 +02:00
Karolin Varner
4c20efc8a8 Merge: fix(API): Tests failing on mac
Merge pull request #409 from rosenpass/dev/karo/macos-fix
2024-08-21 13:46:53 +02:00
Karolin Varner
c81d484294 fix(API): Tests failing on mac 2024-08-21 12:48:45 +02:00
dependabot[bot]
cc578169d6 build(deps): bump postcard from 1.0.8 to 1.0.9 (#408)
Bumps [postcard](https://github.com/jamesmunns/postcard) from 1.0.8 to 1.0.9.
- [Release notes](https://github.com/jamesmunns/postcard/releases)
- [Changelog](https://github.com/jamesmunns/postcard/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jamesmunns/postcard/compare/v1.0.8...v1.0.9)

---
updated-dependencies:
- dependency-name: postcard
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-21 08:12:56 +02:00
dependabot[bot]
91527702f1 build(deps): bump tokio from 1.39.2 to 1.39.3 (#407)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.2 to 1.39.3.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.2...tokio-1.39.3)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-21 08:12:39 +02:00
dependabot[bot]
0179f1c673 build(deps): bump libc from 0.2.156 to 0.2.158 (#406)
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.156 to 0.2.158.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.158/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.156...0.2.158)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-21 08:12:05 +02:00
Karolin Varner
2238919657 Merge: fd/time: add tests, docs, cleanups
Merge pull request #405 from aparcar/fd-tests-cleanup
2024-08-19 17:52:42 +02:00
Paul Spooren
d913e19883 test: add tests for controlflow
While at it, fix the label handling and fix a typo in continue_if, where
a `break` had incorrectly replaced a `continue`.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:38 +02:00
Paul Spooren
1555d0897b feat(ord): drop obsolete RTX_BUFFER_SIZE and usize_max
The RTX_BUFFER_SIZE function is not used anywhere in the code, and once it
is dropped, usize_max (the const version of max()) becomes obsolete, too.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:37 +02:00
Paul Spooren
abdbf8f3da feat(util/time): cleanup, document and add tests
Drop the unused `dur` function; it is not referenced anywhere in the code.

Document both Timebase and Timebase::now()

Add tests

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:16 +02:00
Paul Spooren
9f78531979 tests: cleanup fd.rs tests
Trigger the internal assert of owned.rs instead of writing our own. To
correctly test it, use the `should_panic` macro.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-19 17:24:16 +02:00
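A minimal sketch of the `should_panic` pattern referenced above, using a made-up `CheckedFd` wrapper rather than the real owned.rs type: the test triggers the type's own internal assert instead of duplicating the check in the test.

```rust
// Hedged sketch: `CheckedFd` and the negative fd value are illustrative
// assumptions, not the rosenpass fd-handling code.
struct CheckedFd(i32);

impl CheckedFd {
    fn new(fd: i32) -> Self {
        // The "internal assert" the test wants to trigger.
        assert!(fd >= 0, "file descriptor must be non-negative");
        CheckedFd(fd)
    }
}

fn main() {
    let fd = CheckedFd::new(0);
    println!("constructed CheckedFd({})", fd.0);
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic(expected = "file descriptor must be non-negative")]
    fn rejects_negative_fd() {
        // Constructing with an invalid fd should hit the internal assert.
        let _ = CheckedFd::new(-1);
    }
}
```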
Karolin Varner
624d8d2f44 Merge: API: Close connections after errors & use mio::Token based polling
Merge pull request #404 from rosenpass/dev/karo/api_remove_connection
2024-08-19 15:03:46 +02:00
Karolin Varner
9bbf9433e6 fix(API): Be polite and kill child processes in api integration tests 2024-08-19 00:31:01 +02:00
Karolin Varner
77760d71df feat(API): Use mio::Token based polling
Avoid polling every single IO source to collect events; poll only the
specific IO sources mio tells us about.
2024-08-19 00:31:01 +02:00
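A minimal, self-contained sketch of token-based polling with mio, illustrating the idea behind this change; the `TcpListener`, token constant, and address are assumptions for the example (mio with the "net" and "os-poll" features), not the rosenpass API server code.

```rust
// Hedged sketch: register each IO source under a distinct Token and only
// touch the sources whose tokens show up in the event set.
use std::io;

use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};

const LISTENER: Token = Token(0);

fn main() -> io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(64);

    let addr = "127.0.0.1:0".parse().unwrap();
    let mut listener = TcpListener::bind(addr)?;
    poll.registry()
        .register(&mut listener, LISTENER, Interest::READABLE)?;

    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            // Dispatch on the token instead of probing every source.
            match event.token() {
                LISTENER => {
                    while let Ok((_conn, peer)) = listener.accept() {
                        println!("accepted connection from {peer}");
                    }
                }
                _ => { /* other registered sources would be handled here */ }
            }
        }
    }
}
```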
Karolin Varner
53e560191f feat(API): Close API connections after error 2024-08-19 00:31:01 +02:00
Karolin Varner
93cd266c68 Merge API Endpoint: AddPskBroker
Merge pull request #403 from rosenpass/dev/karo/api-add-psk-broker
2024-08-17 22:25:21 +02:00
Karolin Varner
594f894206 feat(API): AddPskBroker endpoint 2024-08-17 15:30:10 +02:00
Karolin Varner
a831e01a5c chore: Utilities to check for unix domain stream sockets 2024-08-17 15:30:10 +02:00
dependabot[bot]
0884641d64 build(deps): bump libc from 0.2.155 to 0.2.156
Bumps [libc](https://github.com/rust-lang/libc) from 0.2.155 to 0.2.156.
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.156/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.155...0.2.156)

---
updated-dependencies:
- dependency-name: libc
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-17 10:54:06 +02:00
dependabot[bot]
ae85d0ed2b build(deps): bump clap from 4.5.15 to 4.5.16
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.15 to 4.5.16.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.15...clap_complete-v4.5.16)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-16 17:28:51 +02:00
Karolin Varner
163f66f20e Merge – API Feature: Adding listen sockets
Merge pull request #395 from rosenpass/dev/karo/api-add-listen-socket
2024-08-16 17:16:44 +02:00
Paul Spooren
3caff91515 rosenpass: fallback for empty api section in config
The [api] section is newly added and causes existing installations to
break since they lack the configuration options. Instead, use a serde
default function.

Signed-off-by: Paul Spooren <mail@aparcar.org>
Co-authored-by: Karolin Varner <karo@cupdev.net>
2024-08-16 14:37:42 +02:00
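A minimal sketch of the serde-default fallback described above; the struct layout is an assumption modelled on the `[api]` keys visible in the configs added later in this diff (`listen_path`, `listen_fd`, `stream_fd`), not the actual rosenpass config types. It assumes the `serde` (with `derive`) and `toml` crates.

```rust
// Hedged sketch: an optional [api] table falls back to an empty default when
// it is missing from an older config file.
use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
struct ApiConfig {
    #[serde(default)]
    listen_path: Vec<std::path::PathBuf>,
    #[serde(default)]
    listen_fd: Vec<i32>,
    #[serde(default)]
    stream_fd: Vec<i32>,
}

#[derive(Debug, Deserialize)]
struct Config {
    public_key: String,
    secret_key: String,
    // Older config files have no [api] section at all; `default` makes serde
    // substitute ApiConfig::default() instead of failing deserialization.
    #[serde(default)]
    api: ApiConfig,
}

fn main() {
    // A pre-API config without an [api] table still parses.
    let legacy = r#"
        public_key = "rp-a-public-key"
        secret_key = "rp-a-secret-key"
    "#;
    let cfg: Config = toml::from_str(legacy).expect("legacy config should parse");
    println!("{cfg:?}");
}
```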
Karolin Varner
24eebe29a1 feat(API): AddListenSocket endpoint 2024-08-16 14:37:42 +02:00
Karolin Varner
1d2fa7d038 feat(api): API Feature – Add server keys via API
Merge pull request #392 from rosenpass/dev/karo/api-supply-server-keys
2024-08-16 11:22:46 +02:00
Karolin Varner
edf1e774c1 feat(API): SupplyKeypair endpoint 2024-08-16 11:13:34 +02:00
Karolin Varner
7a31b57227 chore(API): Infrastructure to use endpoints with fd. passing 2024-08-16 08:39:27 +02:00
Karolin Varner
d5a8c85abe chore(API): Specifying a keypair should be opt. at startup
…so we can specify it later using the API.
2024-08-16 08:34:07 +02:00
Karolin Varner
48f7ff93e3 chore(API, AppServer): Deal with CryptoServer being uninit.
Before this, we would just raise an error.
2024-08-16 08:34:07 +02:00
Karolin Varner
5f6c36e773 chore(AppServer): Decouple AppServer from CryptoServer::timebase 2024-08-16 08:34:07 +02:00
Karolin Varner
7b3b7612cf chore(api): API should have access to AppServer
The borrow checker does not approve, hence there are many shenanigans
with extension traits.
2024-08-16 08:34:07 +02:00
Karolin Varner
c1704b1464 fix(API): Wrong response size set 2024-08-16 08:34:07 +02:00
dependabot[bot]
2785aaf783 build(deps): bump serde from 1.0.207 to 1.0.208
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.207 to 1.0.208.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.207...v1.0.208)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-16 08:30:08 +02:00
Karolin Varner
15002a74cc Merge: Experimental PSK Broker Support
Merge pull request #376 from pqcfox/feat/netlink-broker-cli

Add broker support to Rosenpass using `MioBrokerClient` (backport of dev/broker-architecture)
2024-08-16 08:26:15 +02:00
Karolin Varner
0fe2d9825b fix: Remove ineffectual broker integration test 2024-08-16 00:35:46 +02:00
Karolin Varner
ab805dae75 fix: libc & rustix are making problems in CI for unknown reasons 2024-08-16 00:35:46 +02:00
Karolin Varner
08653c3338 chore: clippy 2024-08-16 00:35:46 +02:00
Karolin Varner
520c8c6eaa chore: Feature naming scheme fully applied
experimental_broker_api -> experiment_broker_api
2024-08-15 22:47:20 +02:00
Karolin Varner
258efe408c fix: PSK broker integration did not work
This commit resolves multiple issues with the PSK broker integration.

- The manual testing procedure never actually utilized the brokers due to
  the use of the outfile option; this led to issues with the broker being
  hidden.
- The manual testing procedure omitted checking whether a PSK was
  actually sent to WireGuard entirely. This was fixed by writing an
  entirely new manual integration testing shell-script that can serve
  as a blueprint for future integration tests.
- Many parts of the PSK broker code did not report (log) errors
  accurately; added error logging
- BrokerServer set message.payload.return_code to the msg_type value,
  this led to crashes
- The PSK broker commands all omitted to set the memfd policy; this led
  to immediate crashes once secrets were actually allocated
- The MioBrokerClient IO state machine was broken and the design was
  too obtuse to debug. The state machine returned the length prefix as
  a message instead of actually interpreting it as a state machine.
  Seems the code was integrated but never actually tested. This was
  fixed by rewriting the entire state machine code using the new
  LengthPrefixEncoder/Decoder facilities. A write-buffer that was not
  being flushed is now handled by flushing the buffer in blocking-io
  mode.
2024-08-15 22:47:20 +02:00
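A minimal, self-contained sketch of length-prefix framing, the concept behind the LengthPrefixEncoder/Decoder rewrite described above: read a fixed-size length header first, then read exactly that many payload bytes, and only then hand a complete message to the caller. The `LengthPrefixDecoder` here is an illustrative stand-in, not the rosenpass-util implementation.

```rust
// Hedged sketch of an incremental decoder for u32-length-prefixed frames.
use std::io::{self, Read};

struct LengthPrefixDecoder {
    header: [u8; 4],
    header_read: usize,
    payload: Vec<u8>,
    payload_read: usize,
}

impl LengthPrefixDecoder {
    fn new() -> Self {
        Self { header: [0; 4], header_read: 0, payload: Vec::new(), payload_read: 0 }
    }

    /// Drives the decoder; returns Some(frame) once a whole frame has arrived.
    fn read_frame<R: Read>(&mut self, src: &mut R) -> io::Result<Option<Vec<u8>>> {
        // Phase 1: finish the 4-byte length header.
        while self.header_read < 4 {
            let n = src.read(&mut self.header[self.header_read..])?;
            if n == 0 {
                return Ok(None); // stream exhausted; caller retries later
            }
            self.header_read += n;
        }
        let len = u32::from_le_bytes(self.header) as usize;
        if self.payload.len() != len {
            self.payload.resize(len, 0);
        }
        // Phase 2: finish the payload; the length prefix itself is never
        // returned as a message (the bug the commit describes).
        while self.payload_read < len {
            let n = src.read(&mut self.payload[self.payload_read..])?;
            if n == 0 {
                return Ok(None);
            }
            self.payload_read += n;
        }
        let frame = std::mem::take(&mut self.payload);
        *self = Self::new();
        Ok(Some(frame))
    }
}

fn main() -> io::Result<()> {
    // 5-byte payload "hello" behind a little-endian length prefix.
    let mut wire: &[u8] = &[5, 0, 0, 0, b'h', b'e', b'l', b'l', b'o'];
    let mut dec = LengthPrefixDecoder::new();
    let frame = dec.read_frame(&mut wire)?.expect("complete frame");
    assert_eq!(frame, b"hello".to_vec());
    Ok(())
}
```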
Karolin Varner
fd0f35b279 chore: gen-key subcommand should show canonical paths 2024-08-15 22:12:02 +02:00
Karolin Varner
8808ed5dbc fix: Quiet log level should be warn 2024-08-15 09:43:25 +02:00
Karolin Varner
6fc45cab53 chore: prettier 2024-08-15 08:55:13 +02:00
Katherine Watson
1f7196e473 doc: Add documentation for testing 2024-08-14 19:49:00 -07:00
Katherine Watson
c359b87d0c chore: Convert broker interface setup to use mio's UnixStream where possible 2024-08-14 19:03:45 -07:00
Katherine Watson
355b48169b chore: Make MiobrokerClient import conditional 2024-08-14 19:03:45 -07:00
Katherine Watson
274d245bed chore: Unify enable_wg_broker and enable_broker_api features 2024-08-14 19:03:45 -07:00
Katherine Watson
065b0fcc8a feat: Add enable_wg_broker feature using MioBrokerClient
doc: Add documentation for new methods and arguments

fix: Require new psk_broker_spawn flag to use broker without extra parameters, to make all-features cargo test pass

fix: Fix MioBrokerClient buffer size to allow room for length prefix

fix: Fix remaining issue with panic
2024-08-14 19:03:44 -07:00
dependabot[bot]
191fb10663 build(deps): bump mio from 1.0.1 to 1.0.2
Bumps [mio](https://github.com/tokio-rs/mio) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v1.0.1...v1.0.2)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-14 09:28:27 +02:00
dependabot[bot]
3faa84117f build(deps): bump tokio from 1.39.1 to 1.39.2
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.39.1 to 1.39.2.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.39.1...tokio-1.39.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-13 13:14:15 +02:00
dependabot[bot]
fda75a0184 build(deps): bump serde from 1.0.204 to 1.0.207
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.204 to 1.0.207.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.204...v1.0.207)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-13 13:14:03 +02:00
dependabot[bot]
96b1f6c0d3 build(deps): bump procspawn from 1.0.0 to 1.0.1 (#390)
Bumps [procspawn](https://github.com/mitsuhiko/procspawn) from 1.0.0 to 1.0.1.
- [Changelog](https://github.com/mitsuhiko/procspawn/blob/master/CHANGELOG.md)
- [Commits](https://github.com/mitsuhiko/procspawn/compare/1.0.0...1.0.1)

---
updated-dependencies:
- dependency-name: procspawn
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-13 08:15:57 +02:00
dependabot[bot]
fb73c68626 build(deps): bump tempfile from 3.10.1 to 3.11.0 (#387)
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.10.1 to 3.11.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.10.1...v3.11.0)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-13 08:15:46 +02:00
dependabot[bot]
42b0e23695 build(deps): bump clap from 4.5.13 to 4.5.15 (#397)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.13 to 4.5.15.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.13...clap_complete-v4.5.15)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-08-13 08:13:06 +02:00
Karolin Varner
c58f832727 Merge pull request #391 from aparcar/pb
add test cases for util modules
2024-08-12 16:26:01 +02:00
Paul Spooren
7b6a9eebc1 ci: test full workspace with codecov
Previously only the default members were checked for coverage.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 12:10:47 +02:00
Paul Spooren
4554dc4bb3 ci: drop codecov token
It's not needed to generate results for pull requests.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:44:33 +02:00
Paul Spooren
465c6beaab ci: switch to codecov action v4 branch
Instead of pinning a specific version, use the v4 branch, which stays API
compatible.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:43:26 +02:00
Paul Spooren
1853e0a3c0 feat: add test case and check fd value
Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:37:15 +02:00
Benjamin Lipp
245d4d1a0f feat: add tests for util file.rs
Co-authored-by: Paul Spooren <mail@aparcar.org>
2024-08-12 11:37:15 +02:00
Karolin Varner
d5d15cd9bc Merge Rosenpass API infrastructure
Pull request #388 from rosenpass/dev/karo/api
2024-08-08 22:02:04 +02:00
114 changed files with 8186 additions and 2088 deletions

14
.ci/boot_race/a.toml Normal file
View File

@@ -0,0 +1,14 @@
public_key = "rp-a-public-key"
secret_key = "rp-a-secret-key"
listen = ["127.0.0.1:9999"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "rp-b-public-key"
endpoint = "127.0.0.1:9998"
key_out = "rp-b-key-out.txt"

14
.ci/boot_race/b.toml Normal file
View File

@@ -0,0 +1,14 @@
public_key = "rp-b-public-key"
secret_key = "rp-b-secret-key"
listen = ["127.0.0.1:9998"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "rp-a-public-key"
endpoint = "127.0.0.1:9999"
key_out = "rp-a-key-out.txt"

48
.ci/boot_race/run.sh Normal file
View File

@@ -0,0 +1,48 @@
#!/bin/bash
iterations="$1"
sleep_time="$2"
config_a="$3"
config_b="$4"
PWD="$(pwd)"
EXEC="$PWD/target/release/rosenpass"
i=0
while [ "$i" -ne "$iterations" ]; do
echo "=> Iteration $i"
# flush the PSK files
echo "A" >rp-a-key-out.txt
echo "B" >rp-b-key-out.txt
# start the two instances
echo "Starting instance A"
"$EXEC" exchange-config "$config_a" &
PID_A=$!
sleep "$sleep_time"
echo "Starting instance B"
"$EXEC" exchange-config "$config_b" &
PID_B=$!
# give the key exchange some time to complete
sleep 3
# kill the instances
kill $PID_A
kill $PID_B
# compare the keys
if cmp -s rp-a-key-out.txt rp-b-key-out.txt; then
echo "Keys match"
else
echo "::warning title=Key Exchange Race Condition::The key exchange resulted in different keys. Delay was ${sleep_time}s."
# TODO: set this to 1 when the race condition is fixed
exit 0
fi
# give the instances some time to shut down
sleep 2
i=$((i + 1))
done

View File

@@ -4,3 +4,7 @@ updates:
directory: "/"
schedule:
interval: "daily"
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"

View File

@@ -13,10 +13,10 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Clone rosenpass-website repository
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: rosenpass/rosenpass-website
ref: main

View File

@@ -6,6 +6,11 @@ on:
push:
branches:
- main
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
i686-linux---default:
name: Build i686-linux.default
@@ -14,11 +19,11 @@ jobs:
needs:
- i686-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -30,11 +35,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -47,11 +52,11 @@ jobs:
needs:
- i686-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -62,11 +67,11 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -79,11 +84,11 @@ jobs:
needs:
- x86_64-darwin---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -98,11 +103,11 @@ jobs:
- x86_64-darwin---rp
- x86_64-darwin---rosenpass-oci-image
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -114,11 +119,11 @@ jobs:
- macos-13
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -130,11 +135,11 @@ jobs:
- macos-13
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -147,11 +152,11 @@ jobs:
needs:
- x86_64-darwin---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -162,11 +167,11 @@ jobs:
runs-on:
- macos-13
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -179,11 +184,11 @@ jobs:
needs:
- x86_64-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -196,11 +201,11 @@ jobs:
needs:
- x86_64-linux---proverif-patched
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -212,11 +217,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -231,51 +236,51 @@ jobs:
- x86_64-linux---rosenpass-static-oci-image
- x86_64-linux---rp-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.release-package --print-build-logs
aarch64-linux---release-package:
name: Build aarch64-linux.release-package
runs-on:
- ubuntu-latest
needs:
- aarch64-linux---rosenpass-oci-image
- aarch64-linux---rosenpass
- aarch64-linux---rp
steps:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.aarch64-linux.release-package --print-build-logs
# aarch64-linux---release-package:
# name: Build aarch64-linux.release-package
# runs-on:
# - ubuntu-latest
# needs:
# - aarch64-linux---rosenpass-oci-image
# - aarch64-linux---rosenpass
# - aarch64-linux---rp
# steps:
# - run: |
# DEBIAN_FRONTEND=noninteractive
# sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
# - uses: actions/checkout@v4
# - uses: cachix/install-nix-action@v30
# with:
# nix_path: nixpkgs=channel:nixos-unstable
# extra_nix_config: |
# system = aarch64-linux
# - uses: cachix/cachix-action@v15
# with:
# name: rosenpass
# authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
# - name: Build
# run: nix build .#packages.aarch64-linux.release-package --print-build-logs
x86_64-linux---rosenpass:
name: Build x86_64-linux.rosenpass
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -290,13 +295,13 @@ jobs:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -311,13 +316,13 @@ jobs:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -330,11 +335,11 @@ jobs:
needs:
- x86_64-linux---rosenpass
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -350,13 +355,13 @@ jobs:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -368,11 +373,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -384,11 +389,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -401,11 +406,11 @@ jobs:
needs:
- x86_64-linux---rosenpass-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -417,11 +422,11 @@ jobs:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -432,11 +437,11 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -447,11 +452,11 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.ref == 'refs/heads/main' }}
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -460,7 +465,7 @@ jobs:
- name: Build
run: nix build .#packages.x86_64-linux.whitepaper --print-build-logs
- name: Deploy PDF artifacts
uses: peaceiris/actions-gh-pages@v3
uses: peaceiris/actions-gh-pages@v4
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: result/

View File

@@ -4,6 +4,10 @@ on:
push:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
checks: write
contents: read
@@ -12,8 +16,8 @@ jobs:
prettier:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actionsx/prettier@v2
- uses: actions/checkout@v4
- uses: actionsx/prettier@v3
with:
args: --check .
@@ -21,7 +25,7 @@ jobs:
name: Shellcheck
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Run ShellCheck
uses: ludeeus/action-shellcheck@master
@@ -29,15 +33,15 @@ jobs:
name: Rust Format
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Run Rust Formatting Script
run: bash format_rust_code.sh --mode check
cargo-bench:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -57,16 +61,14 @@ jobs:
steps:
- name: Install mandoc
run: sudo apt-get install -y mandoc
- uses: actions/checkout@v3
- name: Check rosenpass.1
run: doc/check.sh doc/rosenpass.1
- uses: actions/checkout@v4
- name: Check rp.1
run: doc/check.sh doc/rp.1
cargo-audit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: actions-rs/audit-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
@@ -74,8 +76,8 @@ jobs:
cargo-clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -93,8 +95,8 @@ jobs:
cargo-doc:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -117,8 +119,8 @@ jobs:
# - ubuntu is x86-64
# - macos-13 is also x86-64 architecture
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -136,8 +138,8 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -146,10 +148,10 @@ jobs:
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- uses: cachix/install-nix-action@v21
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
@@ -158,8 +160,8 @@ jobs:
cargo-fuzz:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/cache@v3
- uses: actions/checkout@v4
- uses: actions/cache@v4
with:
path: |
~/.cargo/bin/
@@ -191,17 +193,20 @@ jobs:
codecov:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- run: rustup component add llvm-tools-preview
- run: |
cargo install cargo-llvm-cov || true
cargo llvm-cov --lcov --output-path coverage.lcov
cargo llvm-cov \
--workspace\
--all-features \
--lcov \
--output-path coverage.lcov
# If using tarapulin
#- run: cargo install cargo-tarpaulin
#- run: cargo tarpaulin --out Xml
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v4.0.1
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage.lcov
verbose: true

View File

@@ -1,9 +1,13 @@
name: QC
name: Regressions
on:
pull_request:
push:
branches: [main]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
checks: write
contents: read
@@ -12,10 +16,22 @@ jobs:
multi-peer:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- run: cargo build --bin rosenpass --release
- run: python misc/generate_configs.py
- run: chmod +x .ci/run-regression.sh
- run: .ci/run-regression.sh 100 20
- run: |
[ $(ls -1 output/ate/out | wc -l) -eq 100 ]
boot-race:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: cargo build --bin rosenpass --release
- run: chmod +x .ci/boot_race/run.sh
- run: cargo run --release --bin rosenpass gen-keys .ci/boot_race/a.toml
- run: cargo run --release --bin rosenpass gen-keys .ci/boot_race/b.toml
- run: .ci/boot_race/run.sh 5 2 .ci/boot_race/a.toml .ci/boot_race/b.toml
- run: .ci/boot_race/run.sh 5 1 .ci/boot_race/a.toml .ci/boot_race/b.toml
- run: .ci/boot_race/run.sh 5 0 .ci/boot_race/a.toml .ci/boot_race/b.toml

View File

@@ -11,18 +11,18 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
@@ -32,18 +32,18 @@ jobs:
runs-on:
- macos-13
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}
@@ -53,18 +53,18 @@ jobs:
runs-on:
- ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v30
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
- uses: cachix/cachix-action@v15
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build release
run: nix build .#release-package --print-build-logs
- name: Release
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@v2
with:
draft: ${{ contains(github.ref_name, 'rc') }}
prerelease: ${{ contains(github.ref_name, 'alpha') || contains(github.ref_name, 'beta') }}

View File

@@ -1,4 +1,5 @@
.direnv/
flake.lock
papers/whitepaper.md
target/
src/usage.md
target/

View File

@@ -8,7 +8,7 @@ If any other issue occurs
1. Make sure you locally checked out the head of the main branch
- `git stash --include-untracked && git checkout main && git pull`
2. Make sure all tests pass
- `cargo test`
- `cargo test --workspace --all-features`
3. Make sure the current version in `rosenpass/Cargo.toml` matches that in the [last release on GitHub](https://github.com/rosenpass/rosenpass/releases)
- Only normal releases count; release candidates and draft releases can be ignored
4. Pick the kind of release that you want to make (`major`, `minor`, `patch`, `rc`, ...)

1039
Cargo.lock generated

File diff suppressed because it is too large

View File

@@ -32,55 +32,61 @@ rosenpass-secret-memory = { path = "secret-memory" }
rosenpass-oqs = { path = "oqs" }
rosenpass-wireguard-broker = { path = "wireguard-broker" }
doc-comment = "0.3.3"
base64ct = {version = "1.6.0", default-features=false}
base64ct = { version = "1.6.0", default-features = false }
zeroize = "1.8.1"
memoffset = "0.9.1"
thiserror = "1.0.63"
thiserror = "1.0.69"
paste = "1.0.15"
env_logger = "0.10.2"
toml = "0.7.8"
static_assertions = "1.1.0"
allocator-api2 = "0.2.14"
memsec = { git="https://github.com/rosenpass/memsec.git" ,rev="aceb9baee8aec6844125bd6612f92e9a281373df", features = [ "alloc_ext", ] }
memsec = { git = "https://github.com/rosenpass/memsec.git", rev = "aceb9baee8aec6844125bd6612f92e9a281373df", features = [
"alloc_ext",
] }
rand = "0.8.5"
typenum = "1.17.0"
log = { version = "0.4.22" }
clap = { version = "4.5.13", features = ["derive"] }
serde = { version = "1.0.204", features = ["derive"] }
arbitrary = { version = "1.3.2", features = ["derive"] }
anyhow = { version = "1.0.86", features = ["backtrace", "std"] }
mio = { version = "1.0.1", features = ["net", "os-poll"] }
clap = { version = "4.5.22", features = ["derive"] }
clap_mangen = "0.2.24"
clap_complete = "4.5.38"
serde = { version = "1.0.215", features = ["derive"] }
arbitrary = { version = "1.4.1", features = ["derive"] }
anyhow = { version = "1.0.94", features = ["backtrace", "std"] }
mio = { version = "1.0.3", features = ["net", "os-poll"] }
oqs-sys = { version = "0.9.1", default-features = false, features = [
'classic_mceliece',
'kyber',
'classic_mceliece',
'kyber',
] }
blake2 = "0.10.6"
chacha20poly1305 = { version = "0.10.1", default-features = false, features = [
"std",
"heapless",
"std",
"heapless",
] }
zerocopy = { version = "0.7.35", features = ["derive"] }
home = "0.5.9"
derive_builder = "0.20.0"
tokio = { version = "1.39", features = ["macros", "rt-multi-thread"] }
postcard= {version = "1.0.8", features = ["alloc"]}
derive_builder = "0.20.1"
tokio = { version = "1.42", features = ["macros", "rt-multi-thread"] }
postcard = { version = "1.1.1", features = ["alloc"] }
libcrux = { version = "0.0.2-pre.2" }
hex-literal = { version = "0.4.1" }
hex = { version = "0.4.3" }
heck = { version = "0.5.0" }
heck = { version = "0.5.0" }
libc = { version = "0.2" }
uds = { git = "https://github.com/rosenpass/uds" }
#Dev dependencies
serial_test = "3.1.1"
serial_test = "3.2.0"
tempfile = "3"
stacker = "0.1.15"
stacker = "0.1.17"
libfuzzer-sys = "0.4"
test_bin = "0.4.0"
criterion = "0.4.0"
allocator-api2-tests = "0.2.15"
procspawn = {version = "1.0.0", features= ["test-support"]}
procspawn = { version = "1.0.1", features = ["test-support"] }
#Broker dependencies (might need cleanup or changes)
wireguard-uapi = { version = "3.0.0", features = ["xplatform"] }
command-fds = "0.2.3"
rustix = { version = "0.38.27", features = ["net", "fs"] }
rustix = { version = "0.38.41", features = ["net", "fs"] }

View File

@@ -23,4 +23,4 @@ static_assertions = { workspace = true }
zeroize = { workspace = true }
chacha20poly1305 = { workspace = true }
blake2 = { workspace = true }
libcrux = { workspace = true, optional = true }
libcrux = { workspace = true, optional = true }

View File

@@ -1,7 +1,15 @@
//! Constant-time comparison
use core::ptr;
/// Little-endian memcmp version of quininer/memsec
/// https://github.com/quininer/memsec/blob/bbc647967ff6d20d6dccf1c85f5d9037fcadd3b0/src/lib.rs#L30
///
/// # Panic & Safety
///
/// Both input arrays must be at least of the indicated length.
///
/// See [std::ptr::read_volatile] on safety.
#[inline(never)]
pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
let mut res = 0;
@@ -13,6 +21,16 @@ pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
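// At this point `res` holds the byte difference (range -255..=255) at the highest
// index where the inputs differ (the most significant byte in little-endian order),
// or 0 if they are equal; the branch-free expression below maps that to -1, 0 or 1
// without data-dependent control flow.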
((res - 1) >> 8) + (res >> 8) + 1
}
#[test]
pub fn memcmp_le_test() {
// use rosenpass_constant_time::memcmp_le;
let a = [0, 1, 0, 0];
let b = [0, 0, 0, 1];
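// Read as little-endian unsigned integers: a = 0x0000_0100 = 256 and
// b = 0x0100_0000 = 16_777_216, hence the -1 / 0 / 1 results asserted below.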
assert_eq!(-1, unsafe { memcmp_le(a.as_ptr(), b.as_ptr(), 4) });
assert_eq!(0, unsafe { memcmp_le(a.as_ptr(), a.as_ptr(), 4) });
assert_eq!(1, unsafe { memcmp_le(b.as_ptr(), a.as_ptr(), 4) });
}
/// compares two slices of memory content and returns an integer indicating the relationship between
/// the slices
///
@@ -32,6 +50,28 @@ pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
/// ## Tests
/// For discussion on how to ensure the constant-time execution of this function, see
/// <https://github.com/rosenpass/rosenpass/issues/232>
///
/// # Examples
///
/// ```rust
/// use rosenpass_constant_time::compare;
/// let a = [0, 1, 0, 0];
/// let b = [0, 0, 0, 1];
/// assert_eq!(-1, compare(&a, &b));
/// assert_eq!(0, compare(&a, &a));
/// assert_eq!(1, compare(&b, &a));
/// ```
///
/// # Panic
///
/// This function will panic if the input arrays are of different lengths.
///
/// ```should_panic
/// use rosenpass_constant_time::compare;
/// let a = [0, 1, 0];
/// let b = [0, 0, 0, 1];
/// compare(&a, &b);
/// ```
#[inline]
pub fn compare(a: &[u8], b: &[u8]) -> i32 {
assert!(a.len() == b.len());

View File

@@ -1,3 +1,5 @@
//! Incrementing numbers
use core::hint::black_box;
/// Interpret the given slice as a little-endian unsigned integer

View File

@@ -1,3 +1,5 @@
#![warn(missing_docs)]
#![warn(clippy::missing_docs_in_private_items)]
//! constant-time implementations of some primitives
//!
//! Rosenpass internal library providing basic constant-time operations.

View File

@@ -1,3 +1,5 @@
//! memcmp
/// compares two slices of memory content and returns whether they are equal
///
/// ## Leaks
@@ -7,6 +9,18 @@
///
/// The execution time of the function grows approx. linear with the length of the input. This is
/// considered safe.
///
/// ## Examples
///
/// ```rust
/// use rosenpass_constant_time::memcmp;
/// let a = [0, 0, 0, 0];
/// let b = [0, 0, 0, 1];
/// let c = [0, 0, 0];
/// assert!(memcmp(&a, &a));
/// assert!(!memcmp(&a, &b));
/// assert!(!memcmp(&a, &c));
/// ```
#[inline]
pub fn memcmp(a: &[u8], b: &[u8]) -> bool {
a.len() == b.len() && unsafe { memsec::memeq(a.as_ptr(), b.as_ptr(), a.len()) }

View File

@@ -1,3 +1,5 @@
//! xor
use core::hint::black_box;
use rosenpass_to::{with_destination, To};

View File

@@ -1,114 +0,0 @@
.Dd $Mdocdate$
.Dt ROSENPASS 1
.Os
.Sh NAME
.Nm rosenpass
.Nd builds post-quantum-secure VPNs
.Sh SYNOPSIS
.Nm
.Op COMMAND
.Op Ar OPTIONS ...
.Op Ar ARGS ...
.Sh DESCRIPTION
.Nm
performs cryptographic key exchanges that are secure against quantum-computers
and then outputs the keys.
These keys can then be passed to various services, such as wireguard or other
vpn services, as pre-shared-keys to achieve security against attackers with
quantum computers.
.Pp
This is a research project and quantum computers are not thought to become
practical in fewer than ten years.
If you are not specifically tasked with developing post-quantum secure systems,
you probably do not need this tool.
.Ss COMMANDS
.Bl -tag -width Ds
.It Ar gen-keys --secret-key <file-path> --public-key <file-path>
Generate a keypair to use in the exchange command later.
Send the public-key file to your communication partner and keep the private-key
file secret!
.It Ar exchange private-key <file-path> public-key <file-path> [ OPTIONS ] PEERS
Start a process to exchange keys with the specified peers.
You should specify at least one peer.
.Pp
Its
.Ar OPTIONS
are as follows:
.Bl -tag -width Ds
.It Ar listen <ip>[:<port>]
Instructs
.Nm
to listen on the specified interface and port.
By default,
.Nm
will listen on all interfaces and select a random port.
.It Ar verbose
Extra logging.
.El
.El
.Ss PEER
Each
.Ar PEER
is defined as follows:
.Qq peer public-key <file-path> [endpoint <ip>[:<port>]] [preshared-key <file-path>] [outfile <file-path>] [wireguard <dev> <peer> <extra_params>]
.Pp
Providing a
.Ar PEER
instructs
.Nm
to exchange keys with the given peer and write the resulting PSK into the given
output file.
You must either specify the outfile or wireguard output option.
.Pp
The parameters of
.Ar PEER
are as follows:
.Bl -tag -width Ds
.It Ar endpoint <ip>[:<port>]
Specifies the address where the peer can be reached.
This will be automatically updated after the first successful key exchange with
the peer.
If this is unspecified, the peer must initiate the connection.
.It Ar preshared-key <file-path>
You may specify a pre-shared key which will be mixed into the final secret.
.It Ar outfile <file-path>
You may specify a file to write the exchanged keys to.
If this option is specified,
.Nm
will write a notification to standard out every time the key is updated.
.It Ar wireguard <dev> <peer> <extra_params>
This allows you to directly specify a wireguard peer to deploy the
pre-shared-key to.
You may specify extra parameters you would pass to
.Qq wg set
besides the preshared-key parameter which is used by
.Nm .
This makes it possible to add peers entirely from
.Nm .
.El
.Sh EXIT STATUS
.Ex -std
.Sh SEE ALSO
.Xr rp 1 ,
.Xr wg 1
.Rs
.%A Karolin Varner
.%A Benjamin Lipp
.%A Wanja Zaeske
.%A Lisa Schmidt
.%D 2023
.%T Rosenpass
.%U https://rosenpass.eu/whitepaper.pdf
.Re
.Sh STANDARDS
This tool is the reference implementation of the Rosenpass protocol, as
specified within the whitepaper referenced above.
.Sh AUTHORS
Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske,
Marei Peischl, Stephan Ajuvo, and Lisa Schmidt.
.Pp
This manual page was written by
.An Clara Engler
.Sh BUGS
The bugs are tracked at
.Lk https://github.com/rosenpass/rosenpass/issues .

49
flake.lock generated
View File

@@ -2,15 +2,17 @@
"nodes": {
"fenix": {
"inputs": {
"nixpkgs": ["nixpkgs"],
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1712298178,
"narHash": "sha256-590fpCPXYAkaAeBz/V91GX4/KGzPObdYtqsTWzT6AhI=",
"lastModified": 1728282832,
"narHash": "sha256-I7AbcwGggf+CHqpyd/9PiAjpIBGTGx5woYHqtwxaV7I=",
"owner": "nix-community",
"repo": "fenix",
"rev": "569b5b5781395da08e7064e825953c548c26af76",
"rev": "1ec71be1f4b8f3105c5d38da339cb061fefc43f4",
"type": "github"
},
"original": {
@@ -24,11 +26,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"lastModified": 1726560853,
"narHash": "sha256-X6rJYSESBVr3hBoH0WbKE5KvhPU5bloyZ2L4K60/fPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"rev": "c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a",
"type": "github"
},
"original": {
@@ -37,36 +39,18 @@
"type": "github"
}
},
"naersk": {
"inputs": {
"nixpkgs": ["nixpkgs"]
},
"locked": {
"lastModified": 1698420672,
"narHash": "sha256-/TdeHMPRjjdJub7p7+w55vyABrsJlt5QkznPYy55vKA=",
"owner": "nix-community",
"repo": "naersk",
"rev": "aeb58d5e8faead8980a807c840232697982d47b9",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "naersk",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1712168706,
"narHash": "sha256-XP24tOobf6GGElMd0ux90FEBalUtw6NkBSVh/RlA6ik=",
"lastModified": 1728193676,
"narHash": "sha256-PbDWAIjKJdlVg+qQRhzdSor04bAPApDqIv2DofTyynk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "1487bdea619e4a7a53a4590c475deabb5a9d1bfb",
"rev": "ecbc1ca8ffd6aea8372ad16be9ebbb39889e55b6",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
@@ -75,18 +59,17 @@
"inputs": {
"fenix": "fenix",
"flake-utils": "flake-utils",
"naersk": "naersk",
"nixpkgs": "nixpkgs"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1712156296,
"narHash": "sha256-St7ZQrkrr5lmQX9wC1ZJAFxL8W7alswnyZk9d1se3Us=",
"lastModified": 1728249780,
"narHash": "sha256-J269DvCI5dzBmPrXhAAtj566qt0b22TJtF3TIK+tMsI=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "8e581ac348e223488622f4d3003cb2bd412bf27e",
"rev": "2b750da1a1a2c1d2c70896108d7096089842d877",
"type": "github"
},
"original": {

413
flake.nix
View File

@@ -1,12 +1,8 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
flake-utils.url = "github:numtide/flake-utils";
# for quicker rust builds
naersk.url = "github:nix-community/naersk";
naersk.inputs.nixpkgs.follows = "nixpkgs";
# for rust nightly with llvm-tools-preview
fenix.url = "github:nix-community/fenix";
fenix.inputs.nixpkgs.follows = "nixpkgs";
@@ -15,6 +11,15 @@
outputs = { self, nixpkgs, flake-utils, ... }@inputs:
nixpkgs.lib.foldl (a: b: nixpkgs.lib.recursiveUpdate a b) { } [
#
### Export the overlay.nix from this flake ###
#
{
overlays.default = import ./overlay.nix;
}
#
### Actual Rosenpass Package and Docker Container Images ###
#
@@ -30,310 +35,39 @@
]
(system:
let
scoped = (scope: scope.result);
lib = nixpkgs.lib;
# normal nixpkgs
pkgs = import nixpkgs {
inherit system;
};
# parsed Cargo.toml
cargoToml = builtins.fromTOML (builtins.readFile ./rosenpass/Cargo.toml);
# source files relevant for rust
src = scoped rec {
# File suffices to include
extensions = [
"lock"
"rs"
"toml"
];
# Files to explicitly include
files = [
"to/README.md"
];
src = ./.;
filter = (path: type: scoped rec {
inherit (lib) any id removePrefix hasSuffix;
anyof = (any id);
basename = baseNameOf (toString path);
relative = removePrefix (toString src + "/") (toString path);
result = anyof [
(type == "directory")
(any (ext: hasSuffix ".${ext}" basename) extensions)
(any (file: file == relative) files)
];
});
result = pkgs.lib.sources.cleanSourceWith { inherit src filter; };
};
# a function to generate a nix derivation for rosenpass against any
# given set of nixpkgs
rosenpassDerivation = p:
let
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
# the rust target of `p`
target = p.rust.toRustTargetSpec p.targetPlatform;
# convert a string to shout case
shout = string: builtins.replaceStrings [ "-" ] [ "_" ] (pkgs.lib.toUpper string);
# suitable Rust toolchain
toolchain = with inputs.fenix.packages.${system}; combine [
stable.cargo
stable.rustc
targets.${target}.stable.rust-std
];
# naersk with a custom toolchain
naersk = pkgs.callPackage inputs.naersk {
cargo = toolchain;
rustc = toolchain;
};
# used to trick the build.rs into believing that CMake was run **again**
fakecmake = pkgs.writeScriptBin "cmake" ''
#! ${pkgs.stdenv.shell} -e
true
'';
in
naersk.buildPackage
{
# metadata and source
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = x: x ++ [ "-p" "rosenpass" ];
cargoTestOptions = x: x ++ [ "-p" "rosenpass" ];
doCheck = true;
nativeBuildInputs = with pkgs; [
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash ];
override = x: {
preBuild =
# nix defaults to building for aarch64 _without_ the armv8-a crypto
# extensions, but liboqs depens on these
(lib.optionalString (system == "aarch64-linux") ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
''
);
# fortify is only compatible with dynamic linking
hardeningDisable = lib.optional isStatic "fortify";
};
overrideMain = x: {
# CMake detects that it was served a _foreign_ target dir, and CMake
# would be executed again upon the second build step of naersk.
# By adding our specially optimized CMake version, we reduce the cost
# of recompilation by 99 % while avoiding any CMake errors.
nativeBuildInputs = [ (lib.hiPrio fakecmake) ] ++ x.nativeBuildInputs;
# make sure that libc is linked, under musl this is not the case per
# default
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
};
# We want to build for a specific target...
CARGO_BUILD_TARGET = target;
# ... which might require a non-default linker:
"CARGO_TARGET_${shout target}_LINKER" =
let
inherit (p.stdenv) cc;
in
"${cc}/bin/${cc.targetPrefix}cc";
meta = with pkgs.lib;
{
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
} // (lib.mkIf isStatic {
# otherwise pkg-config tries to link non-existent dynamic libs
# documented here: https://docs.rs/pkg-config/latest/pkg_config/
PKG_CONFIG_ALL_STATIC = true;
# tell rust to build everything statically linked
CARGO_BUILD_RUSTFLAGS = "-C target-feature=+crt-static";
});
# a function to generate a nix derivation for the rp helper against any
# given set of nixpkgs
rpDerivation = p:
let
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
# the rust target of `p`
target = p.rust.toRustTargetSpec p.targetPlatform;
# convert a string to shout case
shout = string: builtins.replaceStrings [ "-" ] [ "_" ] (pkgs.lib.toUpper string);
# suitable Rust toolchain
toolchain = with inputs.fenix.packages.${system}; combine [
stable.cargo
stable.rustc
targets.${target}.stable.rust-std
];
# naersk with a custom toolchain
naersk = pkgs.callPackage inputs.naersk {
cargo = toolchain;
rustc = toolchain;
};
# used to trick the build.rs into believing that CMake was run **again**
fakecmake = pkgs.writeScriptBin "cmake" ''
#! ${pkgs.stdenv.shell} -e
true
'';
in
naersk.buildPackage
{
# metadata and source
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = x: x ++ [ "-p" "rp" ];
cargoTestOptions = x: x ++ [ "-p" "rp" ];
doCheck = true;
nativeBuildInputs = with pkgs; [
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash ];
override = x: {
preBuild =
# nix defaults to building for aarch64 _without_ the armv8-a crypto
# extensions, but liboqs depends on these
(lib.optionalString (system == "aarch64-linux") ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
''
);
# fortify is only compatible with dynamic linking
hardeningDisable = lib.optional isStatic "fortify";
};
overrideMain = x: {
# CMake detects that it was served a _foreign_ target dir, and CMake
# would be executed again upon the second build step of naersk.
# By adding our specially optimized CMake version, we reduce the cost
# of recompilation by 99 % while avoiding any CMake errors.
nativeBuildInputs = [ (lib.hiPrio fakecmake) ] ++ x.nativeBuildInputs;
# make sure that libc is linked, under musl this is not the case per
# default
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
};
# We want to build for a specific target...
CARGO_BUILD_TARGET = target;
# ... which might require a non-default linker:
"CARGO_TARGET_${shout target}_LINKER" =
let
inherit (p.stdenv) cc;
in
"${cc}/bin/${cc.targetPrefix}cc";
meta = with pkgs.lib;
{
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
} // (lib.mkIf isStatic {
# otherwise pkg-config tries to link non-existent dynamic libs
# documented here: https://docs.rs/pkg-config/latest/pkg_config/
PKG_CONFIG_ALL_STATIC = true;
# tell rust to build everything statically linked
CARGO_BUILD_RUSTFLAGS = "-C target-feature=+crt-static";
});
# a function to generate a docker image based of rosenpass
rosenpassOCI = name: pkgs.dockerTools.buildImage rec {
inherit name;
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ self.packages.${system}.${name} ];
pathsToLink = [ "/bin" ];
};
config.Cmd = [ "/bin/rosenpass" ];
# apply our own overlay, overriding/inserting our packages as defined in ./pkgs
overlays = [ self.overlays.default ];
};
in
rec {
packages = rec {
default = rosenpass;
rosenpass = rosenpassDerivation pkgs;
rp = rpDerivation pkgs;
rosenpass-oci-image = rosenpassOCI "rosenpass";
{
packages = {
default = pkgs.rosenpass;
rosenpass = pkgs.rosenpass;
rosenpass-oci-image = pkgs.rosenpass-oci-image;
rp = pkgs.rp;
# derivation for the release
release-package =
let
version = cargoToml.package.version;
package =
if pkgs.hostPlatform.isLinux then
packages.rosenpass-static
else packages.rosenpass;
rp =
if pkgs.hostPlatform.isLinux then
packages.rp-static
else packages.rp;
oci-image =
if pkgs.hostPlatform.isLinux then
packages.rosenpass-static-oci-image
else packages.rosenpass-oci-image;
in
pkgs.runCommandNoCC "lace-result" { }
''
mkdir {bin,$out}
tar -cvf $out/rosenpass-${system}-${version}.tar \
-C ${package} bin/rosenpass \
-C ${rp} bin/rp
cp ${oci-image} \
$out/rosenpass-oci-image-${system}-${version}.tar.gz
'';
} // (if pkgs.stdenv.isLinux then rec {
rosenpass-static = rosenpassDerivation pkgs.pkgsStatic;
rp-static = rpDerivation pkgs.pkgsStatic;
rosenpass-static-oci-image = rosenpassOCI "rosenpass-static";
} else { });
release-package = pkgs.release-package;
# for good measure, we also offer to cross compile to Linux on Arm
aarch64-linux-rosenpass-static =
pkgs.pkgsCross.aarch64-multiplatform.pkgsStatic.rosenpass;
aarch64-linux-rp-static = pkgs.pkgsCross.aarch64-multiplatform.pkgsStatic.rp;
}
//
# We only offer static builds for linux, as this is not supported on OS X
(nixpkgs.lib.attrsets.optionalAttrs pkgs.stdenv.isLinux {
rosenpass-static = pkgs.pkgsStatic.rosenpass;
rosenpass-static-oci-image = pkgs.pkgsStatic.rosenpass-oci-image;
rp-static = pkgs.pkgsStatic.rp;
});
}
))
#
### Linux specifics ###
#
@@ -341,92 +75,53 @@
let
pkgs = import nixpkgs {
inherit system;
# apply our own overlay, overriding/inserting our packages as defined in ./pkgs
overlays = [ self.overlays.default ];
};
packages = self.packages.${system};
in
{
#
### Whitepaper ###
#
packages.whitepaper =
let
tlsetup = (pkgs.texlive.combine {
inherit (pkgs.texlive) scheme-basic acmart amsfonts ccicons
csquotes csvsimple doclicense fancyvrb fontspec gobble
koma-script ifmtarg latexmk lm markdown mathtools minted noto
nunito pgf soul unicode-math lualatex-math paralist
gitinfo2 eso-pic biblatex biblatex-trad biblatex-software
xkeyval xurl xifthen biber;
});
in
pkgs.stdenvNoCC.mkDerivation {
name = "whitepaper";
src = ./papers;
nativeBuildInputs = with pkgs; [
ncurses # tput
python3Packages.pygments
tlsetup # custom tex live scheme
which
];
buildPhase = ''
export HOME=$(mktemp -d)
latexmk -r tex/CI.rc
'';
installPhase = ''
mkdir -p $out
mv *.pdf readme.md $out/
'';
};
#
### Reading materials ###
#
packages.whitepaper = pkgs.whitepaper;
#
### Proof and Proof Tools ###
#
packages.proverif-patched = pkgs.proverif.overrideAttrs (old: {
postInstall = ''
install -D -t $out/lib cryptoverif.pvl
'';
});
packages.proof-proverif = pkgs.stdenv.mkDerivation {
name = "rosenpass-proverif-proof";
version = "unstable";
src = pkgs.lib.sources.sourceByRegex ./. [
"analyze.sh"
"marzipan(/marzipan.awk)?"
"analysis(/.*)?"
];
nativeBuildInputs = [ pkgs.proverif pkgs.graphviz ];
CRYPTOVERIF_LIB = packages.proverif-patched + "/lib/cryptoverif.pvl";
installPhase = ''
mkdir -p $out
bash analyze.sh -color -html $out
'';
};
packages.proverif-patched = pkgs.proverif-patched;
packages.proof-proverif = pkgs.proof-proverif;
#
### Devshells ###
#
devShells.default = pkgs.mkShell {
inherit (packages.proof-proverif) CRYPTOVERIF_LIB;
inputsFrom = [ packages.default ];
inherit (pkgs.proof-proverif) CRYPTOVERIF_LIB;
inputsFrom = [ pkgs.rosenpass ];
nativeBuildInputs = with pkgs; [
inputs.fenix.packages.${system}.complete.toolchain
cmake # override the fakecmake from the main step above
cargo-release
clippy
rustfmt
nodePackages.prettier
nushell # for the .ci/gen-workflow-files.nu script
packages.proverif-patched
proverif-patched
];
};
devShells.coverage = pkgs.mkShell {
inputsFrom = [ packages.default ];
nativeBuildInputs = with pkgs; [ inputs.fenix.packages.${system}.complete.toolchain cargo-llvm-cov ];
inputsFrom = [ pkgs.rosenpass ];
nativeBuildInputs = [
inputs.fenix.packages.${system}.complete.toolchain
pkgs.cargo-llvm-cov
];
};
checks = {
systemd-rosenpass = pkgs.testers.runNixOSTest ./tests/systemd/rosenpass.nix;
systemd-rp = pkgs.testers.runNixOSTest ./tests/systemd/rp.nix;
cargo-fmt = pkgs.runCommand "check-cargo-fmt"
{ inherit (self.devShells.${system}.default) nativeBuildInputs buildInputs; } ''
cargo fmt --manifest-path=${./.}/Cargo.toml --check --all && touch $out

View File

@@ -0,0 +1,13 @@
secret_key = "peer_a.rp.sk"
public_key = "peer_a.rp.pk"
listen = ["[::1]:46127"]
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "peer_b.rp.pk"
device = "rpPskBrkTestA"

View File

@@ -0,0 +1,14 @@
secret_key = "peer_b.rp.sk"
public_key = "peer_b.rp.pk"
listen = []
verbosity = "Verbose"
[api]
listen_path = []
listen_fd = []
stream_fd = []
[[peers]]
public_key = "peer_a.rp.pk"
endpoint = "[::1]:46127"
device = "rpPskBrkTestB"

View File

@@ -0,0 +1,215 @@
#! /bin/bash
set -e -o pipefail
enquote() {
while (( "$#" > 1)); do
printf "%q " "$1"
shift
done
if (("$#" > 0)); then
printf "%q" "$1"
fi
}
CLEANUP_HOOKS=()
hook_cleanup() {
local hook
set +e +o pipefail
for hook in "${CLEANUP_HOOKS[@]}"; do
eval "${hook}"
done
}
cleanup() {
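# Hooks are prepended, so hook_cleanup runs them in reverse (LIFO) order of registration.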
CLEANUP_HOOKS=("$(enquote exc_with_ctx cleanup "$@")" "${CLEANUP_HOOKS[@]}")
}
cleanup_eval() {
cleanup eval "$*"
}
stderr() {
echo >&2 "$@"
}
log() {
local level; level="$1"; shift || fatal "USAGE: log LVL MESSAGE.."
stderr "[${level}]" "$@"
}
info() {
log "INFO" "$@"
}
debug() {
log "DEBUG" "$@"
}
fatal() {
log "FATAL" "$@"
exit 1
}
assert() {
local msg; msg="$1"; shift || fatal "USAGE: assert MESSAGE COMMAND.."
"$@" || fatal "${msg}"
}
abs_dir() {
local dir; dir="$1"; shift || fatal "USAGE: abs_dir DIR"
(
cd "${dir}"
pwd -P
)
}
exc_with_ctx() {
local ctx; ctx="$1"; shift || fatal "USAGE: exc_with_ctx CONTEXT COMMAND.."
if [[ -z "${ctx}" ]]; then
info '$' "$@"
else
info "${ctx}\$" "$@"
fi
"$@"
}
exc() {
exc_with_ctx "" "$@"
}
exc_eval() {
exc eval "$*"
}
exc_eval_with_ctx() {
local ctx; ctx="$1"; shift || fatal "USAGE: exc_eval_with_ctx CONTEXT EVAL_COMMAND.."
exc_with_ctx "eval:${ctx}" "$*"
}
exc_as_user() {
exc sudo -u "${SUDO_USER}" "$@"
}
exc_eval_as_user() {
exc_as_user bash -c "$*"
}
fork_eval_as_user() {
exc sudo -u "${SUDO_USER}" bash -c "$*" &
local pid; pid="$!"
cleanup wait "${pid}"
cleanup pkill -2 -P "${pid}" # Reverse ordering
}
info_success() {
stderr
stderr
if [[ "${SUCCESS}" = 1 ]]; then
stderr " Test was a success!"
else
stderr " !!! TEST WAS A FAILURE!!!"
fi
stderr
}
main() {
assert "Use as root with sudo" [ "$(id -u)" -eq 0 ]
assert "Use as root with sudo" [ -n "${SUDO_UID}" ]
assert "SUDO_UID is 0; refusing to build as root" [ "${SUDO_UID}" -ne 0 ]
cleanup info_success
trap hook_cleanup EXIT
SCRIPT="$0"
CFG_TEMPLATE_DIR="$(abs_dir "$(dirname "${SCRIPT}")")"
REPO="$(abs_dir "${CFG_TEMPLATE_DIR}/../..")"
BINS="${REPO}/target/debug"
# Create temp dir
TMP_DIR="/tmp/rosenpass-psk-broker-test-$(date +%s)-$(uuidgen)"
cleanup rm -rf "${TMP_DIR}"
exc_as_user mkdir -p "${TMP_DIR}"
# Copy config
CFG_DIR="${TMP_DIR}/cfg"
exc_as_user cp -R "${CFG_TEMPLATE_DIR}" "${CFG_DIR}"
exc umask 077
exc cd "${REPO}"
local build_cmd; build_cmd=(cargo build --workspace --color=always --all-features --bins --profile dev)
if test -e "${BINS}/rosenpass-wireguard-broker-privileged" -a -e "${BINS}/rosenpass"; then
info "Found the binaries rosenpass-wireguard-broker-privileged and rosenpass." \
"Run following commands as a regular user to recompile the binaries with the right options" \
"in case of an error:" '$' "${build_cmd[@]}"
else
exc_as_user "${build_cmd[@]}"
fi
exc sudo setcap CAP_NET_ADMIN=+eip "${BINS}/rosenpass-wireguard-broker-privileged"
exc cd "${CFG_DIR}"
exc_eval_as_user "wg genkey > peer_a.wg.sk"
exc_eval_as_user "wg pubkey < peer_a.wg.sk > peer_a.wg.pk"
exc_eval_as_user "wg genkey > peer_b.wg.sk"
exc_eval_as_user "wg pubkey < peer_b.wg.sk > peer_b.wg.pk"
exc_eval_as_user "wg genpsk > peer_a_invalid.psk"
exc_eval_as_user "wg genpsk > peer_b_invalid.psk"
exc_eval_as_user "echo $(enquote "peer = \"$(cat peer_b.wg.pk)\"") >> peer_a.rp.config"
exc_eval_as_user "echo $(enquote "peer = \"$(cat peer_a.wg.pk)\"") >> peer_b.rp.config"
exc_as_user "${BINS}"/rosenpass gen-keys peer_a.rp.config
exc_as_user "${BINS}"/rosenpass gen-keys peer_b.rp.config
cleanup ip l del dev rpPskBrkTestA
cleanup ip l del dev rpPskBrkTestB
exc ip l add dev rpPskBrkTestA type wireguard
exc ip l add dev rpPskBrkTestB type wireguard
exc wg set rpPskBrkTestA \
listen-port 46125 \
private-key peer_a.wg.sk \
peer "$(cat peer_b.wg.pk)" \
endpoint 'localhost:46126' \
preshared-key peer_a_invalid.psk \
allowed-ips fe80::2/64
exc wg set rpPskBrkTestB \
listen-port 46126 \
private-key peer_b.wg.sk \
peer "$(cat peer_a.wg.pk)" \
endpoint 'localhost:46125' \
preshared-key peer_b_invalid.psk \
allowed-ips fe80::1/64
exc ip l set rpPskBrkTestA up
exc ip l set rpPskBrkTestB up
exc ip a add fe80::1/64 dev rpPskBrkTestA
exc ip a add fe80::2/64 dev rpPskBrkTestB
fork_eval_as_user "\
RUST_LOG='info' \
PATH=$(enquote "${REPO}/target/debug:${PATH}") \
$(enquote "${BINS}/rosenpass") --psk-broker-spawn \
exchange-config peer_a.rp.config"
fork_eval_as_user "\
RUST_LOG='info' \
PATH=$(enquote "${REPO}/target/debug:${PATH}") \
$(enquote "${BINS}/rosenpass-wireguard-broker-socket-handler") \
--listen-path broker.sock"
fork_eval_as_user "\
RUST_LOG='info' \
PATH=$(enquote "$PWD/target/debug:${PATH}") \
$(enquote "${BINS}/rosenpass") --psk-broker-path broker.sock \
exchange-config peer_b.rp.config"
exc_as_user ping -c 2 -w 10 fe80::1%rpPskBrkTestA
exc_as_user ping -c 2 -w 10 fe80::2%rpPskBrkTestB
exc_as_user ping -c 2 -w 10 fe80::2%rpPskBrkTestA
exc_as_user ping -c 2 -w 10 fe80::1%rpPskBrkTestB
SUCCESS=1
}
main "$@"

View File

@@ -14,3 +14,7 @@ rosenpass-cipher-traits = { workspace = true }
rosenpass-util = { workspace = true }
oqs-sys = { workspace = true }
paste = { workspace = true }
[dev-dependencies]
rosenpass-secret-memory = { workspace = true }
rosenpass-constant-time = { workspace = true }

View File

@@ -1,9 +1,42 @@
//! Generic helpers for declaring bindings to liboqs kems
/// Generate bindings to a liboqs-provided KEM
macro_rules! oqs_kem {
($name:ident) => { ::paste::paste!{
#[doc = "Bindings for ::oqs_sys::kem::" [<"OQS_KEM" _ $name:snake>] "_*"]
mod [< $name:snake >] {
use rosenpass_cipher_traits::Kem;
use rosenpass_util::result::Guaranteed;
#[doc = "Bindings for ::oqs_sys::kem::" [<"OQS_KEM" _ $name:snake>] "_*"]
#[doc = ""]
#[doc = "# Examples"]
#[doc = ""]
#[doc = "```rust"]
#[doc = "use std::borrow::{Borrow, BorrowMut};"]
#[doc = "use rosenpass_cipher_traits::Kem;"]
#[doc = "use rosenpass_oqs::" $name:camel " as MyKem;"]
#[doc = "use rosenpass_secret_memory::{Secret, Public};"]
#[doc = ""]
#[doc = "rosenpass_secret_memory::secret_policy_try_use_memfd_secrets();"]
#[doc = ""]
#[doc = "// Recipient generates secret key, transfers pk to sender"]
#[doc = "let mut sk = Secret::<{ MyKem::SK_LEN }>::zero();"]
#[doc = "let mut pk = Public::<{ MyKem::PK_LEN }>::zero();"]
#[doc = "MyKem::keygen(sk.secret_mut(), pk.borrow_mut());"]
#[doc = ""]
#[doc = "// Sender generates ciphertext and local shared key, sends ciphertext to recipient"]
#[doc = "let mut shk_enc = Secret::<{ MyKem::SHK_LEN }>::zero();"]
#[doc = "let mut ct = Public::<{ MyKem::CT_LEN }>::zero();"]
#[doc = "MyKem::encaps(shk_enc.secret_mut(), ct.borrow_mut(), pk.borrow());"]
#[doc = ""]
#[doc = "// Recipient decapsulates ciphertext"]
#[doc = "let mut shk_dec = Secret::<{ MyKem::SHK_LEN }>::zero();"]
#[doc = "MyKem::decaps(shk_dec.secret_mut(), sk.secret(), ct.borrow());"]
#[doc = ""]
#[doc = "// Both parties end up with the same shared key"]
#[doc = "assert!(rosenpass_constant_time::compare(shk_enc.secret_mut(), shk_dec.secret_mut()) == 0);"]
#[doc = "```"]
pub enum [< $name:camel >] {}
/// # Panic & Safety

View File

@@ -1,3 +1,8 @@
#![warn(missing_docs)]
#![warn(clippy::missing_docs_in_private_items)]
//! Bindings for liboqs used in Rosenpass
/// Call into a libOQS function
macro_rules! oqs_call {
($name:path, $($args:expr),*) => {{
use oqs_sys::common::OQS_STATUS::*;

39
overlay.nix Normal file
View File

@@ -0,0 +1,39 @@
final: prev: {
#
### Actual rosenpass software ###
#
rosenpass = final.callPackage ./pkgs/rosenpass.nix { };
rosenpass-oci-image = final.callPackage ./pkgs/rosenpass-oci-image.nix { };
rp = final.callPackage ./pkgs/rosenpass.nix { package = "rp"; };
release-package = final.callPackage ./pkgs/release-package.nix { };
#
### Appendix ###
#
proverif-patched = prev.proverif.overrideAttrs (old: {
postInstall = ''
install -D -t $out/lib cryptoverif.pvl
'';
});
proof-proverif = final.stdenv.mkDerivation {
name = "rosenpass-proverif-proof";
version = "unstable";
src = final.lib.sources.sourceByRegex ./. [
"analyze.sh"
"marzipan(/marzipan.awk)?"
"analysis(/.*)?"
];
nativeBuildInputs = [ final.proverif final.graphviz ];
CRYPTOVERIF_LIB = final.proverif-patched + "/lib/cryptoverif.pvl";
installPhase = ''
mkdir -p $out
bash analyze.sh -color -html $out
'';
};
whitepaper = final.callPackage ./pkgs/whitepaper.nix { };
}

27
pkgs/release-package.nix Normal file
View File

@@ -0,0 +1,27 @@
{ lib, stdenvNoCC, runCommandNoCC, pkgsStatic, rosenpass, rosenpass-oci-image, rp } @ args:
let
version = rosenpass.version;
# select static packages on Linux, default packages otherwise
package =
if stdenvNoCC.hostPlatform.isLinux then
pkgsStatic.rosenpass
else args.rosenpass;
rp =
if stdenvNoCC.hostPlatform.isLinux then
pkgsStatic.rp
else args.rp;
oci-image =
if stdenvNoCC.hostPlatform.isLinux then
pkgsStatic.rosenpass-oci-image
else args.rosenpass-oci-image;
in
runCommandNoCC "lace-result" { } ''
mkdir {bin,$out}
tar -cvf $out/rosenpass-${stdenvNoCC.hostPlatform.system}-${version}.tar \
-C ${package} bin/rosenpass lib/systemd \
-C ${rp} bin/rp
cp ${oci-image} \
$out/rosenpass-oci-image-${stdenvNoCC.hostPlatform.system}-${version}.tar.gz
''

View File

@@ -0,0 +1,11 @@
{ dockerTools, buildEnv, rosenpass }:
dockerTools.buildImage {
name = rosenpass.name + "-oci";
copyToRoot = buildEnv {
name = "image-root";
paths = [ rosenpass ];
pathsToLink = [ "/bin" ];
};
config.Cmd = [ "/bin/rosenpass" ];
}

87
pkgs/rosenpass.nix Normal file
View File

@@ -0,0 +1,87 @@
{ lib, stdenv, rustPlatform, cmake, mandoc, removeReferencesTo, bash, package ? "rosenpass" }:
let
# whether we want to build a statically linked binary
isStatic = stdenv.targetPlatform.isStatic;
scoped = (scope: scope.result);
# source files relevant for rust
src = scoped rec {
# File suffices to include
extensions = [
"lock"
"rs"
"service"
"target"
"toml"
];
# Files to explicitly include
files = [
"to/README.md"
];
src = ../.;
filter = (path: type: scoped rec {
inherit (lib) any id removePrefix hasSuffix;
anyof = (any id);
basename = baseNameOf (toString path);
relative = removePrefix (toString src + "/") (toString path);
result = anyof [
(type == "directory")
(any (ext: hasSuffix ".${ext}" basename) extensions)
(any (file: file == relative) files)
];
});
result = lib.sources.cleanSourceWith { inherit src filter; };
};
# parsed Cargo.toml
cargoToml = builtins.fromTOML (builtins.readFile (src + "/rosenpass/Cargo.toml"));
in
rustPlatform.buildRustPackage {
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = [ "--package" package ];
cargoTestOptions = [ "--package" package ];
doCheck = true;
cargoLock = {
lockFile = src + "/Cargo.lock";
outputHashes = {
"memsec-0.6.3" = "sha256-4ri+IEqLd77cLcul3lZrmpDKj4cwuYJ8oPRAiQNGeLw=";
"uds-0.4.2" = "sha256-qlxr/iJt2AV4WryePIvqm/8/MK/iqtzegztNliR93W8=";
};
};
nativeBuildInputs = [
stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = [ bash ];
hardeningDisable = lib.optional isStatic "fortify";
postInstall = ''
mkdir -p $out/lib/systemd/system
install systemd/rosenpass@.service $out/lib/systemd/system
install systemd/rp@.service $out/lib/systemd/system
install systemd/rosenpass.target $out/lib/systemd/system
'';
meta = {
inherit (cargoToml.package) description homepage;
license = with lib.licenses; [ mit asl20 ];
maintainers = [ lib.maintainers.wucke13 ];
platforms = lib.platforms.all;
};
}

29
pkgs/whitepaper.nix Normal file
View File

@@ -0,0 +1,29 @@
{ stdenvNoCC, texlive, ncurses, python3Packages, which }:
let
customTexLiveSetup = (texlive.combine {
inherit (texlive) acmart amsfonts biber biblatex biblatex-software
biblatex-trad ccicons csquotes csvsimple doclicense eso-pic fancyvrb
fontspec gitinfo2 gobble ifmtarg koma-script latexmk lm lualatex-math
markdown mathtools minted noto nunito paralist pgf scheme-basic soul
unicode-math upquote xifthen xkeyval xurl;
});
in
stdenvNoCC.mkDerivation {
name = "whitepaper";
src = ../papers;
nativeBuildInputs = [
ncurses # tput
python3Packages.pygments
customTexLiveSetup # custom tex live scheme
which
];
buildPhase = ''
export HOME=$(mktemp -d)
latexmk -r tex/CI.rc
'';
installPhase = ''
mkdir -p $out
mv *.pdf readme.md $out/
'';
}

View File

@@ -1,6 +1,6 @@
[package]
name = "rosenpass"
version = "0.2.1"
version = "0.3.0-dev"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
edition = "2021"
license = "MIT OR Apache-2.0"
@@ -22,6 +22,10 @@ required-features = ["experiment_api", "internal_bin_gen_ipc_msg_types"]
name = "api-integration-tests"
required-features = ["experiment_api", "internal_testing"]
[[test]]
name = "api-integration-tests-api-setup"
required-features = ["experiment_api", "internal_testing"]
[[bench]]
name = "handshake"
harness = false
@@ -43,16 +47,21 @@ env_logger = { workspace = true }
serde = { workspace = true }
toml = { workspace = true }
clap = { workspace = true }
clap_complete = { workspace = true }
clap_mangen = { workspace = true }
mio = { workspace = true }
rand = { workspace = true }
zerocopy = { workspace = true }
home = { workspace = true }
derive_builder = {workspace = true}
rosenpass-wireguard-broker = {workspace = true}
derive_builder = { workspace = true }
rosenpass-wireguard-broker = { workspace = true }
zeroize = { workspace = true }
hex-literal = { workspace = true, optional = true }
hex = { workspace = true, optional = true }
heck = { workspace = true, optional = true }
command-fds = { workspace = true, optional = true }
rustix = { workspace = true, optional = true }
uds = { workspace = true, optional = true, features = ["mio_1xx"] }
[build-dependencies]
anyhow = { workspace = true }
@@ -61,14 +70,22 @@ anyhow = { workspace = true }
criterion = { workspace = true }
test_bin = { workspace = true }
stacker = { workspace = true }
serial_test = {workspace = true}
procspawn = {workspace = true}
serial_test = { workspace = true }
procspawn = { workspace = true }
tempfile = { workspace = true }
rustix = { workspace = true }
[features]
enable_broker_api = ["rosenpass-wireguard-broker/enable_broker_api"]
experiment_memfd_secret = []
default = []
experiment_memfd_secret = ["rosenpass-wireguard-broker/experiment_memfd_secret"]
experiment_libcrux = ["rosenpass-ciphers/experiment_libcrux"]
experiment_api = ["hex-literal"]
internal_testing = []
experiment_api = [
"hex-literal",
"uds",
"command-fds",
"rustix",
"rosenpass-util/experiment_file_descriptor_passing",
"rosenpass-wireguard-broker/experiment_api",
]
internal_testing = []
internal_bin_gen_ipc_msg_types = ["hex", "heck"]

View File

@@ -1,52 +0,0 @@
use anyhow::bail;
use anyhow::Result;
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
use std::process::Command;
/// Invokes a troff compiler to compile a manual page
fn render_man(compiler: &str, man: &str) -> Result<String> {
let out = Command::new(compiler).args(["-Tascii", man]).output()?;
if !out.status.success() {
bail!("{} returned an error", compiler);
}
Ok(String::from_utf8(out.stdout)?)
}
/// Generates the manual page
fn generate_man() -> String {
// This function is purposely stupid and redundant
let man = render_man("mandoc", "./doc/rosenpass.1");
if let Ok(man) = man {
return man;
}
let man = render_man("groff", "./doc/rosenpass.1");
if let Ok(man) = man {
return man;
}
"Cannot render manual page. Please visit https://rosenpass.eu/docs/manuals/\n".into()
}
fn man() {
let out_dir = PathBuf::from(env::var("OUT_DIR").unwrap());
let man = generate_man();
let path = out_dir.join("rosenpass.1.ascii");
let mut file = File::create(&path).unwrap();
file.write_all(man.as_bytes()).unwrap();
println!("cargo:rustc-env=ROSENPASS_MAN={}", path.display());
}
fn main() {
// For now, rerun the build script on every time, as the build script
// is not very expensive right now.
println!("cargo:rerun-if-changed=./");
man();
}

View File

@@ -0,0 +1,341 @@
// Note: This is business logic; tested through the integration tests in
// rosenpass/tests/
use std::{borrow::BorrowMut, collections::VecDeque, os::fd::OwnedFd};
use anyhow::Context;
use rosenpass_to::{ops::copy_slice, To};
use rosenpass_util::{
fd::FdIo,
functional::{run, ApplyExt},
io::ReadExt,
mem::DiscardResultExt,
mio::UnixStreamExt,
result::OkExt,
};
use rosenpass_wireguard_broker::brokers::mio_client::MioBrokerClient;
use crate::{
api::{add_listen_socket_response_status, add_psk_broker_response_status},
app_server::AppServer,
protocol::BuildCryptoServer,
};
use super::{supply_keypair_response_status, Server as ApiServer};
/// Stores the state of the API handler.
///
/// This is used in the context [ApiHandlerContext]; [ApiHandlerContext] exposes both
/// the [AppServer] and the API handler state.
///
/// [ApiHandlerContext] is what actually contains the API handler functions.
#[derive(Debug)]
pub struct ApiHandler {
_dummy: (),
}
impl ApiHandler {
/// Construct an [Self]
#[allow(clippy::new_without_default)]
pub fn new() -> Self {
Self { _dummy: () }
}
}
/// The implementation of the API requires both access to its own state [ApiHandler] and to the
/// [AppServer] the API is supposed to operate on.
///
/// This trait provides both; it implements a pattern that allows multiple, **potentially
/// overlapping**, mutable references to be passed to the API handler functions.
///
/// This relatively complex scheme is chosen to appease the borrow checker: We want flexibility
/// with regard to where the [ApiHandler] is stored and we need a mutable reference to
/// [ApiHandler]. We also need a mutable reference to [AppServer]. Achieving this by using the
/// direct method would be impossible because the [ApiHandler] is actually stored somewhere inside
/// [AppServer]. The borrow checker does not allow this.
///
/// What we have instead is in practice a reference to [AppServer] and a function (as part of
/// the trait) that extracts an [ApiHandler] reference from [AppServer], which is allowed by the
/// borrow checker. A benefit of using a trait here is that we could, if desired, also store
/// the [ApiHandler] outside [AppServer]; where it lives is entirely up to the trait implementation.
pub trait ApiHandlerContext {
/// Retrieve the [ApiHandler]
fn api_handler(&self) -> &ApiHandler;
/// Retrieve the [AppServer]
fn app_server(&self) -> &AppServer;
/// Retrieve the [ApiHandler]
fn api_handler_mut(&mut self) -> &mut ApiHandler;
/// Retrieve the [AppServer]
fn app_server_mut(&mut self) -> &mut AppServer;
}
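// An illustrative sketch of the delegation pattern described above: a thin wrapper
// that implements ApiHandlerContext on top of a mutable AppServer reference. It
// assumes, purely for illustration, that AppServer exposes its ApiHandler in a field
// named `api_handler`; the wrapper type and field name are hypothetical, not part of
// this diff.
struct AppServerApiContext<'a> {
server: &'a mut AppServer,
}
impl ApiHandlerContext for AppServerApiContext<'_> {
fn api_handler(&self) -> &ApiHandler {
// hypothetical field; the real crate may store the handler elsewhere
&self.server.api_handler
}
fn app_server(&self) -> &AppServer {
self.server
}
fn api_handler_mut(&mut self) -> &mut ApiHandler {
&mut self.server.api_handler // hypothetical field
}
fn app_server_mut(&mut self) -> &mut AppServer {
self.server
}
}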
/// This is the Error raised by [ApiServer::supply_keypair]; it contains both
/// the underlying error message as well as the status value
/// returned by the API.
///
/// [ApiServer::supply_keypair] generally constructs a [Self] by using one of the
/// utility functions [SupplyKeypairErrorExt].
#[derive(thiserror::Error, Debug)]
#[error("Error in SupplyKeypair")]
struct SupplyKeypairError {
/// The status code communicated via the Rosenpass API
status: u128,
/// The underlying error that caused the Rosenpass API level Error
#[source]
cause: anyhow::Error,
}
trait SupplyKeypairErrorExt<T> {
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// an arbitrary error code
fn e_custom(self, status: u128) -> Result<T, SupplyKeypairError>;
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// the [supply_keypair_response_status::INTERNAL_ERROR] error code
fn einternal(self) -> Result<T, SupplyKeypairError>;
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// the [supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED] error code
fn ealready_supplied(self) -> Result<T, SupplyKeypairError>;
/// Imbue any Error (that can be represented as [anyhow::Error]) with
/// the [supply_keypair_response_status::INVALID_REQUEST] error code
fn einvalid_req(self) -> Result<T, SupplyKeypairError>;
}
impl<T, E: Into<anyhow::Error>> SupplyKeypairErrorExt<T> for Result<T, E> {
fn e_custom(self, status: u128) -> Result<T, SupplyKeypairError> {
self.map_err(|e| SupplyKeypairError {
status,
cause: e.into(),
})
}
fn einternal(self) -> Result<T, SupplyKeypairError> {
self.e_custom(supply_keypair_response_status::INTERNAL_ERROR)
}
fn ealready_supplied(self) -> Result<T, SupplyKeypairError> {
self.e_custom(supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED)
}
fn einvalid_req(self) -> Result<T, SupplyKeypairError> {
self.e_custom(supply_keypair_response_status::INVALID_REQUEST)
}
}
impl<T> ApiServer for T
where
T: ?Sized + ApiHandlerContext,
{
fn ping(
&mut self,
req: &super::PingRequest,
_req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::PingResponse,
) -> anyhow::Result<()> {
let (req, res) = (&req.payload, &mut res.payload);
copy_slice(&req.echo).to(&mut res.echo);
Ok(())
}
fn supply_keypair(
&mut self,
req: &super::SupplyKeypairRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::SupplyKeypairResponse,
) -> anyhow::Result<()> {
let outcome: Result<(), SupplyKeypairError> = run(|| {
// Acquire the file descriptors
let mut sk_io = FdIo(
req_fds
.front()
.context("First file descriptor, secret key, missing.")
.einvalid_req()?,
);
let mut pk_io = FdIo(
req_fds
.get(1)
.context("Second file descriptor, public key, missing.")
.einvalid_req()?,
);
// Actually read the secrets
let mut sk = crate::protocol::SSk::zero();
sk_io.read_exact_til_end(sk.secret_mut()).einvalid_req()?;
let mut pk = crate::protocol::SPk::zero();
pk_io.read_exact_til_end(pk.borrow_mut()).einvalid_req()?;
// Retrieve the construction site
let construction_site = self.app_server_mut().crypto_site.borrow_mut();
// Retrieve the builder
use rosenpass_util::build::ConstructionSite as C;
let maybe_builder = match construction_site {
C::Builder(builder) => Some(builder),
C::Product(_) => None,
C::Void => {
return Err(anyhow::Error::msg("CryptoServer construction side is void"))
.einternal();
}
};
// Retrieve a reference to the keypair
let Some(BuildCryptoServer {
ref mut keypair, ..
}) = maybe_builder
else {
return Err(anyhow::Error::msg("CryptoServer already built")).ealready_supplied();
};
// Supply the keypair to the CryptoServer
keypair
.insert(crate::protocol::Keypair { sk, pk })
.discard_result();
// Actually construct the CryptoServer
construction_site
.erect()
.map_err(|e| anyhow::Error::msg(format!("Error erecting the CryptoServer {e:?}")))
.einternal()?;
Ok(())
});
// Handle errors
use supply_keypair_response_status as status;
let status = match outcome {
Ok(()) => status::OK,
Err(e) => {
let lvl = match e.status {
status::INTERNAL_ERROR => log::Level::Warn,
_ => log::Level::Debug,
};
log::log!(
lvl,
"Error while processing API Request.\n Request: {:?}\n Error: {:?}",
req,
e.cause
);
if e.status == status::INTERNAL_ERROR {
return Err(e.cause);
}
e.status
}
};
res.payload.status = status;
Ok(())
}
fn add_listen_socket(
&mut self,
_req: &super::boilerplate::AddListenSocketRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::boilerplate::AddListenSocketResponse,
) -> anyhow::Result<()> {
// Retrieve file descriptor
let sock_res = run(|| -> anyhow::Result<mio::net::UdpSocket> {
let sock = req_fds
.pop_front()
.context("Invalid request socket missing.")?;
// TODO: We need to have this outside linux
#[cfg(target_os = "linux")]
rosenpass_util::fd::GetSocketProtocol::demand_udp_socket(&sock)?;
let sock = std::net::UdpSocket::from(sock);
sock.set_nonblocking(true)?;
mio::net::UdpSocket::from_std(sock).ok()
});
let sock = match sock_res {
Ok(sock) => sock,
Err(e) => {
log::debug!("Error processing AddListenSocket API request: {e:?}");
res.payload.status = add_listen_socket_response_status::INVALID_REQUEST;
return Ok(());
}
};
// Register socket
let reg_result = self.app_server_mut().register_listen_socket(sock);
if let Err(internal_error) = reg_result {
log::warn!("Internal error processing AddListenSocket API request: {internal_error:?}");
res.payload.status = add_listen_socket_response_status::INTERNAL_ERROR;
return Ok(());
};
res.payload.status = add_listen_socket_response_status::OK;
Ok(())
}
fn add_psk_broker(
&mut self,
_req: &super::boilerplate::AddPskBrokerRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::boilerplate::AddPskBrokerResponse,
) -> anyhow::Result<()> {
// Retrieve file descriptor
let sock_res = run(|| {
let sock = req_fds
.pop_front()
.context("Invalid request socket missing.")?;
mio::net::UnixStream::from_fd(sock)
});
// Handle errors
let sock = match sock_res {
Ok(sock) => sock,
Err(e) => {
log::debug!(
"Request found to be invalid while processing AddPskBroker API request: {e:?}"
);
res.payload.status = add_psk_broker_response_status::INVALID_REQUEST;
return Ok(());
}
};
// Register Socket
let client = Box::new(MioBrokerClient::new(sock));
// Workaround: The broker code is currently impressively overcomplicated. Brokers are
// stored in a hash map, but the key used is just a counter, so a vector could have been used.
// Broker configuration is abstracted and different peers can have different brokers, but there
// is no facility to add multiple brokers in practice. The broker index
// uses a `Public` wrapper without actually holding any cryptographic data. Even the broker
// configuration uses a trait abstraction for no discernible reason and a lot of the code
// introduces pointless, single-field wrapper structs.
// We should use an implement-what-is-actually-needed strategy next time.
// The broker code needs to be slimmed down; the right direction to go is probably to
// just add event and capability support to the API and use the API to deliver OSK events.
//
// For now, we just replace the latest broker.
let erase_ptr = {
use crate::app_server::BrokerStorePtr;
//
use rosenpass_secret_memory::Public;
use zerocopy::AsBytes;
(self.app_server().brokers.store.len() - 1)
.apply(|x| x as u64)
.apply(|x| Public::from_slice(x.as_bytes()))
.apply(BrokerStorePtr)
};
let register_result = run(|| {
let srv = self.app_server_mut();
srv.unregister_broker(erase_ptr)?;
srv.register_broker(client)
});
if let Err(e) = register_result {
log::warn!("Internal error while processing AddPskBroker API request: {e:?}");
res.payload.status = add_psk_broker_response_status::INTERNAL_ERROR;
return Ok(());
}
res.payload.status = add_psk_broker_response_status::OK;
Ok(())
}
}

View File

@@ -4,7 +4,7 @@ use rosenpass_util::zerocopy::{RefMaker, ZerocopySliceExt};
use super::{
PingRequest, PingResponse, RawMsgType, RefMakerRawMsgTypeExt, RequestMsgType, RequestRef,
ResponseMsgType, ResponseRef,
ResponseMsgType, ResponseRef, SupplyKeypairRequest, SupplyKeypairResponse,
};
pub trait ByteSliceRefExt: ByteSlice {
@@ -111,6 +111,112 @@ pub trait ByteSliceRefExt: ByteSlice {
fn ping_response_from_suffix(self) -> anyhow::Result<Ref<Self, PingResponse>> {
self.zk_parse_suffix()
}
fn supply_keypair_request(self) -> anyhow::Result<Ref<Self, SupplyKeypairRequest>> {
self.zk_parse()
}
fn supply_keypair_request_from_prefix(self) -> anyhow::Result<Ref<Self, SupplyKeypairRequest>> {
self.zk_parse_prefix()
}
fn supply_keypair_request_from_suffix(self) -> anyhow::Result<Ref<Self, SupplyKeypairRequest>> {
self.zk_parse_suffix()
}
fn supply_keypair_response_maker(self) -> RefMaker<Self, SupplyKeypairResponse> {
self.zk_ref_maker()
}
fn supply_keypair_response(self) -> anyhow::Result<Ref<Self, SupplyKeypairResponse>> {
self.zk_parse()
}
fn supply_keypair_response_from_prefix(
self,
) -> anyhow::Result<Ref<Self, SupplyKeypairResponse>> {
self.zk_parse_prefix()
}
fn supply_keypair_response_from_suffix(
self,
) -> anyhow::Result<Ref<Self, SupplyKeypairResponse>> {
self.zk_parse_suffix()
}
fn add_listen_socket_request(self) -> anyhow::Result<Ref<Self, super::AddListenSocketRequest>> {
self.zk_parse()
}
fn add_listen_socket_request_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketRequest>> {
self.zk_parse_prefix()
}
fn add_listen_socket_request_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketRequest>> {
self.zk_parse_suffix()
}
fn add_listen_socket_response_maker(self) -> RefMaker<Self, super::AddListenSocketResponse> {
self.zk_ref_maker()
}
fn add_listen_socket_response(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketResponse>> {
self.zk_parse()
}
fn add_listen_socket_response_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketResponse>> {
self.zk_parse_prefix()
}
fn add_listen_socket_response_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddListenSocketResponse>> {
self.zk_parse_suffix()
}
fn add_psk_broker_request(self) -> anyhow::Result<Ref<Self, super::AddPskBrokerRequest>> {
self.zk_parse()
}
fn add_psk_broker_request_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerRequest>> {
self.zk_parse_prefix()
}
fn add_psk_broker_request_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerRequest>> {
self.zk_parse_suffix()
}
fn add_psk_broker_response_maker(self) -> RefMaker<Self, super::AddPskBrokerResponse> {
self.zk_ref_maker()
}
fn add_psk_broker_response(self) -> anyhow::Result<Ref<Self, super::AddPskBrokerResponse>> {
self.zk_parse()
}
fn add_psk_broker_response_from_prefix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerResponse>> {
self.zk_parse_prefix()
}
fn add_psk_broker_response_from_suffix(
self,
) -> anyhow::Result<Ref<Self, super::AddPskBrokerResponse>> {
self.zk_parse_suffix()
}
}
impl<B: ByteSlice> ByteSliceRefExt for B {}

View File

@@ -14,6 +14,27 @@ pub const PING_REQUEST: RawMsgType =
pub const PING_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("4ec7 f6f0 2bbc ba64 48f1 da14 c7cf 0260"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Supply Keypair Request
const SUPPLY_KEYPAIR_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("ac91 a5a6 4f4b 21d0 ac7f 9b55 74f7 3529"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Supply Keypair Response
const SUPPLY_KEYPAIR_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("f2dc 49bd e261 5f10 40b7 3c16 ec61 edb9"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Listen Socket Request
const ADD_LISTEN_SOCKET_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("3f21 434f 87cc a08c 02c4 61e4 0816 c7da"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Listen Socket Response
const ADD_LISTEN_SOCKET_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("45d5 0f0d 93f0 6105 98f2 9469 5dfd 5f36"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Psk Broker Request
const ADD_PSK_BROKER_REQUEST: RawMsgType =
RawMsgType::from_le_bytes(hex!("d798 b8dc bd61 5cab 8df1 c63d e4eb a2d1"));
// hash domain hash of: Rosenpass IPC API -> Rosenpass Protocol Server -> Add Psk Broker Response
const ADD_PSK_BROKER_RESPONSE: RawMsgType =
RawMsgType::from_le_bytes(hex!("bd25 e418 ffb0 6930 248b 217e 2fae e353"));
pub trait MessageAttributes {
fn message_size(&self) -> usize;
}
@@ -21,17 +42,26 @@ pub trait MessageAttributes {
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum RequestMsgType {
Ping,
SupplyKeypair,
AddListenSocket,
AddPskBroker,
}
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum ResponseMsgType {
Ping,
SupplyKeypair,
AddListenSocket,
AddPskBroker,
}
impl MessageAttributes for RequestMsgType {
fn message_size(&self) -> usize {
match self {
Self::Ping => std::mem::size_of::<super::PingRequest>(),
Self::SupplyKeypair => std::mem::size_of::<super::SupplyKeypairRequest>(),
Self::AddListenSocket => std::mem::size_of::<super::AddListenSocketRequest>(),
Self::AddPskBroker => std::mem::size_of::<super::AddPskBrokerRequest>(),
}
}
}
@@ -40,6 +70,9 @@ impl MessageAttributes for ResponseMsgType {
fn message_size(&self) -> usize {
match self {
Self::Ping => std::mem::size_of::<super::PingResponse>(),
Self::SupplyKeypair => std::mem::size_of::<super::SupplyKeypairResponse>(),
Self::AddListenSocket => std::mem::size_of::<super::AddListenSocketResponse>(),
Self::AddPskBroker => std::mem::size_of::<super::AddPskBrokerResponse>(),
}
}
}
@@ -51,6 +84,9 @@ impl TryFrom<RawMsgType> for RequestMsgType {
use RequestMsgType as E;
Ok(match value {
self::PING_REQUEST => E::Ping,
self::SUPPLY_KEYPAIR_REQUEST => E::SupplyKeypair,
self::ADD_LISTEN_SOCKET_REQUEST => E::AddListenSocket,
self::ADD_PSK_BROKER_REQUEST => E::AddPskBroker,
_ => return Err(InvalidApiMessageType(value)),
})
}
@@ -61,6 +97,9 @@ impl From<RequestMsgType> for RawMsgType {
use RequestMsgType as E;
match val {
E::Ping => self::PING_REQUEST,
E::SupplyKeypair => self::SUPPLY_KEYPAIR_REQUEST,
E::AddListenSocket => self::ADD_LISTEN_SOCKET_REQUEST,
E::AddPskBroker => self::ADD_PSK_BROKER_REQUEST,
}
}
}
@@ -72,6 +111,9 @@ impl TryFrom<RawMsgType> for ResponseMsgType {
use ResponseMsgType as E;
Ok(match value {
self::PING_RESPONSE => E::Ping,
self::SUPPLY_KEYPAIR_RESPONSE => E::SupplyKeypair,
self::ADD_LISTEN_SOCKET_RESPONSE => E::AddListenSocket,
self::ADD_PSK_BROKER_RESPONSE => E::AddPskBroker,
_ => return Err(InvalidApiMessageType(value)),
})
}
@@ -82,6 +124,9 @@ impl From<ResponseMsgType> for RawMsgType {
use ResponseMsgType as E;
match val {
E::Ping => self::PING_RESPONSE,
E::SupplyKeypair => self::SUPPLY_KEYPAIR_RESPONSE,
E::AddListenSocket => self::ADD_LISTEN_SOCKET_RESPONSE,
E::AddPskBroker => self::ADD_PSK_BROKER_RESPONSE,
}
}
}
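As a quick illustration of the enum/raw-type mapping introduced above, here is a minimal sketch of a test (hypothetical module name; it assumes the constants, enums, and trait impls from this hunk are in scope via `use super::*`):

#[cfg(test)]
mod sketch_msg_type_roundtrip {
    use super::*;

    #[test]
    fn request_types_roundtrip() {
        for ty in [
            RequestMsgType::Ping,
            RequestMsgType::SupplyKeypair,
            RequestMsgType::AddListenSocket,
            RequestMsgType::AddPskBroker,
        ] {
            // enum -> raw wire value -> enum must be the identity
            let raw: RawMsgType = ty.into();
            assert!(matches!(RequestMsgType::try_from(raw), Ok(t) if t == ty));
            // message_size() reports the size of the corresponding request struct
            assert!(ty.message_size() > 0);
        }
    }
}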


@@ -6,6 +6,7 @@ use super::{Message, RawMsgType, RequestMsgType, ResponseMsgType};
/// Size required to fit any message in binary form
pub const MAX_REQUEST_LEN: usize = 2500; // TODO fix this
pub const MAX_RESPONSE_LEN: usize = 2500; // TODO fix this
pub const MAX_REQUEST_FDS: usize = 2;
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
@@ -94,3 +95,259 @@ impl Message for PingResponse {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct SupplyKeypairRequestPayload {}
pub type SupplyKeypairRequest = RequestEnvelope<SupplyKeypairRequestPayload>;
impl Default for SupplyKeypairRequest {
fn default() -> Self {
Self::new()
}
}
impl SupplyKeypairRequest {
pub fn new() -> Self {
Self::from_payload(SupplyKeypairRequestPayload {})
}
}
impl Message for SupplyKeypairRequest {
type Payload = SupplyKeypairRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::SupplyKeypair;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
pub mod supply_keypair_response_status {
pub const OK: u128 = 0;
pub const KEYPAIR_ALREADY_SUPPLIED: u128 = 1;
// TODO: This is not actually part of the API. Remove.
pub const INTERNAL_ERROR: u128 = 2;
pub const INVALID_REQUEST: u128 = 3;
/// TODO: Deprecated, remove
pub const IO_ERROR: u128 = 4;
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct SupplyKeypairResponsePayload {
pub status: u128,
}
pub type SupplyKeypairResponse = ResponseEnvelope<SupplyKeypairResponsePayload>;
impl SupplyKeypairResponse {
pub fn new(status: u128) -> Self {
Self::from_payload(SupplyKeypairResponsePayload { status })
}
}
impl Message for SupplyKeypairResponse {
type Payload = SupplyKeypairResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::SupplyKeypair;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddListenSocketRequestPayload {}
pub type AddListenSocketRequest = RequestEnvelope<AddListenSocketRequestPayload>;
impl Default for AddListenSocketRequest {
fn default() -> Self {
Self::new()
}
}
impl AddListenSocketRequest {
pub fn new() -> Self {
Self::from_payload(AddListenSocketRequestPayload {})
}
}
impl Message for AddListenSocketRequest {
type Payload = AddListenSocketRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::AddListenSocket;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
pub mod add_listen_socket_response_status {
pub const OK: u128 = 0;
pub const INVALID_REQUEST: u128 = 1;
pub const INTERNAL_ERROR: u128 = 2;
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddListenSocketResponsePayload {
pub status: u128,
}
pub type AddListenSocketResponse = ResponseEnvelope<AddListenSocketResponsePayload>;
impl AddListenSocketResponse {
pub fn new(status: u128) -> Self {
Self::from_payload(AddListenSocketResponsePayload { status })
}
}
impl Message for AddListenSocketResponse {
type Payload = AddListenSocketResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::AddListenSocket;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddPskBrokerRequestPayload {}
pub type AddPskBrokerRequest = RequestEnvelope<AddPskBrokerRequestPayload>;
impl Default for AddPskBrokerRequest {
fn default() -> Self {
Self::new()
}
}
impl AddPskBrokerRequest {
pub fn new() -> Self {
Self::from_payload(AddPskBrokerRequestPayload {})
}
}
impl Message for AddPskBrokerRequest {
type Payload = AddPskBrokerRequestPayload;
type MessageClass = RequestMsgType;
const MESSAGE_TYPE: Self::MessageClass = RequestMsgType::AddPskBroker;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
pub mod add_psk_broker_response_status {
pub const OK: u128 = 0;
pub const INVALID_REQUEST: u128 = 1;
pub const INTERNAL_ERROR: u128 = 2;
}
#[repr(packed)]
#[derive(Debug, Copy, Clone, Hash, AsBytes, FromBytes, FromZeroes, PartialEq, Eq)]
pub struct AddPskBrokerResponsePayload {
pub status: u128,
}
pub type AddPskBrokerResponse = ResponseEnvelope<AddPskBrokerResponsePayload>;
impl AddPskBrokerResponse {
pub fn new(status: u128) -> Self {
Self::from_payload(AddPskBrokerResponsePayload { status })
}
}
impl Message for AddPskBrokerResponse {
type Payload = AddPskBrokerResponsePayload;
type MessageClass = ResponseMsgType;
const MESSAGE_TYPE: Self::MessageClass = ResponseMsgType::AddPskBroker;
fn from_payload(payload: Self::Payload) -> Self {
Self {
msg_type: Self::MESSAGE_TYPE.into(),
payload,
}
}
fn setup<B: ByteSliceMut>(buf: B) -> anyhow::Result<Ref<B, Self>> {
let mut r: Ref<B, Self> = buf.zk_zeroized()?;
r.init();
Ok(r)
}
fn init(&mut self) {
self.msg_type = Self::MESSAGE_TYPE.into();
}
}
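A small usage sketch for the payload structs defined above (it assumes this module's items are in scope; the braces around the field access copy the value out of the #[repr(packed)] struct instead of borrowing it):

fn sketch_status_handling() {
    let res = SupplyKeypairResponse::new(supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED);
    // Copy the status out of the packed payload; taking a reference would be rejected
    let status = { res.payload.status };
    assert_eq!(status, supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED);

    // Requests with empty payloads are constructed via new()/Default
    let _req = AddListenSocketRequest::new();
}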


@@ -25,6 +25,9 @@ impl<B: ByteSlice> RequestRef<B> {
pub fn message_type(&self) -> RequestMsgType {
match self {
Self::Ping(_) => RequestMsgType::Ping,
Self::SupplyKeypair(_) => RequestMsgType::SupplyKeypair,
Self::AddListenSocket(_) => RequestMsgType::AddListenSocket,
Self::AddPskBroker(_) => RequestMsgType::AddPskBroker,
}
}
}
@@ -35,6 +38,24 @@ impl<B> From<Ref<B, PingRequest>> for RequestRef<B> {
}
}
impl<B> From<Ref<B, super::SupplyKeypairRequest>> for RequestRef<B> {
fn from(v: Ref<B, super::SupplyKeypairRequest>) -> Self {
Self::SupplyKeypair(v)
}
}
impl<B> From<Ref<B, super::AddListenSocketRequest>> for RequestRef<B> {
fn from(v: Ref<B, super::AddListenSocketRequest>) -> Self {
Self::AddListenSocket(v)
}
}
impl<B> From<Ref<B, super::AddPskBrokerRequest>> for RequestRef<B> {
fn from(v: Ref<B, super::AddPskBrokerRequest>) -> Self {
Self::AddPskBroker(v)
}
}
impl<B: ByteSlice> RequestRefMaker<B> {
fn new(buf: B) -> anyhow::Result<Self> {
let msg_type = buf.deref().request_msg_type_from_prefix()?;
@@ -48,6 +69,15 @@ impl<B: ByteSlice> RequestRefMaker<B> {
fn parse(self) -> anyhow::Result<RequestRef<B>> {
Ok(match self.msg_type {
RequestMsgType::Ping => RequestRef::Ping(self.buf.ping_request()?),
RequestMsgType::SupplyKeypair => {
RequestRef::SupplyKeypair(self.buf.supply_keypair_request()?)
}
RequestMsgType::AddListenSocket => {
RequestRef::AddListenSocket(self.buf.add_listen_socket_request()?)
}
RequestMsgType::AddPskBroker => {
RequestRef::AddPskBroker(self.buf.add_psk_broker_request()?)
}
})
}
@@ -82,6 +112,9 @@ impl<B: ByteSlice> RequestRefMaker<B> {
pub enum RequestRef<B> {
Ping(Ref<B, PingRequest>),
SupplyKeypair(Ref<B, super::SupplyKeypairRequest>),
AddListenSocket(Ref<B, super::AddListenSocketRequest>),
AddPskBroker(Ref<B, super::AddPskBrokerRequest>),
}
impl<B> RequestRef<B>
@@ -91,6 +124,9 @@ where
pub fn bytes(&self) -> &[u8] {
match self {
Self::Ping(r) => r.bytes(),
Self::SupplyKeypair(r) => r.bytes(),
Self::AddListenSocket(r) => r.bytes(),
Self::AddPskBroker(r) => r.bytes(),
}
}
}
@@ -102,6 +138,9 @@ where
pub fn bytes_mut(&mut self) -> &[u8] {
match self {
Self::Ping(r) => r.bytes_mut(),
Self::SupplyKeypair(r) => r.bytes_mut(),
Self::AddListenSocket(r) => r.bytes_mut(),
Self::AddPskBroker(r) => r.bytes_mut(),
}
}
}


@@ -42,10 +42,49 @@ impl ResponseMsg for PingResponse {
type RequestMsg = PingRequest;
}
impl RequestMsg for super::SupplyKeypairRequest {
type ResponseMsg = super::SupplyKeypairResponse;
}
impl ResponseMsg for super::SupplyKeypairResponse {
type RequestMsg = super::SupplyKeypairRequest;
}
impl RequestMsg for super::AddListenSocketRequest {
type ResponseMsg = super::AddListenSocketResponse;
}
impl ResponseMsg for super::AddListenSocketResponse {
type RequestMsg = super::AddListenSocketRequest;
}
impl RequestMsg for super::AddPskBrokerRequest {
type ResponseMsg = super::AddPskBrokerResponse;
}
impl ResponseMsg for super::AddPskBrokerResponse {
type RequestMsg = super::AddPskBrokerRequest;
}
pub type PingPair<B1, B2> = (Ref<B1, PingRequest>, Ref<B2, PingResponse>);
pub type SupplyKeypairPair<B1, B2> = (
Ref<B1, super::SupplyKeypairRequest>,
Ref<B2, super::SupplyKeypairResponse>,
);
pub type AddListenSocketPair<B1, B2> = (
Ref<B1, super::AddListenSocketRequest>,
Ref<B2, super::AddListenSocketResponse>,
);
pub type AddPskBrokerPair<B1, B2> = (
Ref<B1, super::AddPskBrokerRequest>,
Ref<B2, super::AddPskBrokerResponse>,
);
pub enum RequestResponsePair<B1, B2> {
Ping(PingPair<B1, B2>),
SupplyKeypair(SupplyKeypairPair<B1, B2>),
AddListenSocket(AddListenSocketPair<B1, B2>),
AddPskBroker(AddPskBrokerPair<B1, B2>),
}
impl<B1, B2> From<PingPair<B1, B2>> for RequestResponsePair<B1, B2> {
@@ -54,6 +93,24 @@ impl<B1, B2> From<PingPair<B1, B2>> for RequestResponsePair<B1, B2> {
}
}
impl<B1, B2> From<SupplyKeypairPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: SupplyKeypairPair<B1, B2>) -> Self {
RequestResponsePair::SupplyKeypair(v)
}
}
impl<B1, B2> From<AddListenSocketPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: AddListenSocketPair<B1, B2>) -> Self {
RequestResponsePair::AddListenSocket(v)
}
}
impl<B1, B2> From<AddPskBrokerPair<B1, B2>> for RequestResponsePair<B1, B2> {
fn from(v: AddPskBrokerPair<B1, B2>) -> Self {
RequestResponsePair::AddPskBroker(v)
}
}
impl<B1, B2> RequestResponsePair<B1, B2>
where
B1: ByteSlice,
@@ -66,6 +123,21 @@ where
let res = ResponseRef::Ping(res.emancipate());
(req, res)
}
Self::SupplyKeypair((req, res)) => {
let req = RequestRef::SupplyKeypair(req.emancipate());
let res = ResponseRef::SupplyKeypair(res.emancipate());
(req, res)
}
Self::AddListenSocket((req, res)) => {
let req = RequestRef::AddListenSocket(req.emancipate());
let res = ResponseRef::AddListenSocket(res.emancipate());
(req, res)
}
Self::AddPskBroker((req, res)) => {
let req = RequestRef::AddPskBroker(req.emancipate());
let res = ResponseRef::AddPskBroker(res.emancipate());
(req, res)
}
}
}
@@ -90,6 +162,21 @@ where
let res = ResponseRef::Ping(res.emancipate_mut());
(req, res)
}
Self::SupplyKeypair((req, res)) => {
let req = RequestRef::SupplyKeypair(req.emancipate_mut());
let res = ResponseRef::SupplyKeypair(res.emancipate_mut());
(req, res)
}
Self::AddListenSocket((req, res)) => {
let req = RequestRef::AddListenSocket(req.emancipate_mut());
let res = ResponseRef::AddListenSocket(res.emancipate_mut());
(req, res)
}
Self::AddPskBroker((req, res)) => {
let req = RequestRef::AddPskBroker(req.emancipate_mut());
let res = ResponseRef::AddPskBroker(res.emancipate_mut());
(req, res)
}
}
}
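For illustration, a hedged sketch of how these From impls are typically consumed (hypothetical helper; it relies only on the conversions defined in this hunk and assumes the module's items are in scope):

// Any concrete (request, response) pair converts into the unified
// RequestResponsePair through the From impls above.
fn into_pair<B1, B2>(
    req: Ref<B1, PingRequest>,
    res: Ref<B2, PingResponse>,
) -> RequestResponsePair<B1, B2> {
    (req, res).into()
}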


@@ -26,6 +26,9 @@ impl<B: ByteSlice> ResponseRef<B> {
pub fn message_type(&self) -> ResponseMsgType {
match self {
Self::Ping(_) => ResponseMsgType::Ping,
Self::SupplyKeypair(_) => ResponseMsgType::SupplyKeypair,
Self::AddListenSocket(_) => ResponseMsgType::AddListenSocket,
Self::AddPskBroker(_) => ResponseMsgType::AddPskBroker,
}
}
}
@@ -36,6 +39,24 @@ impl<B> From<Ref<B, PingResponse>> for ResponseRef<B> {
}
}
impl<B> From<Ref<B, super::SupplyKeypairResponse>> for ResponseRef<B> {
fn from(v: Ref<B, super::SupplyKeypairResponse>) -> Self {
Self::SupplyKeypair(v)
}
}
impl<B> From<Ref<B, super::AddListenSocketResponse>> for ResponseRef<B> {
fn from(v: Ref<B, super::AddListenSocketResponse>) -> Self {
Self::AddListenSocket(v)
}
}
impl<B> From<Ref<B, super::AddPskBrokerResponse>> for ResponseRef<B> {
fn from(v: Ref<B, super::AddPskBrokerResponse>) -> Self {
Self::AddPskBroker(v)
}
}
impl<B: ByteSlice> ResponseRefMaker<B> {
fn new(buf: B) -> anyhow::Result<Self> {
let msg_type = buf.deref().response_msg_type_from_prefix()?;
@@ -49,6 +70,15 @@ impl<B: ByteSlice> ResponseRefMaker<B> {
fn parse(self) -> anyhow::Result<ResponseRef<B>> {
Ok(match self.msg_type {
ResponseMsgType::Ping => ResponseRef::Ping(self.buf.ping_response()?),
ResponseMsgType::SupplyKeypair => {
ResponseRef::SupplyKeypair(self.buf.supply_keypair_response()?)
}
ResponseMsgType::AddListenSocket => {
ResponseRef::AddListenSocket(self.buf.add_listen_socket_response()?)
}
ResponseMsgType::AddPskBroker => {
ResponseRef::AddPskBroker(self.buf.add_psk_broker_response()?)
}
})
}
@@ -83,6 +113,9 @@ impl<B: ByteSlice> ResponseRefMaker<B> {
pub enum ResponseRef<B> {
Ping(Ref<B, PingResponse>),
SupplyKeypair(Ref<B, super::SupplyKeypairResponse>),
AddListenSocket(Ref<B, super::AddListenSocketResponse>),
AddPskBroker(Ref<B, super::AddPskBrokerResponse>),
}
impl<B> ResponseRef<B>
@@ -92,6 +125,9 @@ where
pub fn bytes(&self) -> &[u8] {
match self {
Self::Ping(r) => r.bytes(),
Self::SupplyKeypair(r) => r.bytes(),
Self::AddListenSocket(r) => r.bytes(),
Self::AddPskBroker(r) => r.bytes(),
}
}
}
@@ -103,6 +139,9 @@ where
pub fn bytes_mut(&mut self) -> &[u8] {
match self {
Self::Ping(r) => r.bytes_mut(),
Self::SupplyKeypair(r) => r.bytes_mut(),
Self::AddListenSocket(r) => r.bytes_mut(),
Self::AddPskBroker(r) => r.bytes_mut(),
}
}
}


@@ -1,24 +1,128 @@
use super::{ByteSliceRefExt, Message, PingRequest, PingResponse, RequestRef, RequestResponsePair};
use std::{collections::VecDeque, os::fd::OwnedFd};
use zerocopy::{ByteSlice, ByteSliceMut};
use super::{ByteSliceRefExt, Message, PingRequest, PingResponse, RequestRef, RequestResponsePair};
pub trait Server {
fn ping(&mut self, req: &PingRequest, res: &mut PingResponse) -> anyhow::Result<()>;
/// This implements the handler for the [crate::api::RequestMsgType::Ping] API message
///
/// It merely takes a buffer and returns that same buffer.
fn ping(
&mut self,
req: &PingRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut PingResponse,
) -> anyhow::Result<()>;
/// Supply the cryptographic server keypair through file descriptor passing in the API
///
/// This implements the handler for the [crate::api::RequestMsgType::SupplyKeypair] API message.
///
/// # File descriptors
///
/// 1. The secret key (size must match exactly); the file descriptor must be backed by one of:
/// - a file-system file
/// - a [memfd](https://man.archlinux.org/man/memfd.2.en)
/// - a [memfd_secret](https://man.archlinux.org/man/memfd_secret.2.en)
/// 2. The public key (size must match exactly); the file descriptor must be backed by one of:
/// - a file-system file
/// - a [memfd](https://man.archlinux.org/man/memfd.2.en)
/// - a [memfd_secret](https://man.archlinux.org/man/memfd_secret.2.en)
///
/// # API Return Status
///
/// 1. [crate::api::supply_keypair_response_status::OK] - Indicates success
/// 2. [crate::api::supply_keypair_response_status::KEYPAIR_ALREADY_SUPPLIED] - The endpoint was
/// called, but the server already has a keypair
/// 3. [crate::api::supply_keypair_response_status::INVALID_REQUEST] - Malformed request; possible causes:
/// - Missing file descriptors for the secret or public key
/// - File descriptors contain data of invalid length
/// - Invalid file descriptor type
///
/// # Description
///
/// At startup, if no server keys are specified in the rosenpass configuration and the API
/// is enabled, the Rosenpass process waits for server keys to be supplied via the API. Until
/// then, any messages for the Rosenpass cryptographic protocol are ignored and dropped, since
/// all cryptographic operations require access to the server keys.
///
/// Both private and public keys are specified through file descriptors and both are read from
/// their respective file descriptors into process memory. A file descriptor based transport is
/// used because of the excessive size of Classic McEliece public keys (100kb and up).
///
/// The file descriptors for the keys need not be backed by a file on disk. You can supply a
/// [memfd](https://man.archlinux.org/man/memfd.2.en) or [memfd_secret](https://man.archlinux.org/man/memfd_secret.2.en)
/// backed file descriptor if the server keys are not backed by a file system file.
fn supply_keypair(
&mut self,
req: &super::SupplyKeypairRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::SupplyKeypairResponse,
) -> anyhow::Result<()>;
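As a purely illustrative, hedged sketch of the client side described above (the helper name and paths are hypothetical; the actual transfer of the descriptors over the API socket uses SCM_RIGHTS file-descriptor passing and is not shown):

use std::fs::File;
use std::os::fd::OwnedFd;

// Prepare the two descriptors attached to a SupplyKeypairRequest.
// Order matters: the secret key descriptor first, the public key second.
fn open_keypair_fds(sk_path: &str, pk_path: &str) -> std::io::Result<(OwnedFd, OwnedFd)> {
    let sk = OwnedFd::from(File::open(sk_path)?);
    let pk = OwnedFd::from(File::open(pk_path)?);
    Ok((sk, pk))
}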
/// Supply a new UDP listen socket through file descriptor passing via the API
///
/// This implements the handler for the [crate::api::RequestMsgType::AddListenSocket] API message.
///
/// # File descriptors
///
/// 1. The listen socket; must be backed by a UDP network listen socket
///
/// # API Return Status
///
/// 1. [crate::api::add_listen_socket_response_status::OK] - Indicates success
/// 2. [crate::api::add_listen_socket_response_status::INVALID_REQUEST] - Malformed request; possible causes:
/// - Missing file descriptor for the listen socket
/// - Invalid file descriptor type
/// 3. [crate::api::add_listen_socket_response_status::INTERNAL_ERROR] - Some other, non-fatal error
/// occurred; check the logs for details
///
/// # Description
///
/// This endpoint allows you to supply a UDP listen socket; it will be used to perform
/// cryptographic key exchanges via the Rosenpass protocol.
fn add_listen_socket(
&mut self,
req: &super::AddListenSocketRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::AddListenSocketResponse,
) -> anyhow::Result<()>;
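A minimal, hedged sketch of how an implementor might consume the descriptor (hypothetical handler body, not the crate's actual implementation; it assumes it sits inside an `impl Server for ...` block with the trait's imports in scope):

fn add_listen_socket(
    &mut self,
    _req: &super::AddListenSocketRequest,
    req_fds: &mut VecDeque<OwnedFd>,
    res: &mut super::AddListenSocketResponse,
) -> anyhow::Result<()> {
    use super::add_listen_socket_response_status as status;
    // Exactly one descriptor is expected: the UDP listen socket
    let Some(fd) = req_fds.pop_front() else {
        res.payload.status = status::INVALID_REQUEST;
        return Ok(());
    };
    let sock = std::net::UdpSocket::from(fd);
    sock.set_nonblocking(true)?;
    // ... hand `sock` over to the server's IO layer here ...
    res.payload.status = status::OK;
    Ok(())
}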
fn add_psk_broker(
&mut self,
req: &super::AddPskBrokerRequest,
req_fds: &mut VecDeque<OwnedFd>,
res: &mut super::AddPskBrokerResponse,
) -> anyhow::Result<()>;
fn dispatch<ReqBuf, ResBuf>(
&mut self,
p: &mut RequestResponsePair<ReqBuf, ResBuf>,
req_fds: &mut VecDeque<OwnedFd>,
) -> anyhow::Result<()>
where
ReqBuf: ByteSlice,
ResBuf: ByteSliceMut,
{
match p {
RequestResponsePair::Ping((req, res)) => self.ping(req, res),
RequestResponsePair::Ping((req, res)) => self.ping(req, req_fds, res),
RequestResponsePair::SupplyKeypair((req, res)) => {
self.supply_keypair(req, req_fds, res)
}
RequestResponsePair::AddListenSocket((req, res)) => {
self.add_listen_socket(req, req_fds, res)
}
RequestResponsePair::AddPskBroker((req, res)) => self.add_psk_broker(req, req_fds, res),
}
}
fn handle_message<ReqBuf, ResBuf>(&mut self, req: ReqBuf, res: ResBuf) -> anyhow::Result<usize>
fn handle_message<ReqBuf, ResBuf>(
&mut self,
req: ReqBuf,
req_fds: &mut VecDeque<OwnedFd>,
res: ResBuf,
) -> anyhow::Result<usize>
where
ReqBuf: ByteSlice,
ResBuf: ByteSliceMut,
@@ -31,10 +135,25 @@ pub trait Server {
res.init();
RequestResponsePair::Ping((req, res))
}
RequestRef::SupplyKeypair(req) => {
let mut res = res.supply_keypair_response_from_prefix()?;
res.init();
RequestResponsePair::SupplyKeypair((req, res))
}
RequestRef::AddListenSocket(req) => {
let mut res = res.add_listen_socket_response_from_prefix()?;
res.init();
RequestResponsePair::AddListenSocket((req, res))
}
RequestRef::AddPskBroker(req) => {
let mut res = res.add_psk_broker_response_from_prefix()?;
res.init();
RequestResponsePair::AddPskBroker((req, res))
}
};
self.dispatch(&mut pair)?;
self.dispatch(&mut pair, req_fds)?;
let res_len = pair.request().bytes().len();
let res_len = pair.response().bytes().len();
Ok(res_len)
}
}


@@ -38,4 +38,12 @@ impl ApiConfig {
Ok(())
}
pub fn count_api_sources(&self) -> usize {
self.listen_path.len() + self.listen_fd.len() + self.stream_fd.len()
}
pub fn has_api_sources(&self) -> bool {
self.count_api_sources() > 0
}
}
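A hedged usage sketch of the new helpers (the surrounding startup logic and the `api_cfg` binding are hypothetical):

// Only wire up the API event loop when the configuration declares a source.
fn maybe_enable_api(api_cfg: &ApiConfig) {
    if api_cfg.has_api_sources() {
        log::info!("enabling API with {} source(s)", api_cfg.count_api_sources());
        // ... set up listen paths, listen fds, and stream fds here ...
    }
}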


@@ -1,44 +0,0 @@
use rosenpass_to::{ops::copy_slice, To};
use crate::protocol::CryptoServer;
use super::Server as ApiServer;
#[derive(Debug)]
pub struct CryptoServerApiState {
_dummy: (),
}
impl CryptoServerApiState {
#[allow(clippy::new_without_default)]
pub fn new() -> Self {
Self { _dummy: () }
}
pub fn acquire_backend<'a>(
&'a mut self,
crypto: &'a mut Option<CryptoServer>,
) -> CryptoServerApiHandler<'a> {
let state = self;
CryptoServerApiHandler { state, crypto }
}
}
pub struct CryptoServerApiHandler<'a> {
#[allow(unused)] // TODO: Remove
crypto: &'a mut Option<CryptoServer>,
#[allow(unused)] // TODO: Remove
state: &'a mut CryptoServerApiState,
}
impl<'a> ApiServer for CryptoServerApiHandler<'a> {
fn ping(
&mut self,
req: &super::PingRequest,
res: &mut super::PingResponse,
) -> anyhow::Result<()> {
let (req, res) = (&req.payload, &mut res.payload);
copy_slice(&req.echo).to(&mut res.echo);
Ok(())
}
}


@@ -1,103 +1,188 @@
use mio::{net::UnixStream, Interest};
use std::borrow::{Borrow, BorrowMut};
use std::collections::VecDeque;
use std::os::fd::OwnedFd;
use mio::net::UnixStream;
use rosenpass_secret_memory::Secret;
use rosenpass_util::mio::ReadWithFileDescriptors;
use rosenpass_util::{
io::{IoResultKindHintExt, TryIoResultKindHintExt},
length_prefix_encoding::{
decoder::{self as lpe_decoder, LengthPrefixDecoder},
encoder::{self as lpe_encoder, LengthPrefixEncoder},
},
mio::interest::RW as MIO_RW,
};
use zeroize::Zeroize;
use crate::{api::Server, app_server::MioTokenDispenser, protocol::CryptoServer};
use crate::api::MAX_REQUEST_FDS;
use crate::{api::Server, app_server::AppServer};
use super::super::{CryptoServerApiState, MAX_REQUEST_LEN, MAX_RESPONSE_LEN};
use super::super::{ApiHandler, ApiHandlerContext};
#[derive(Debug)]
struct SecretBuffer<const N: usize>(pub Secret<N>);
impl<const N: usize> SecretBuffer<N> {
fn new() -> Self {
Self(Secret::zero())
}
}
impl<const N: usize> Borrow<[u8]> for SecretBuffer<N> {
fn borrow(&self) -> &[u8] {
self.0.secret()
}
}
impl<const N: usize> BorrowMut<[u8]> for SecretBuffer<N> {
fn borrow_mut(&mut self) -> &mut [u8] {
self.0.secret_mut()
}
}
// TODO: Unfortunately, zerocopy is quite particular about alignment, hence the 4096
type ReadBuffer = LengthPrefixDecoder<SecretBuffer<4096>>;
type WriteBuffer = LengthPrefixEncoder<SecretBuffer<4096>>;
type ReadFdBuffer = VecDeque<OwnedFd>;
#[derive(Debug)]
struct MioConnectionBuffers {
read_buffer: ReadBuffer,
write_buffer: WriteBuffer,
read_fd_buffer: ReadFdBuffer,
}
#[derive(Debug)]
pub struct MioConnection {
io: UnixStream,
mio_token: mio::Token,
invalid_read: bool,
read_buffer: LengthPrefixDecoder<[u8; MAX_REQUEST_LEN]>,
write_buffer: LengthPrefixEncoder<[u8; MAX_RESPONSE_LEN]>,
api_state: CryptoServerApiState,
buffers: Option<MioConnectionBuffers>,
api_handler: ApiHandler,
}
impl MioConnection {
pub fn new(
mut io: UnixStream,
registry: &mio::Registry,
token_dispenser: &mut MioTokenDispenser, // TODO: We should actually start using tokens…
) -> std::io::Result<Self> {
registry.register(
&mut io,
token_dispenser.dispense(),
Interest::READABLE | Interest::WRITABLE,
)?;
pub fn new(app_server: &mut AppServer, mut io: UnixStream) -> std::io::Result<Self> {
let mio_token = app_server.mio_token_dispenser.dispense();
app_server
.mio_poll
.registry()
.register(&mut io, mio_token, MIO_RW)?;
let invalid_read = false;
let read_buffer = LengthPrefixDecoder::new([0u8; MAX_REQUEST_LEN]);
let write_buffer = LengthPrefixEncoder::from_buffer([0u8; MAX_RESPONSE_LEN]);
let api_state = CryptoServerApiState::new();
Ok(Self {
io,
invalid_read,
let read_buffer = LengthPrefixDecoder::new(SecretBuffer::new());
let write_buffer = LengthPrefixEncoder::from_buffer(SecretBuffer::new());
let read_fd_buffer = VecDeque::new();
let buffers = Some(MioConnectionBuffers {
read_buffer,
write_buffer,
api_state,
read_fd_buffer,
});
let api_state = ApiHandler::new();
Ok(Self {
io,
mio_token,
invalid_read,
buffers,
api_handler: api_state,
})
}
pub fn poll(&mut self, crypto: &mut Option<CryptoServer>) -> anyhow::Result<()> {
self.flush_write_buffer()?;
if self.write_buffer.exhausted() {
self.recv(crypto)?;
}
pub fn should_close(&self) -> bool {
let exhausted = self
.buffers
.as_ref()
.map(|b| b.write_buffer.exhausted())
.unwrap_or(false);
self.invalid_read && exhausted
}
pub fn close(mut self, app_server: &mut AppServer) -> anyhow::Result<()> {
app_server.mio_poll.registry().deregister(&mut self.io)?;
Ok(())
}
// This is *exclusively* called by recv if the read_buffer holds a message
fn handle_incoming_message(&mut self, crypto: &mut Option<CryptoServer>) -> anyhow::Result<()> {
// Unwrap is allowed because recv() confirms before the call that a message was
// received
let req = self.read_buffer.message().unwrap().unwrap();
pub fn mio_token(&self) -> mio::Token {
self.mio_token
}
}
// TODO: The API should not return anyhow::Result
let response_len = self
.api_state
.acquire_backend(crypto)
.handle_message(req, self.write_buffer.buffer_bytes_mut())?;
self.read_buffer.zeroize(); // clear for new message to read
self.write_buffer
.restart_write_with_new_message(response_len)?;
pub trait MioConnectionContext {
fn mio_connection(&self) -> &MioConnection;
fn app_server(&self) -> &AppServer;
fn mio_connection_mut(&mut self) -> &mut MioConnection;
fn app_server_mut(&mut self) -> &mut AppServer;
fn poll(&mut self) -> anyhow::Result<()> {
macro_rules! short {
($e:expr) => {
match $e {
None => return Ok(()),
Some(()) => {}
}
};
}
// All of these functions return an error, None ("operation incomplete"),
// or Some(()) ("operation complete, keep processing")
short!(self.flush_write_buffer()?); // Flush last message
short!(self.recv()?); // Receive new message
short!(self.handle_incoming_message()?); // Process new message with API
short!(self.flush_write_buffer()?); // Begin flushing response
self.flush_write_buffer()?;
Ok(())
}
fn flush_write_buffer(&mut self) -> anyhow::Result<()> {
if self.write_buffer.exhausted() {
return Ok(());
fn handle_incoming_message(&mut self) -> anyhow::Result<Option<()>> {
self.with_buffers_stolen(|this, bufs| {
// Acquire request & response. Caller is responsible to make sure
// that read buffer holds a message and that write buffer is cleared.
// Hence the unwraps and assertions
assert!(bufs.write_buffer.exhausted());
let req = bufs.read_buffer.message().unwrap().unwrap();
let req_fds = &mut bufs.read_fd_buffer;
let res = bufs.write_buffer.buffer_bytes_mut();
// Call API handler
// Transitive trait implementations: MioConnectionContext -> ApiHandlerContext -> as ApiServer
let response_len = this.handle_message(req, req_fds, res)?;
bufs.write_buffer
.restart_write_with_new_message(response_len)?;
bufs.read_buffer.zeroize(); // clear for new message to read
bufs.read_fd_buffer.clear();
Ok(Some(()))
})
}
fn flush_write_buffer(&mut self) -> anyhow::Result<Option<()>> {
if self.write_buf_mut().exhausted() {
return Ok(Some(()));
}
use lpe_encoder::WriteToIoReturn as Ret;
use std::io::ErrorKind as K;
loop {
use lpe_encoder::WriteToIoReturn as Ret;
use std::io::ErrorKind as K;
let conn = self.mio_connection_mut();
let bufs = conn.buffers.as_mut().unwrap();
match self
.write_buffer
.write_to_stdio(&self.io)
.io_err_kind_hint()
{
let sock = &conn.io;
let write_buf = &mut bufs.write_buffer;
match write_buf.write_to_stdio(sock).io_err_kind_hint() {
// Done
Ok(Ret { done: true, .. }) => {
self.write_buffer.zeroize(); // clear for new message to write
break;
write_buf.zeroize(); // clear for new message to write
break Ok(Some(()));
}
// Would block
Ok(Ret {
bytes_written: 0, ..
}) => break,
Err((_e, K::WouldBlock)) => break,
}) => break Ok(None),
Err((_e, K::WouldBlock)) => break Ok(None),
// Just continue
Ok(_) => continue, /* Ret { bytes_written > 0, done = false } acc. to previous cases*/
@@ -107,22 +192,31 @@ impl MioConnection {
Err((e, _ek)) => Err(e)?,
}
}
Ok(())
}
fn recv(&mut self, crypto: &mut Option<CryptoServer>) -> anyhow::Result<()> {
if !self.write_buffer.exhausted() || self.invalid_read {
return Ok(());
fn recv(&mut self) -> anyhow::Result<Option<()>> {
if !self.write_buf_mut().exhausted() || self.mio_connection().invalid_read {
return Ok(None);
}
loop {
use lpe_decoder::{ReadFromIoError as E, ReadFromIoReturn as Ret};
use std::io::ErrorKind as K;
use lpe_decoder::{ReadFromIoError as E, ReadFromIoReturn as Ret};
use std::io::ErrorKind as K;
match self
.read_buffer
.read_from_stdio(&self.io)
loop {
let conn = self.mio_connection_mut();
let bufs = conn.buffers.as_mut().unwrap();
let read_buf = &mut bufs.read_buffer;
let read_fd_buf = &mut bufs.read_fd_buffer;
let sock = &conn.io;
let fd_passing_sock = ReadWithFileDescriptors::<MAX_REQUEST_FDS, UnixStream, _, _>::new(
sock,
read_fd_buf,
);
match read_buf
.read_from_stdio(fd_passing_sock)
.try_io_err_kind_hint()
{
// We actually received a proper message
@@ -130,38 +224,98 @@ impl MioConnection {
Ok(Ret {
message: Some(_msg),
..
}) => {}
}) => break Ok(Some(())),
// Message does not fit in buffer
Err((e @ E::MessageTooLargeError { .. }, _)) => {
log::warn!("Received message on API that was too big to fit in our buffers; \
looks like the client is broken. Stopping to process messages of the client.\n\
Error: {e:?}");
// TODO: We should properly close down the socket in this case, but to do that,
// we need to have the facilities in the Rosenpass IO handling system to close
// open connections.
// Just leaving the API connections dangling for now.
// This should be fixed for non-experimental use of the API.
self.invalid_read = true;
break;
looks like the client is broken. Stopping to process messages of the client.\n\
Error: {e:?}");
conn.invalid_read = true; // closed later by mio_manager
break Ok(None);
}
// Would block
Ok(Ret { bytes_read: 0, .. }) => break,
Err((_, Some(K::WouldBlock))) => break,
Ok(Ret { bytes_read: 0, .. }) => break Ok(None),
Err((_, Some(K::WouldBlock))) => break Ok(None),
// Just keep going
Ok(Ret { bytes_read: _, .. }) => continue,
Err((_, Some(K::Interrupted))) => continue,
// Other IO Error (just pass on to the caller)
Err((E::IoError(e), _)) => Err(e)?,
Err((E::IoError(e), _)) => {
log::warn!(
"IO error while trying to read message from API socket. \
The connection is broken. Stopping to process messages of the client.\n\
Error: {e:?}"
);
conn.invalid_read = true; // closed later by mio_manager
break Err(e.into());
}
};
self.handle_incoming_message(crypto)?;
break; // Handle just one message, leave some room for other IO handlers
}
}
Ok(())
fn mio_token(&self) -> mio::Token {
self.mio_connection().mio_token()
}
fn should_close(&self) -> bool {
self.mio_connection().should_close()
}
}
trait MioConnectionContextPrivate: MioConnectionContext {
fn steal_buffers(&mut self) -> MioConnectionBuffers {
self.mio_connection_mut().buffers.take().unwrap()
}
fn return_buffers(&mut self, buffers: MioConnectionBuffers) {
let opt = &mut self.mio_connection_mut().buffers;
assert!(opt.is_none());
let _ = opt.insert(buffers);
}
fn with_buffers_stolen<R, F: FnOnce(&mut Self, &mut MioConnectionBuffers) -> R>(
&mut self,
f: F,
) -> R {
let mut bufs = self.steal_buffers();
let res = f(self, &mut bufs);
self.return_buffers(bufs);
res
}
fn write_buf_mut(&mut self) -> &mut WriteBuffer {
self.mio_connection_mut()
.buffers
.as_mut()
.unwrap()
.write_buffer
.borrow_mut()
}
}
impl<T> MioConnectionContextPrivate for T where T: ?Sized + MioConnectionContext {}
impl<T> ApiHandlerContext for T
where
T: ?Sized + MioConnectionContext,
{
fn api_handler(&self) -> &ApiHandler {
&self.mio_connection().api_handler
}
fn app_server(&self) -> &AppServer {
MioConnectionContext::app_server(self)
}
fn api_handler_mut(&mut self) -> &mut ApiHandler {
&mut self.mio_connection_mut().api_handler
}
fn app_server_mut(&mut self) -> &mut AppServer {
MioConnectionContext::app_server_mut(self)
}
}
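To make the steal/return pattern used by with_buffers_stolen above easier to follow, here is a standalone sketch of the same idea (illustrative types only, not the crate's code):

struct Holder {
    buf: Option<Vec<u8>>,
}

impl Holder {
    // Move the buffer out so `self` and the buffer can be borrowed independently,
    // run the closure, then put the buffer back.
    fn with_buf<R>(&mut self, f: impl FnOnce(&mut Self, &mut Vec<u8>) -> R) -> R {
        let mut buf = self.buf.take().expect("buffer must be present");
        let res = f(self, &mut buf);
        assert!(self.buf.is_none());
        self.buf = Some(buf);
        res
    }
}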


@@ -1,82 +1,120 @@
use std::io;
use std::{borrow::BorrowMut, io};
use mio::net::{UnixListener, UnixStream};
use rosenpass_util::{io::nonblocking_handle_io_errors, mio::interest::RW as MIO_RW};
use rosenpass_util::{
functional::ApplyExt, io::nonblocking_handle_io_errors, mio::interest::RW as MIO_RW,
};
use crate::{app_server::MioTokenDispenser, protocol::CryptoServer};
use crate::app_server::{AppServer, AppServerIoSource};
use super::MioConnection;
use super::{MioConnection, MioConnectionContext};
#[derive(Default, Debug)]
pub struct MioManager {
listeners: Vec<UnixListener>,
connections: Vec<MioConnection>,
connections: Vec<Option<MioConnection>>,
}
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub enum MioManagerIoSource {
Listener(usize),
Connection(usize),
}
impl MioManager {
pub fn new() -> Self {
Self::default()
}
}
struct MioConnectionFocus<'a, T: ?Sized + MioManagerContext> {
ctx: &'a mut T,
conn_idx: usize,
}
impl<'a, T: ?Sized + MioManagerContext> MioConnectionFocus<'a, T> {
fn new(ctx: &'a mut T, conn_idx: usize) -> Self {
Self { ctx, conn_idx }
}
}
pub trait MioManagerContext {
fn mio_manager(&self) -> &MioManager;
fn mio_manager_mut(&mut self) -> &mut MioManager;
fn app_server(&self) -> &AppServer;
fn app_server_mut(&mut self) -> &mut AppServer;
fn add_listener(&mut self, mut listener: UnixListener) -> io::Result<()> {
let srv = self.app_server_mut();
let mio_token = srv.mio_token_dispenser.dispense();
srv.mio_poll
.registry()
.register(&mut listener, mio_token, MIO_RW)?;
let io_source = self
.mio_manager()
.listeners
.len()
.apply(MioManagerIoSource::Listener)
.apply(AppServerIoSource::MioManager);
self.mio_manager_mut().listeners.push(listener);
self.app_server_mut()
.register_io_source(mio_token, io_source);
pub fn add_listener(
&mut self,
mut listener: UnixListener,
registry: &mio::Registry,
token_dispenser: &mut MioTokenDispenser,
) -> io::Result<()> {
registry.register(&mut listener, token_dispenser.dispense(), MIO_RW)?;
self.listeners.push(listener);
Ok(())
}
pub fn add_connection(
&mut self,
connection: UnixStream,
registry: &mio::Registry,
token_dispenser: &mut MioTokenDispenser,
) -> io::Result<()> {
let connection = MioConnection::new(connection, registry, token_dispenser)?;
self.connections.push(connection);
fn add_connection(&mut self, connection: UnixStream) -> io::Result<()> {
let connection = MioConnection::new(self.app_server_mut(), connection)?;
let mio_token = connection.mio_token();
let conns: &mut Vec<Option<MioConnection>> =
self.mio_manager_mut().connections.borrow_mut();
// Reuse a free (None) slot if one exists so that existing connection indices stay stable
let idx = conns
.iter_mut()
.enumerate()
.find(|(_, slot)| slot.is_none())
.map(|(idx, _)| idx)
.unwrap_or(conns.len());
if idx == conns.len() {
conns.push(Some(connection));
} else {
conns[idx] = Some(connection);
}
let io_source = idx
.apply(MioManagerIoSource::Connection)
.apply(AppServerIoSource::MioManager);
self.app_server_mut()
.register_io_source(mio_token, io_source);
Ok(())
}
pub fn poll(
&mut self,
crypto: &mut Option<CryptoServer>,
registry: &mio::Registry,
token_dispenser: &mut MioTokenDispenser,
) -> anyhow::Result<()> {
self.accept_connections(registry, token_dispenser)?;
self.poll_connections(crypto)?;
fn poll_particular(&mut self, io_source: MioManagerIoSource) -> anyhow::Result<()> {
use MioManagerIoSource as S;
match io_source {
S::Listener(idx) => self.accept_from(idx)?,
S::Connection(idx) => self.poll_particular_connection(idx)?,
};
Ok(())
}
fn accept_connections(
&mut self,
registry: &mio::Registry,
token_dispenser: &mut MioTokenDispenser,
) -> io::Result<()> {
for idx in 0..self.listeners.len() {
self.accept_from(idx, registry, token_dispenser)?;
fn poll(&mut self) -> anyhow::Result<()> {
self.accept_connections()?;
self.poll_connections()?;
Ok(())
}
fn accept_connections(&mut self) -> io::Result<()> {
for idx in 0..self.mio_manager_mut().listeners.len() {
self.accept_from(idx)?;
}
Ok(())
}
fn accept_from(
&mut self,
idx: usize,
registry: &mio::Registry,
token_dispenser: &mut MioTokenDispenser,
) -> io::Result<()> {
fn accept_from(&mut self, idx: usize) -> io::Result<()> {
// Accept connection until the socket would block or returns another error
// TODO: This currently only adds connections--we eventually need the ability to remove
// them as well, see the note in connection.rs
loop {
match nonblocking_handle_io_errors(|| self.listeners[idx].accept())? {
match nonblocking_handle_io_errors(|| self.mio_manager().listeners[idx].accept())? {
None => break,
Some((conn, _addr)) => {
self.add_connection(conn, registry, token_dispenser)?;
self.add_connection(conn)?;
}
};
}
@@ -84,10 +122,52 @@ impl MioManager {
Ok(())
}
fn poll_connections(&mut self, crypto: &mut Option<CryptoServer>) -> anyhow::Result<()> {
for conn in self.connections.iter_mut() {
conn.poll(crypto)?
fn poll_connections(&mut self) -> anyhow::Result<()> {
for idx in 0..self.mio_manager().connections.len() {
self.poll_particular_connection(idx)?;
}
Ok(())
}
fn poll_particular_connection(&mut self, idx: usize) -> anyhow::Result<()> {
if self.mio_manager().connections[idx].is_none() {
return Ok(());
}
let mut conn = MioConnectionFocus::new(self, idx);
conn.poll()?;
if conn.should_close() {
let conn = self.mio_manager_mut().connections[idx].take().unwrap();
let mio_token = conn.mio_token();
if let Err(e) = conn.close(self.app_server_mut()) {
log::warn!("Error while closing API connection {e:?}");
};
self.app_server_mut().unregister_io_source(mio_token);
}
Ok(())
}
}
impl<T: ?Sized + MioManagerContext> MioConnectionContext for MioConnectionFocus<'_, T> {
fn mio_connection(&self) -> &MioConnection {
self.ctx.mio_manager().connections[self.conn_idx]
.as_ref()
.unwrap()
}
fn app_server(&self) -> &AppServer {
self.ctx.app_server()
}
fn mio_connection_mut(&mut self) -> &mut MioConnection {
self.ctx.mio_manager_mut().connections[self.conn_idx]
.as_mut()
.unwrap()
}
fn app_server_mut(&mut self) -> &mut AppServer {
self.ctx.app_server_mut()
}
}


@@ -1,8 +1,8 @@
mod api_handler;
mod boilerplate;
mod crypto_server_api_handler;
pub use api_handler::*;
pub use boilerplate::*;
pub use crypto_server_api_handler::*;
pub mod cli;
pub mod config;


@@ -8,7 +8,14 @@ use mio::Interest;
use mio::Token;
use rosenpass_secret_memory::Public;
use rosenpass_secret_memory::Secret;
use rosenpass_util::build::ConstructionSite;
use rosenpass_util::file::StoreValueB64;
use rosenpass_util::functional::run;
use rosenpass_util::functional::ApplyExt;
use rosenpass_util::io::IoResultKindHintExt;
use rosenpass_util::io::SubstituteForIoErrorKindExt;
use rosenpass_util::option::SomeExt;
use rosenpass_util::result::OkExt;
use rosenpass_wireguard_broker::WireguardBrokerMio;
use rosenpass_wireguard_broker::{WireguardBrokerCfg, WG_KEY_LEN};
use zerocopy::AsBytes;
@@ -16,7 +23,9 @@ use zerocopy::AsBytes;
use std::cell::Cell;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::fmt::Debug;
use std::io;
use std::io::stdout;
use std::io::ErrorKind;
use std::io::Write;
@@ -31,6 +40,7 @@ use std::slice;
use std::time::Duration;
use std::time::Instant;
use crate::protocol::BuildCryptoServer;
use crate::protocol::HostIdentification;
use crate::{
config::Verbosity,
@@ -75,7 +85,7 @@ impl MioTokenDispenser {
#[derive(Debug, Default)]
pub struct BrokerStore {
store: HashMap<
pub store: HashMap<
Public<BROKER_ID_BYTES>,
Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
>,
@@ -141,15 +151,28 @@ pub struct AppServerTest {
pub termination_handler: Option<std::sync::mpsc::Receiver<()>>,
}
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub enum AppServerIoSource {
Socket(usize),
PskBroker(Public<BROKER_ID_BYTES>),
#[cfg(feature = "experiment_api")]
MioManager(crate::api::mio::MioManagerIoSource),
}
const EVENT_CAPACITY: usize = 20;
/// Holds the state of the application, namely the external IO
///
/// Responsible for file IO, network IO
// TODO add user control via unix domain socket and stdin/stdout
#[derive(Debug)]
pub struct AppServer {
pub crypt: Option<CryptoServer>,
pub crypto_site: ConstructionSite<BuildCryptoServer, CryptoServer>,
pub sockets: Vec<mio::net::UdpSocket>,
pub events: mio::Events,
pub short_poll_queue: VecDeque<mio::event::Event>,
pub performed_long_poll: bool,
pub io_source_index: HashMap<mio::Token, AppServerIoSource>,
pub mio_poll: mio::Poll,
pub mio_token_dispenser: MioTokenDispenser,
pub brokers: BrokerStore,
@@ -512,15 +535,14 @@ impl HostPathDiscoveryEndpoint {
impl AppServer {
pub fn new(
sk: SSk,
pk: SPk,
keypair: Option<(SSk, SPk)>,
addrs: Vec<SocketAddr>,
verbosity: Verbosity,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<Self> {
// setup mio
let mio_poll = mio::Poll::new()?;
let events = mio::Events::with_capacity(20);
let events = mio::Events::with_capacity(EVENT_CAPACITY);
let mut mio_token_dispenser = MioTokenDispenser::default();
// bind each SocketAddr to a socket
@@ -595,22 +617,30 @@ impl AppServer {
}
// register all sockets to mio
for socket in sockets.iter_mut() {
mio_poll.registry().register(
socket,
mio_token_dispenser.dispense(),
Interest::READABLE,
)?;
let mut io_source_index = HashMap::new();
for (idx, socket) in sockets.iter_mut().enumerate() {
let mio_token = mio_token_dispenser.dispense();
mio_poll
.registry()
.register(socket, mio_token, Interest::READABLE)?;
let prev = io_source_index.insert(mio_token, AppServerIoSource::Socket(idx));
assert!(prev.is_none());
}
// TODO use mio::net::UnixStream together with std::os::unix::net::UnixStream for Linux
let crypto_site = match keypair {
Some((sk, pk)) => ConstructionSite::from_product(CryptoServer::new(sk, pk)),
None => ConstructionSite::new(BuildCryptoServer::empty()),
};
Ok(Self {
crypt: Some(CryptoServer::new(sk, pk)),
crypto_site,
peers: Vec::new(),
verbosity,
sockets,
events,
short_poll_queue: Default::default(),
performed_long_poll: false,
io_source_index,
mio_poll,
mio_token_dispenser,
brokers: BrokerStore::default(),
@@ -627,14 +657,14 @@ impl AppServer {
}
pub fn crypto_server(&self) -> anyhow::Result<&CryptoServer> {
self.crypt
.as_ref()
self.crypto_site
.product_ref()
.context("Cryptography handler not initialized")
}
pub fn crypto_server_mut(&mut self) -> anyhow::Result<&mut CryptoServer> {
self.crypt
.as_mut()
self.crypto_site
.product_mut()
.context("Cryptography handler not initialized")
}
@@ -642,41 +672,57 @@ impl AppServer {
matches!(self.verbosity, Verbosity::Verbose)
}
pub fn register_listen_socket(&mut self, mut sock: mio::net::UdpSocket) -> anyhow::Result<()> {
let mio_token = self.mio_token_dispenser.dispense();
self.mio_poll
.registry()
.register(&mut sock, mio_token, mio::Interest::READABLE)?;
let io_source = self.sockets.len().apply(AppServerIoSource::Socket);
self.sockets.push(sock);
self.register_io_source(mio_token, io_source);
Ok(())
}
pub fn register_io_source(&mut self, token: mio::Token, io_source: AppServerIoSource) {
let prev = self.io_source_index.insert(token, io_source);
assert!(prev.is_none());
}
pub fn unregister_io_source(&mut self, token: mio::Token) {
let value = self.io_source_index.remove(&token);
assert!(value.is_some(), "Removed IO source that does not exist");
}
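A hedged sketch of the intended bookkeeping pattern (the `server` binding and the socket index are hypothetical):

// Every mio token handed out is paired with exactly one IO source entry,
// and the entry is removed again once the source goes away.
let token = server.mio_token_dispenser.dispense();
server.register_io_source(token, AppServerIoSource::Socket(0));
// ... the poll loop later resolves `token` back to this socket via io_source_index ...
server.unregister_io_source(token);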
pub fn register_broker(
&mut self,
broker: Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
) -> Result<BrokerStorePtr> {
let ptr = Public::from_slice((self.brokers.store.len() as u64).as_bytes());
if self.brokers.store.insert(ptr, broker).is_some() {
bail!("Broker already registered");
}
let mio_token = self.mio_token_dispenser.dispense();
let io_source = ptr.apply(AppServerIoSource::PskBroker);
//Register broker
self.brokers
.store
.get_mut(&ptr)
.ok_or(anyhow::format_err!("Broker wasn't added to registry"))?
.register(
self.mio_poll.registry(),
self.mio_token_dispenser.dispense(),
)?;
.register(self.mio_poll.registry(), mio_token)?;
self.register_io_source(mio_token, io_source);
Ok(BrokerStorePtr(ptr))
}
pub fn unregister_broker(&mut self, ptr: BrokerStorePtr) -> Result<()> {
//Unregister broker
self.brokers
.store
.get_mut(&ptr.0)
.ok_or_else(|| anyhow::anyhow!("Broker not found"))?
.unregister(self.mio_poll.registry())?;
//Remove broker from store
self.brokers
let mut broker = self
.brokers
.store
.remove(&ptr.0)
.ok_or_else(|| anyhow::anyhow!("Broker not found"))?;
.context("Broker not found")?;
self.unregister_io_source(broker.mio_token().unwrap());
broker.unregister(self.mio_poll.registry())?;
Ok(())
}
@@ -688,8 +734,13 @@ impl AppServer {
broker_peer: Option<BrokerPeer>,
hostname: Option<String>,
) -> anyhow::Result<AppPeerPtr> {
let PeerPtr(pn) = self.crypto_server_mut()?.add_peer(psk, pk)?;
let PeerPtr(pn) = match &mut self.crypto_site {
ConstructionSite::Void => bail!("Crypto server construction site is void"),
ConstructionSite::Builder(builder) => builder.add_peer(psk, pk),
ConstructionSite::Product(srv) => srv.add_peer(psk, pk)?,
};
assert!(pn == self.peers.len());
let initial_endpoint = hostname
.map(Endpoint::discovery_from_hostname)
.transpose()?;
@@ -731,7 +782,7 @@ impl AppServer {
);
if tries_left > 0 {
error!("re-initializing networking in {sleep}! {tries_left} tries left.");
std::thread::sleep(self.crypto_server_mut()?.timebase.dur(sleep));
std::thread::sleep(Duration::from_secs_f64(sleep));
continue;
}
@@ -774,16 +825,31 @@ impl AppServer {
}
}
match self.poll(&mut *rx)? {
#[allow(clippy::redundant_closure_call)]
SendInitiation(peer) => tx_maybe_with!(peer, || self
enum CryptoSrv {
Avail,
Missing,
}
let poll_result = self.poll(&mut *rx)?;
let have_crypto = match self.crypto_site.is_available() {
true => CryptoSrv::Avail,
false => CryptoSrv::Missing,
};
#[allow(clippy::redundant_closure_call)]
match (have_crypto, poll_result) {
(CryptoSrv::Missing, SendInitiation(_)) => {}
(CryptoSrv::Avail, SendInitiation(peer)) => tx_maybe_with!(peer, || self
.crypto_server_mut()?
.initiate_handshake(peer.lower(), &mut *tx))?,
#[allow(clippy::redundant_closure_call)]
SendRetransmission(peer) => tx_maybe_with!(peer, || self
(CryptoSrv::Missing, SendRetransmission(_)) => {}
(CryptoSrv::Avail, SendRetransmission(peer)) => tx_maybe_with!(peer, || self
.crypto_server_mut()?
.retransmit_handshake(peer.lower(), &mut *tx))?,
DeleteKey(peer) => {
(CryptoSrv::Missing, DeleteKey(_)) => {}
(CryptoSrv::Avail, DeleteKey(peer)) => {
self.output_key(peer, Stale, &SymKey::random())?;
// There was a loss of connection apparently; restart host discovery
@@ -797,7 +863,8 @@ impl AppServer {
);
}
ReceivedMessage(len, endpoint) => {
(CryptoSrv::Missing, ReceivedMessage(_, _)) => {}
(CryptoSrv::Avail, ReceivedMessage(len, endpoint)) => {
let msg_result = match self.under_load {
DoSOperation::UnderLoad => {
self.handle_msg_under_load(&endpoint, &rx[..len], &mut *tx)
@@ -910,17 +977,32 @@ impl AppServer {
pub fn poll(&mut self, rx_buf: &mut [u8]) -> anyhow::Result<AppPollResult> {
use crate::protocol::PollResult as C;
use AppPollResult as A;
loop {
return Ok(match self.crypto_server_mut()?.poll()? {
C::DeleteKey(PeerPtr(no)) => A::DeleteKey(AppPeerPtr(no)),
C::SendInitiation(PeerPtr(no)) => A::SendInitiation(AppPeerPtr(no)),
C::SendRetransmission(PeerPtr(no)) => A::SendRetransmission(AppPeerPtr(no)),
C::Sleep(timeout) => match self.try_recv(rx_buf, timeout)? {
Some((len, addr)) => A::ReceivedMessage(len, addr),
None => continue,
},
});
}
let res = loop {
// Call CryptoServer's poll (if available)
let crypto_poll = self
.crypto_site
.product_mut()
.map(|crypto| crypto.poll())
.transpose()?;
// Map crypto server's poll result to our poll result
let io_poll_timeout = match crypto_poll {
Some(C::DeleteKey(PeerPtr(no))) => break A::DeleteKey(AppPeerPtr(no)),
Some(C::SendInitiation(PeerPtr(no))) => break A::SendInitiation(AppPeerPtr(no)),
Some(C::SendRetransmission(PeerPtr(no))) => {
break A::SendRetransmission(AppPeerPtr(no))
}
Some(C::Sleep(timeout)) => timeout, // No event from crypto-server, do IO
None => crate::protocol::UNENDING, // Crypto server is uninitialized, do IO
};
// Perform IO (look for a message)
if let Some((len, addr)) = self.try_recv(rx_buf, io_poll_timeout)? {
break A::ReceivedMessage(len, addr);
}
};
Ok(res)
}
/// Tries to receive a new message
@@ -958,22 +1040,33 @@ impl AppServer {
// readiness event seems to be good enough™ for now.
// only poll if we drained all sockets before
if self.all_sockets_drained {
//Non blocked polling
self.mio_poll
.poll(&mut self.events, Some(Duration::from_secs(0)))?;
if self.events.iter().peekable().peek().is_none() {
// if there are no events, then add to blocking poll count
self.blocking_polls_count += 1;
//Execute blocking poll
self.mio_poll.poll(&mut self.events, Some(timeout))?;
} else {
self.non_blocking_polls_count += 1;
run(|| -> anyhow::Result<()> {
if !self.all_sockets_drained || !self.short_poll_queue.is_empty() {
self.unpolled_count += 1;
return Ok(());
}
} else {
self.unpolled_count += 1;
}
self.perform_mio_poll_and_register_events(Duration::from_secs(0))?; // Non-blocking poll
if !self.short_poll_queue.is_empty() {
// Got some events in non-blocking mode
self.non_blocking_polls_count += 1;
return Ok(());
}
if !self.performed_long_poll {
// return early: perform a full long poll before we enter blocking poll mode
// to make sure our experimental short poll feature did not miss any events
// due to being buggy.
return Ok(());
}
// Perform and register blocking poll
self.blocking_polls_count += 1;
self.perform_mio_poll_and_register_events(timeout)?;
self.performed_long_poll = false;
Ok(())
})?;
if let Some(AppServerTest {
enable_dos_permanently: true,
@@ -1008,26 +1101,58 @@ impl AppServer {
}
}
// Focused polling, i.e. actually using mio::Token, is experimental for now.
// The reason for this is that we need to figure out how to integrate load detection
// and focused polling for one. Mio event-based polling also does not play nice with
// the current function signature and its reentrant design, which is focused around receiving
// UDP packets for processing by the crypto protocol server.
// Besides that, there are also some parts of the code which intentionally block
// despite available data. This is the correct behavior; e.g. api::mio::Connection blocks
// further reads from its unix socket until the write buffer is flushed. In other words,
// the connection handler makes sure that there is a buffer to put the response in
// before reading further requests.
// The potential problem with this behavior is that we end up ignoring instructions from
// epoll() to read from particular sockets, so epoll will return information about the
// blocked file descriptor on every call. We have only so many event slots and,
// in theory, the event array could fill up entirely with intentionally blocked sockets.
// We need to figure out how to deal with this situation.
// Mio uses epoll in level-triggered mode, so we could handle taint-tracking for ignored
// sockets ourselves. The facilities are available in epoll and Mio, but we need to figure out
// how mio uses those facilities and how we can integrate them here.
// This will involve rewriting a lot of IO code and we should probably have integration
// tests before we approach that.
//
// This hybrid approach is not without merit though; the short poll implementation covers
// all our IO sources, so under contention, rosenpass should generally not hit the long
// poll mode below. We keep short polling and calling epoll() in non-blocking mode (timeout
// of zero) until we run out of IO events to process. Then, just before we would perform a
// blocking poll, we go through all available IO sources to see if we missed anything.
{
while let Some(ev) = self.short_poll_queue.pop_front() {
if let Some(v) = self.try_recv_from_mio_token(buf, ev.token())? {
return Ok(Some(v));
}
}
}
// drain all sockets
let mut would_block_count = 0;
for (sock_no, socket) in self.sockets.iter_mut().enumerate() {
match socket.recv_from(buf) {
Ok((n, addr)) => {
for sock_no in 0..self.sockets.len() {
match self
.try_recv_from_listen_socket(buf, sock_no)
.io_err_kind_hint()
{
Ok(None) => continue,
Ok(Some(v)) => {
// at least one socket was not drained...
self.all_sockets_drained = false;
return Ok(Some((
n,
Endpoint::SocketBoundAddress(SocketBoundEndpoint::new(
SocketPtr(sock_no),
addr,
)),
)));
return Ok(Some(v));
}
Err(e) if e.kind() == ErrorKind::WouldBlock => {
Err((_, ErrorKind::WouldBlock)) => {
would_block_count += 1;
}
// TODO if one socket continuously returns an error, then we never poll, thus we never wait for a timeout, thus we have a spin-lock
Err(e) => return Err(e.into()),
Err((e, _)) => return Err(e)?,
}
}
@@ -1042,30 +1167,124 @@ impl AppServer {
// API poll
#[cfg(feature = "experiment_api")]
self.api_manager.poll(
&mut self.crypt,
self.mio_poll.registry(),
&mut self.mio_token_dispenser,
)?;
{
use crate::api::mio::MioManagerContext;
MioManagerFocus(self).poll()?;
}
self.performed_long_poll = true;
Ok(None)
}
fn perform_mio_poll_and_register_events(&mut self, timeout: Duration) -> io::Result<()> {
self.mio_poll.poll(&mut self.events, Some(timeout))?;
// Fill the short poll buffer with the acquired events
self.events
.iter()
.cloned()
.for_each(|v| self.short_poll_queue.push_back(v));
Ok(())
}
fn try_recv_from_mio_token(
&mut self,
buf: &mut [u8],
token: mio::Token,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
let io_source = match self.io_source_index.get(&token) {
Some(io_source) => *io_source,
None => {
log::warn!("No IO source assiociated with mio token ({token:?}). Polling using mio tokens directly is an experimental feature and IO handler should recover when all available io sources are polled. This is a developer error. Please report it.");
return Ok(None);
}
};
self.try_recv_from_io_source(buf, io_source)
}
fn try_recv_from_io_source(
&mut self,
buf: &mut [u8],
io_source: AppServerIoSource,
) -> anyhow::Result<Option<(usize, Endpoint)>> {
match io_source {
AppServerIoSource::Socket(idx) => self
.try_recv_from_listen_socket(buf, idx)
.substitute_for_ioerr_wouldblock(None)?
.ok(),
AppServerIoSource::PskBroker(key) => self
.brokers
.store
.get_mut(&key)
.with_context(|| format!("No PSK broker under key {key:?}"))?
.process_poll()
.map(|_| None),
#[cfg(feature = "experiment_api")]
AppServerIoSource::MioManager(mmio_src) => {
use crate::api::mio::MioManagerContext;
MioManagerFocus(self)
.poll_particular(mmio_src)
.map(|_| None)
}
}
}
fn try_recv_from_listen_socket(
&mut self,
buf: &mut [u8],
idx: usize,
) -> io::Result<Option<(usize, Endpoint)>> {
use std::io::ErrorKind as K;
let (n, addr) = loop {
match self.sockets[idx].recv_from(buf).io_err_kind_hint() {
Ok(v) => break v,
Err((_, K::Interrupted)) => continue,
Err((e, _)) => return Err(e)?,
}
};
SocketPtr(idx)
.apply(|sp| SocketBoundEndpoint::new(sp, addr))
.apply(Endpoint::SocketBoundAddress)
.apply(|ep| (n, ep))
.some()
.ok()
}
#[cfg(feature = "experiment_api")]
pub fn add_api_connection(&mut self, connection: mio::net::UnixStream) -> std::io::Result<()> {
self.api_manager.add_connection(
connection,
self.mio_poll.registry(),
&mut self.mio_token_dispenser,
)
use crate::api::mio::MioManagerContext;
MioManagerFocus(self).add_connection(connection)
}
#[cfg(feature = "experiment_api")]
pub fn add_api_listener(&mut self, listener: mio::net::UnixListener) -> std::io::Result<()> {
self.api_manager.add_listener(
listener,
self.mio_poll.registry(),
&mut self.mio_token_dispenser,
)
use crate::api::mio::MioManagerContext;
MioManagerFocus(self).add_listener(listener)
}
}
#[cfg(feature = "experiment_api")]
struct MioManagerFocus<'a>(&'a mut AppServer);
#[cfg(feature = "experiment_api")]
impl crate::api::mio::MioManagerContext for MioManagerFocus<'_> {
fn mio_manager(&self) -> &crate::api::mio::MioManager {
&self.0.api_manager
}
fn mio_manager_mut(&mut self) -> &mut crate::api::mio::MioManager {
&mut self.0.api_manager
}
fn app_server(&self) -> &AppServer {
self.0
}
fn app_server_mut(&mut self) -> &mut AppServer {
self.0
}
}
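
The polling comment near the top of this diff describes the hybrid strategy: first serve readiness events already reported by the level-triggered epoll/mio poll, then sweep every IO source once before doing a blocking wait, so a deliberately ignored socket is never lost. A minimal standalone sketch of that pattern, using plain non-blocking UDP sockets and illustrative names (`ShortPoller`) rather than the real `AppServer` types:

```rust
use std::collections::VecDeque;
use std::io::ErrorKind;
use std::net::UdpSocket;
use std::time::Duration;

/// Illustrative stand-in for the short-poll queue plus drain-all fallback.
struct ShortPoller {
    sockets: Vec<UdpSocket>,
    /// Socket indices that an earlier (level-triggered) poll reported as readable.
    short_poll_queue: VecDeque<usize>,
}

impl ShortPoller {
    /// Try to receive one datagram without losing events for ignored sockets.
    fn try_recv(&mut self, buf: &mut [u8]) -> std::io::Result<Option<(usize, usize)>> {
        // 1. Short poll: consume readiness events we already know about.
        while let Some(idx) = self.short_poll_queue.pop_front() {
            match self.sockets[idx].recv_from(buf) {
                Ok((n, _addr)) => return Ok(Some((idx, n))),
                Err(e) if e.kind() == ErrorKind::WouldBlock => continue,
                Err(e) => return Err(e),
            }
        }
        // 2. Before blocking, sweep every socket once so a previously ignored
        //    (deliberately unread) socket cannot be forgotten.
        for (idx, sock) in self.sockets.iter().enumerate() {
            match sock.recv_from(buf) {
                Ok((n, _addr)) => return Ok(Some((idx, n))),
                Err(e) if e.kind() == ErrorKind::WouldBlock => continue,
                Err(e) => return Err(e),
            }
        }
        // 3. Only now would the real code perform a blocking poll with a timeout
        //    (mio/epoll); a short sleep stands in for that here.
        std::thread::sleep(Duration::from_millis(10));
        Ok(None)
    }
}

fn main() -> std::io::Result<()> {
    let sock = UdpSocket::bind("127.0.0.1:0")?;
    sock.set_nonblocking(true)?;
    let mut poller = ShortPoller {
        sockets: vec![sock],
        short_poll_queue: VecDeque::new(),
    };
    let mut buf = [0u8; 1500];
    // Nothing has been sent, so this returns Ok(None) after one sweep.
    assert!(poller.try_recv(&mut buf)?.is_none());
    Ok(())
}
```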

View File

@@ -76,6 +76,12 @@ fn main() -> Result<()> {
vec![
Tree::Leaf("Ping Request".to_owned()),
Tree::Leaf("Ping Response".to_owned()),
Tree::Leaf("Supply Keypair Request".to_owned()),
Tree::Leaf("Supply Keypair Response".to_owned()),
Tree::Leaf("Add Listen Socket Request".to_owned()),
Tree::Leaf("Add Listen Socket Response".to_owned()),
Tree::Leaf("Add Psk Broker Request".to_owned()),
Tree::Leaf("Add Psk Broker Response".to_owned()),
],
)],
);

View File

@@ -1,4 +1,4 @@
use anyhow::{bail, ensure};
use anyhow::{bail, ensure, Context};
use clap::{Parser, Subcommand};
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
@@ -16,19 +16,42 @@ use crate::protocol::{SPk, SSk, SymKey};
use super::config;
#[cfg(feature = "experiment_api")]
use {
command_fds::{CommandFdExt, FdMapping},
log::{error, info},
mio::net::UnixStream,
rosenpass_util::fd::claim_fd,
rosenpass_wireguard_broker::brokers::mio_client::MioBrokerClient,
rosenpass_wireguard_broker::WireguardBrokerMio,
rustix::net::{socketpair, AddressFamily, SocketFlags, SocketType},
std::os::fd::AsRawFd,
std::os::unix::net,
std::process::Command,
std::thread,
};
/// enum representing a choice of interface to a WireGuard broker
#[derive(Debug)]
pub enum BrokerInterface {
Socket(PathBuf),
FileDescriptor(i32),
SocketPair,
}
/// struct holding all CLI arguments for `clap` crate to parse
#[derive(Parser, Debug)]
#[command(author, version, about, long_about)]
#[command(author, version, about, long_about, arg_required_else_help = true)]
pub struct CliArgs {
/// lowest log level to show log messages at higher levels will be omitted
/// Lowest log level to show
#[arg(long = "log-level", value_name = "LOG_LEVEL", group = "log-level")]
log_level: Option<log::LevelFilter>,
/// show verbose log output sets log level to "debug"
/// Show verbose log output; sets the log level to "debug"
#[arg(short, long, group = "log-level")]
verbose: bool,
/// show no log output sets log level to "error"
/// Show no log output; sets the log level to "warn"
#[arg(short, long, group = "log-level")]
quiet: bool,
@@ -36,8 +59,42 @@ pub struct CliArgs {
#[cfg(feature = "experiment_api")]
api: crate::api::cli::ApiCli,
/// Path of the `wireguard_psk` broker socket to connect to
#[cfg(feature = "experiment_api")]
#[arg(long, group = "psk-broker-specs")]
psk_broker_path: Option<PathBuf>,
/// File descriptor of the `wireguard_psk` broker socket to connect to
///
/// When this command is called from another process, the other process can
/// open and bind the Unix socket for the PSK broker connection itself and
/// pass it to this process; in Rust this can be achieved using the
/// [command-fds](https://docs.rs/command-fds/latest/command_fds/) crate
/// (see the sketch after this struct)
#[cfg(feature = "experiment_api")]
#[arg(long, group = "psk-broker-specs")]
psk_broker_fd: Option<i32>,
/// Spawn a PSK broker locally using a socket pair
#[cfg(feature = "experiment_api")]
#[arg(short, long, group = "psk-broker-specs")]
psk_broker_spawn: bool,
#[command(subcommand)]
pub command: CliCommand,
pub command: Option<CliCommand>,
/// Generate man pages for the CLI
///
/// This option is used to generate man pages for Rosenpass in the specified
/// directory and exit.
#[clap(long, value_name = "out_dir")]
pub generate_manpage: Option<PathBuf>,
/// Generate completion file for a shell
///
/// This option is used to generate completion files for the specified shell
#[clap(long, value_name = "shell")]
pub print_completions: Option<clap_complete::Shell>,
}
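
A sketch of the fd hand-off that the `psk_broker_fd` doc comment describes: a hypothetical parent process opens a socketpair, maps its end to fd 3 inside the child, and passes `--psk-broker-fd 3`. The fd number, config path, and error handling are illustrative only; the same `FdMapping` pattern appears in the diff's own broker spawning code and integration tests further down.

```rust
use std::os::fd::AsRawFd;
use std::os::unix::net::UnixStream;
use std::process::Command;

use command_fds::{CommandFdExt, FdMapping};

fn main() -> std::io::Result<()> {
    // One end stays with us, the other end becomes fd 3 inside rosenpass.
    let (ours, theirs) = UnixStream::pair()?;

    let mut child = Command::new("rosenpass")
        .args(["--psk-broker-fd", "3"])
        .args(["exchange-config", "/etc/rosenpass/example.toml"]) // hypothetical path
        .fd_mappings(vec![FdMapping {
            parent_fd: theirs.as_raw_fd(),
            child_fd: 3,
        }])
        .expect("duplicate fd mapping")
        .spawn()?;

    // `ours` now carries the PSK broker protocol; a real parent would serve
    // SetPsk requests on it instead of just dropping it.
    drop(ours);
    child.wait()?;
    Ok(())
}
```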
impl CliArgs {
@@ -58,32 +115,54 @@ impl CliArgs {
return Some(log::LevelFilter::Info);
}
if self.quiet {
return Some(log::LevelFilter::Error);
return Some(log::LevelFilter::Warn);
}
if let Some(level_filter) = self.log_level {
return Some(level_filter);
}
None
}
#[cfg(feature = "experiment_api")]
/// Returns the broker interface set by CLI args
/// Returns `None` if no broker interface was specified
pub fn get_broker_interface(&self) -> Option<BrokerInterface> {
if let Some(path_ref) = self.psk_broker_path.as_ref() {
Some(BrokerInterface::Socket(path_ref.to_path_buf()))
} else if let Some(fd) = self.psk_broker_fd {
Some(BrokerInterface::FileDescriptor(fd))
} else if self.psk_broker_spawn {
Some(BrokerInterface::SocketPair)
} else {
None
}
}
#[cfg(not(feature = "experiment_api"))]
/// Returns the broker interface set by CLI args
/// Always returns `None`, since the `experiment_api` feature isn't enabled
pub fn get_broker_interface(&self) -> Option<BrokerInterface> {
None
}
}
/// represents a command specified via CLI
#[derive(Subcommand, Debug)]
pub enum CliCommand {
/// Start Rosenpass in server mode and carry on with the key exchange
/// Start Rosenpass key exchanges based on a configuration file
///
/// This will parse the configuration file and perform the key exchange
/// with the specified peers. If a peer's endpoint is specified, this
/// Rosenpass instance will try to initiate a key exchange with the peer,
/// otherwise only initiation attempts from the peer will be responded to.
/// This will parse the configuration file and perform key exchanges with
/// the specified peers. If a peer's endpoint is specified, this Rosenpass
/// instance will try to initiate a key exchange with the peer; otherwise,
/// only initiation attempts from other peers will be responded to.
ExchangeConfig { config_file: PathBuf },
/// Start in daemon mode, performing key exchanges
/// Start Rosenpass key exchanges based on command line arguments
///
/// The configuration is read from the command line. The `peer` token
/// always separates multiple peers, e. g. if the token `peer` appears
/// in the WIREGUARD_EXTRA_ARGS it is not put into the WireGuard arguments
/// but instead a new peer is created.
/// The configuration is read from the command line. The `peer` token always
/// separates multiple peers, e.g., if the token `peer` appears in the
/// WIREGUARD_EXTRA_ARGS, it is not put into the WireGuard arguments but
/// instead a new peer is created.
/* Explanation: `first_arg` and `rest_of_args` are combined into one
* `Vec<String>`. They are only used to trick clap into displaying some
* guidance on the CLI usage.
@@ -112,7 +191,10 @@ pub enum CliCommand {
config_file: Option<PathBuf>,
},
/// Generate a demo config file
/// Generate a demo config file for Rosenpass
///
/// The generated config file will contain a single peer and all common
/// options.
GenConfig {
config_file: PathBuf,
@@ -121,19 +203,19 @@ pub enum CliCommand {
force: bool,
},
/// Generate the keys mentioned in a configFile
/// Generate secret & public key for Rosenpass
///
/// Generates secret- & public-key to their destination. If a config file
/// is provided then the key file destination is taken from there.
/// Otherwise the
/// Generates secret & public key to their destination. If a config file is
/// provided then the key file destination is taken from there, otherwise
/// the destination is taken from the CLI arguments.
GenKeys {
config_file: Option<PathBuf>,
/// where to write public-key to
/// Where to write public key to
#[clap(short, long)]
public_key: Option<PathBuf>,
/// where to write secret-key to
/// Where to write secret key to
#[clap(short, long)]
secret_key: Option<PathBuf>,
@@ -142,51 +224,48 @@ pub enum CliCommand {
force: bool,
},
/// Deprecated - use gen-keys instead
/// Validate a configuration file
///
/// This command will validate the configuration file and print any errors
/// it finds. If the configuration file is valid, it will print a success message.
/// Defined secret & public keys are checked for existence and validity.
Validate { config_files: Vec<PathBuf> },
/// DEPRECATED - use the gen-keys command instead
#[allow(rustdoc::broken_intra_doc_links)]
#[allow(rustdoc::invalid_html_tags)]
#[command(hide = true)]
Keygen {
// NOTE yes, the legacy keygen argument initially really accepted "privet-key", not "secret-key"!
// NOTE yes, the legacy keygen argument initially really accepted
// "private-key", not "secret-key"!
/// public-key <PATH> private-key <PATH>
args: Vec<String>,
},
/// Validate a configuration
Validate { config_files: Vec<PathBuf> },
/// Show the rosenpass manpage
// TODO make this the default, but only after the manpage has been adjusted once the CLI stabilizes
Man,
}
impl CliArgs {
/// runs the command specified via CLI
/// Runs the command specified via CLI
///
/// ## TODO
/// - This method consumes the [`CliCommand`] value. It might be wise to use a reference...
pub fn run(self, test_helpers: Option<AppServerTest>) -> anyhow::Result<()> {
pub fn run(
self,
broker_interface: Option<BrokerInterface>,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<()> {
use CliCommand::*;
match &self.command {
Man => {
let man_cmd = std::process::Command::new("man")
.args(["1", "rosenpass"])
.status();
if !(man_cmd.is_ok() && man_cmd.unwrap().success()) {
println!(include_str!(env!("ROSENPASS_MAN")));
}
}
GenConfig { config_file, force } => {
Some(GenConfig { config_file, force }) => {
ensure!(
*force || !config_file.exists(),
"config file {config_file:?} already exists"
);
config::Rosenpass::example_config().store(config_file)?;
std::fs::write(config_file, config::EXAMPLE_CONFIG)?;
}
// Deprecated - use gen-keys instead
Keygen { args } => {
Some(Keygen { args }) => {
log::warn!("The 'keygen' command is deprecated. Please use the 'gen-keys' command instead.");
let mut public_key: Option<PathBuf> = None;
@@ -219,12 +298,12 @@ impl CliArgs {
generate_and_save_keypair(secret_key.unwrap(), public_key.unwrap())?;
}
GenKeys {
Some(GenKeys {
config_file,
public_key,
secret_key,
force,
} => {
}) => {
// figure out where the key file is specified, in the config file or directly as flag?
let (pkf, skf) = match (config_file, public_key, secret_key) {
(Some(config_file), _, _) => {
@@ -234,8 +313,11 @@ impl CliArgs {
);
let config = config::Rosenpass::load(config_file)?;
let keypair = config
.keypair
.context("Config file present, but no keypair is specified.")?;
(config.public_key, config.secret_key)
(keypair.public_key, keypair.secret_key)
}
(_, Some(pkf), Some(skf)) => (pkf.clone(), skf.clone()),
_ => {
@@ -247,12 +329,14 @@ impl CliArgs {
let mut problems = vec![];
if !force && pkf.is_file() {
problems.push(format!(
"public-key file {pkf:?} exist, refusing to overwrite it"
"public-key file {:?} exists, refusing to overwrite",
std::fs::canonicalize(&pkf)?,
));
}
if !force && skf.is_file() {
problems.push(format!(
"secret-key file {skf:?} exist, refusing to overwrite it"
"secret-key file {:?} exists, refusing to overwrite",
std::fs::canonicalize(&skf)?,
));
}
if !problems.is_empty() {
@@ -263,7 +347,7 @@ impl CliArgs {
generate_and_save_keypair(skf, pkf)?;
}
ExchangeConfig { config_file } => {
Some(ExchangeConfig { config_file }) => {
ensure!(
config_file.exists(),
"config file '{config_file:?}' does not exist"
@@ -272,15 +356,16 @@ impl CliArgs {
let mut config = config::Rosenpass::load(config_file)?;
config.validate()?;
self.apply_to_config(&mut config)?;
config.check_usefullness()?;
Self::event_loop(config, test_helpers)?;
Self::event_loop(config, broker_interface, test_helpers)?;
}
Exchange {
Some(Exchange {
first_arg,
rest_of_args,
config_file,
} => {
}) => {
let mut rest_of_args = rest_of_args.clone();
rest_of_args.insert(0, first_arg.clone());
let args = rest_of_args;
@@ -292,24 +377,27 @@ impl CliArgs {
}
config.validate()?;
self.apply_to_config(&mut config)?;
config.check_usefullness()?;
Self::event_loop(config, test_helpers)?;
Self::event_loop(config, broker_interface, test_helpers)?;
}
Validate { config_files } => {
Some(Validate { config_files }) => {
for file in config_files {
match config::Rosenpass::load(file) {
Ok(config) => {
eprintln!("{file:?} is valid TOML and conforms to the expected schema");
match config.validate() {
Ok(_) => eprintln!("{file:?} has passed all logical checks"),
Err(_) => eprintln!("{file:?} contains logical errors"),
Err(err) => eprintln!("{file:?} contains logical errors: '{err}'"),
}
}
Err(e) => eprintln!("{file:?} is not valid: {e}"),
}
}
}
&None => {} // clap prints help if no command is given
}
Ok(())
@@ -317,18 +405,25 @@ impl CliArgs {
fn event_loop(
config: config::Rosenpass,
broker_interface: Option<BrokerInterface>,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<()> {
const MAX_PSK_SIZE: usize = 1000;
// load own keys
let sk = SSk::load(&config.secret_key)?;
let pk = SPk::load(&config.public_key)?;
let keypair = config
.keypair
.as_ref()
.map(|kp| -> anyhow::Result<_> {
let sk = SSk::load(&kp.secret_key)?;
let pk = SPk::load(&kp.public_key)?;
Ok((sk, pk))
})
.transpose()?;
// start an application server
let mut srv = std::boxed::Box::<AppServer>::new(AppServer::new(
sk,
pk,
keypair,
config.listen.clone(),
config.verbosity,
test_helpers,
@@ -336,7 +431,8 @@ impl CliArgs {
config.apply_to_app_server(&mut srv)?;
let broker_store_ptr = srv.register_broker(Box::new(NativeUnixBroker::new()))?;
let broker = Self::create_broker(broker_interface)?;
let broker_store_ptr = srv.register_broker(broker)?;
fn cfg_err_map(e: NativeUnixBrokerConfigBaseBuilderError) -> anyhow::Error {
anyhow::Error::msg(format!("NativeUnixBrokerConfigBaseBuilderError: {:?}", e))
@@ -373,6 +469,83 @@ impl CliArgs {
srv.event_loop()
}
#[cfg(feature = "experiment_api")]
fn create_broker(
broker_interface: Option<BrokerInterface>,
) -> Result<
Box<dyn WireguardBrokerMio<MioError = anyhow::Error, Error = anyhow::Error>>,
anyhow::Error,
> {
if let Some(interface) = broker_interface {
let socket = Self::get_broker_socket(interface)?;
Ok(Box::new(MioBrokerClient::new(socket)))
} else {
Ok(Box::new(NativeUnixBroker::new()))
}
}
#[cfg(not(feature = "experiment_api"))]
fn create_broker(
_broker_interface: Option<BrokerInterface>,
) -> Result<Box<NativeUnixBroker>, anyhow::Error> {
Ok(Box::new(NativeUnixBroker::new()))
}
#[cfg(feature = "experiment_api")]
fn get_broker_socket(broker_interface: BrokerInterface) -> Result<UnixStream, anyhow::Error> {
// Connect to the psk broker unix socket if one was specified
// OR OTHERWISE spawn the psk broker and use socketpair(2) to connect with them
match broker_interface {
BrokerInterface::Socket(broker_path) => Ok(UnixStream::connect(broker_path)?),
BrokerInterface::FileDescriptor(broker_fd) => {
// mio::net::UnixStream doesn't implement From<OwnedFd>, so we have to go through std
let sock = net::UnixStream::from(claim_fd(broker_fd)?);
sock.set_nonblocking(true)?;
Ok(UnixStream::from_std(sock))
}
BrokerInterface::SocketPair => {
// Form a socketpair for communicating to the broker
let (ours, theirs) = socketpair(
AddressFamily::UNIX,
SocketType::STREAM,
SocketFlags::empty(),
None,
)?;
// Setup our end of the socketpair
let ours = net::UnixStream::from(ours);
ours.set_nonblocking(true)?;
// Start the PSK broker
let mut child = Command::new("rosenpass-wireguard-broker-socket-handler")
.args(["--stream-fd", "3"])
.fd_mappings(vec![FdMapping {
parent_fd: theirs.as_raw_fd(),
child_fd: 3,
}])?
.spawn()?;
// Handle the PSK broker crashing
thread::spawn(move || {
let status = child.wait();
if let Ok(status) = status {
if status.success() {
// Maybe they are doing double forking?
info!("PSK broker exited.");
} else {
error!("PSK broker exited with an error ({status:?})");
}
} else {
error!("Wait on PSK broker process failed ({status:?})");
}
});
Ok(UnixStream::from_std(ours))
}
}
}
}
/// generate secret and public keys, store in files according to the paths passed as arguments

View File

@@ -6,7 +6,8 @@
//! ## TODO
//! - support `~` in <https://github.com/rosenpass/rosenpass/issues/237>
//! - provide tooling to create config file from shell <https://github.com/rosenpass/rosenpass/issues/247>
use crate::protocol::{SPk, SSk};
use rosenpass_util::file::LoadValue;
use std::{
collections::HashSet,
fs,
@@ -21,16 +22,25 @@ use serde::{Deserialize, Serialize};
use crate::app_server::AppServer;
#[cfg(feature = "experiment_api")]
fn empty_api_config() -> crate::api::config::ApiConfig {
crate::api::config::ApiConfig {
listen_path: Vec::new(),
listen_fd: Vec::new(),
stream_fd: Vec::new(),
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct Rosenpass {
/// path to the public key file
pub public_key: PathBuf,
/// path to the secret key file
pub secret_key: PathBuf,
// TODO: Raise error if secret key or public key alone is set during deserialization
// SEE: https://github.com/serde-rs/serde/issues/2793
#[serde(flatten)]
pub keypair: Option<Keypair>,
/// Location of the API listen sockets
#[cfg(feature = "experiment_api")]
#[serde(default = "empty_api_config")]
pub api: crate::api::config::ApiConfig,
/// list of [`SocketAddr`] to listen on
@@ -58,6 +68,26 @@ pub struct Rosenpass {
pub config_file_path: PathBuf,
}
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq, Clone)]
pub struct Keypair {
/// path to the public key file
pub public_key: PathBuf,
/// path to the secret key file
pub secret_key: PathBuf,
}
impl Keypair {
pub fn new<Pk: AsRef<Path>, Sk: AsRef<Path>>(public_key: Pk, secret_key: Sk) -> Self {
let public_key = public_key.as_ref().to_path_buf();
let secret_key = secret_key.as_ref().to_path_buf();
Self {
public_key,
secret_key,
}
}
}
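
A self-contained illustration (mirror structs, not the real config types) of what the flattened, optional `keypair` field means for the TOML shape: `public_key` and `secret_key` stay at the top level, omitting both yields `keypair = None`, and the serde issue referenced in the TODO above means setting only one of them is currently also accepted silently.

```rust
use serde::Deserialize;
use std::path::PathBuf;

#[derive(Debug, Deserialize, PartialEq)]
struct Keypair {
    public_key: PathBuf,
    secret_key: PathBuf,
}

#[derive(Debug, Deserialize)]
struct MiniConfig {
    // Same pattern as the real `Rosenpass` struct above.
    #[serde(flatten)]
    keypair: Option<Keypair>,
    #[serde(default)]
    listen: Vec<String>,
}

fn main() {
    let with_keys: MiniConfig =
        toml::from_str("public_key = \"/my/pk\"\nsecret_key = \"/my/sk\"\nlisten = []").unwrap();
    assert_eq!(
        with_keys.keypair,
        Some(Keypair {
            public_key: "/my/pk".into(),
            secret_key: "/my/sk".into(),
        })
    );

    // Leaving both keys out is accepted and simply gives `None`.
    let without_keys: MiniConfig = toml::from_str("listen = []").unwrap();
    assert!(without_keys.keypair.is_none());

    // The TODO above: setting only one of the two keys also yields `None`
    // silently today (see serde-rs/serde#2793).
    let half: MiniConfig = toml::from_str("public_key = \"/my/pk\"\nlisten = []").unwrap();
    assert!(half.keypair.is_none());
}
```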
/// ## TODO
/// - replace this type with [`log::LevelFilter`], also see <https://github.com/rosenpass/rosenpass/pull/246>
#[derive(Debug, PartialEq, Eq, Serialize, Deserialize, Copy, Clone)]
@@ -113,6 +143,12 @@ pub struct WireGuard {
pub extra_params: Vec<String>,
}
impl Default for Rosenpass {
fn default() -> Self {
Self::empty()
}
}
impl Rosenpass {
/// load configuration from a TOML file
///
@@ -128,8 +164,10 @@ impl Rosenpass {
// resolve `~` (see https://github.com/rosenpass/rosenpass/issues/237)
use util::resolve_path_with_tilde;
resolve_path_with_tilde(&mut config.public_key);
resolve_path_with_tilde(&mut config.secret_key);
if let Some(ref mut keypair) = config.keypair {
resolve_path_with_tilde(&mut keypair.public_key);
resolve_path_with_tilde(&mut keypair.secret_key);
}
for peer in config.peers.iter_mut() {
resolve_path_with_tilde(&mut peer.public_key);
if let Some(ref mut psk) = &mut peer.pre_shared_key {
@@ -170,24 +208,36 @@ impl Rosenpass {
}
/// Validate a configuration
///
/// ## TODO
/// - check that files do not just exist but are also readable
/// - warn if neither out_key nor exchange_command of a peer is defined (v.i.)
pub fn validate(&self) -> anyhow::Result<()> {
// check the public key file exists
ensure!(
self.public_key.is_file(),
"could not find public-key file {:?}: no such file",
self.public_key
);
if let Some(ref keypair) = self.keypair {
// check the public key file exists
ensure!(
keypair.public_key.is_file(),
"could not find public-key file {:?}: no such file. Consider running `rosenpass gen-keys` to generate a new keypair.",
keypair.public_key
);
// check the secret-key file exists
ensure!(
self.secret_key.is_file(),
"could not find secret-key file {:?}: no such file",
self.secret_key
);
// check the public-key file is a valid key
ensure!(
SPk::load(&keypair.public_key).is_ok(),
"could not load public-key file {:?}: invalid key",
keypair.public_key
);
// check the secret-key file exists
ensure!(
keypair.secret_key.is_file(),
"could not find secret-key file {:?}: no such file. Consider running `rosenpass gen-keys` to generate a new keypair.",
keypair.secret_key
);
// check the secret-key file is a valid key
ensure!(
SSk::load(&keypair.secret_key).is_ok(),
"could not load secret-key file {:?}: invalid key",
keypair.secret_key
);
}
for (i, peer) in self.peers.iter().enumerate() {
// check peer's public-key file exists
@@ -197,6 +247,13 @@ impl Rosenpass {
peer.public_key
);
// check peer's public-key file is a valid key
ensure!(
SPk::load(&peer.public_key).is_ok(),
"peer {i} public-key file {:?} is invalid",
peer.public_key
);
// check endpoint is usable
if let Some(addr) = peer.endpoint.as_ref() {
ensure!(
@@ -206,17 +263,54 @@ impl Rosenpass {
);
}
// TODO warn if neither out_key nor exchange_command is defined
// check that either `key_out` or both `device` and `peer` are defined
if peer.key_out.is_none() {
if let Some(wg) = &peer.wg {
if wg.device.is_empty() || wg.peer.is_empty() {
ensure!(
false,
"peer {i} has neither `key_out` nor valid wireguard config defined"
);
}
} else {
ensure!(
false,
"peer {i} has neither `key_out` nor valid wireguard config defined"
);
}
}
}
Ok(())
}
pub fn check_usefullness(&self) -> anyhow::Result<()> {
#[cfg(not(feature = "experiment_api"))]
ensure!(self.keypair.is_some(), "Server keypair missing.");
#[cfg(feature = "experiment_api")]
ensure!(
self.keypair.is_some() || self.api.has_api_sources(),
"{}{}",
"Specify a server keypair or some API connections to configure the keypair with. ",
"Without a keypair, rosenpass cannot operate."
);
Ok(())
}
pub fn empty() -> Self {
Self::new(None)
}
pub fn from_sk_pk<Sk: AsRef<Path>, Pk: AsRef<Path>>(sk: Sk, pk: Pk) -> Self {
Self::new(Some(Keypair::new(pk, sk)))
}
/// Creates a new configuration
pub fn new<P1: AsRef<Path>, P2: AsRef<Path>>(public_key: P1, secret_key: P2) -> Self {
pub fn new(keypair: Option<Keypair>) -> Self {
Self {
public_key: PathBuf::from(public_key.as_ref()),
secret_key: PathBuf::from(secret_key.as_ref()),
keypair,
listen: vec![],
#[cfg(feature = "experiment_api")]
api: crate::api::config::ApiConfig::default(),
@@ -242,7 +336,7 @@ impl Rosenpass {
/// from chaotic args
/// Quest: the grammar is undecidable, what do we do here?
pub fn parse_args(args: Vec<String>) -> anyhow::Result<Self> {
let mut config = Self::new("", "");
let mut config = Self::new(Some(Keypair::new("", "")));
#[derive(Debug, Hash, PartialEq, Eq)]
enum State {
@@ -303,7 +397,7 @@ impl Rosenpass {
already_set.insert(OwnPublicKey),
"public-key was already set"
);
config.public_key = pk.into();
config.keypair.as_mut().unwrap().public_key = pk.into();
Own
}
(OwnSecretKey, sk, None) => {
@@ -311,7 +405,7 @@ impl Rosenpass {
already_set.insert(OwnSecretKey),
"secret-key was already set"
);
config.secret_key = sk.into();
config.keypair.as_mut().unwrap().secret_key = sk.into();
Own
}
(OwnListen, l, None) => {
@@ -430,45 +524,146 @@ impl Rosenpass {
}
}
impl Rosenpass {
/// Generate an example configuration
pub fn example_config() -> Self {
let peer = RosenpassPeer {
public_key: "/path/to/rp-peer-public-key".into(),
endpoint: Some("my-peer.test:9999".into()),
key_out: Some("/path/to/rp-key-out.txt".into()),
pre_shared_key: Some("additional pre shared key".into()),
wg: Some(WireGuard {
device: "wirgeguard device e.g. wg0".into(),
peer: "wireguard public key".into(),
extra_params: vec!["passed to".into(), "wg set".into()],
}),
};
Self {
public_key: "/path/to/rp-public-key".into(),
secret_key: "/path/to/rp-secret-key".into(),
peers: vec![peer],
..Self::new("", "")
}
}
}
impl Default for Verbosity {
fn default() -> Self {
Self::Quiet
}
}
pub static EXAMPLE_CONFIG: &str = r###"public_key = "/path/to/rp-public-key"
secret_key = "/path/to/rp-secret-key"
listen = []
verbosity = "Verbose"
[[peers]]
# Commented out fields are optional
public_key = "/path/to/rp-peer-public-key"
endpoint = "127.0.0.1:9998"
# pre_shared_key = "/path/to/preshared-key"
# Choose to store the key in a file via `key_out` or pass it to WireGuard by
# defining `device` and `peer`. You may choose to do both.
key_out = "/path/to/rp-key-out.txt" # path to store the key
# device = "wg0" # WireGuard interface
# peer = "RULdRAtUw7SFfVfGD..." # WireGuard public key
# extra_params = [] # passed to WireGuard `wg set`
"###;
#[cfg(test)]
mod test {
use super::*;
use std::net::IpAddr;
use std::{borrow::Borrow, net::IpAddr};
fn toml_des<S: Borrow<str>>(s: S) -> Result<toml::Table, toml::de::Error> {
toml::from_str(s.borrow())
}
fn toml_ser<S: Serialize>(s: S) -> Result<toml::Table, toml::ser::Error> {
toml::Table::try_from(s)
}
fn assert_toml<L: Serialize, R: Borrow<str>>(l: L, r: R, info: &str) -> anyhow::Result<()> {
fn lines_prepend(prefix: &str, s: &str) -> anyhow::Result<String> {
use std::fmt::Write;
let mut buf = String::new();
for line in s.lines() {
writeln!(&mut buf, "{prefix}{line}")?;
}
Ok(buf)
}
let l = toml_ser(l)?;
let r = toml_des(r.borrow())?;
ensure!(
l == r,
"{}{}TOML value mismatch.\n Have:\n{}\n Expected:\n{}",
info,
if info.is_empty() { "" } else { ": " },
lines_prepend(" ", &toml::to_string_pretty(&l)?)?,
lines_prepend(" ", &toml::to_string_pretty(&r)?)?
);
Ok(())
}
fn assert_toml_round<'de, L: Serialize + Deserialize<'de>, R: Borrow<str>>(
l: L,
r: R,
) -> anyhow::Result<()> {
let l = toml_ser(l)?;
assert_toml(&l, r.borrow(), "Straight deserialization")?;
let l: L = l.try_into().unwrap();
let l = toml_ser(l).unwrap();
assert_toml(l, r.borrow(), "Roundtrip deserialization")?;
Ok(())
}
fn split_str(s: &str) -> Vec<String> {
s.split(' ').map(|s| s.to_string()).collect()
}
#[test]
fn toml_serialization() -> anyhow::Result<()> {
#[cfg(feature = "experiment_api")]
assert_toml_round(
Rosenpass::empty(),
r#"
listen = []
verbosity = "Quiet"
peers = []
[api]
listen_path = []
listen_fd = []
stream_fd = []
"#,
)?;
#[cfg(not(feature = "experiment_api"))]
assert_toml_round(
Rosenpass::empty(),
r#"
listen = []
verbosity = "Quiet"
peers = []
"#,
)?;
#[cfg(feature = "experiment_api")]
assert_toml_round(
Rosenpass::from_sk_pk("/my/sk", "/my/pk"),
r#"
public_key = "/my/pk"
secret_key = "/my/sk"
listen = []
verbosity = "Quiet"
peers = []
[api]
listen_path = []
listen_fd = []
stream_fd = []
"#,
)?;
#[cfg(not(feature = "experiment_api"))]
assert_toml_round(
Rosenpass::from_sk_pk("/my/sk", "/my/pk"),
r#"
public_key = "/my/pk"
secret_key = "/my/sk"
listen = []
verbosity = "Quiet"
peers = []
"#,
)?;
Ok(())
}
#[test]
fn test_simple_cli_parse() {
let args = split_str(
@@ -479,8 +674,10 @@ mod test {
let config = Rosenpass::parse_args(args).unwrap();
assert_eq!(config.public_key, PathBuf::from("/my/public-key"));
assert_eq!(config.secret_key, PathBuf::from("/my/secret-key"));
assert_eq!(
config.keypair,
Some(Keypair::new("/my/public-key", "/my/secret-key"))
);
assert_eq!(config.verbosity, Verbosity::Verbose);
assert_eq!(
&config.listen,
@@ -509,8 +706,10 @@ mod test {
let config = Rosenpass::parse_args(args).unwrap();
assert_eq!(config.public_key, PathBuf::from("/my/public-key"));
assert_eq!(config.secret_key, PathBuf::from("/my/secret-key"));
assert_eq!(
config.keypair,
Some(Keypair::new("/my/public-key", "/my/secret-key"))
);
assert_eq!(config.verbosity, Verbosity::Verbose);
assert!(&config.listen.is_empty());
assert_eq!(

View File

@@ -1,13 +1,51 @@
use clap::CommandFactory;
use clap::Parser;
use clap_mangen::roff::{roman, Roff};
use log::error;
use rosenpass::cli::CliArgs;
use std::process::exit;
fn print_custom_man_section(section: &str, text: &str, file: &mut std::fs::File) {
let mut roff = Roff::default();
roff.control("SH", [section]);
roff.text([roman(text)]);
let _ = roff.to_writer(file);
}
/// Catches errors, prints them through the logger, then exits
pub fn main() {
// parse CLI arguments
let args = CliArgs::parse();
if let Some(shell) = args.print_completions {
let mut cmd = CliArgs::command();
clap_complete::generate(shell, &mut cmd, "rosenpass", &mut std::io::stdout());
return;
}
if let Some(out_dir) = args.generate_manpage {
std::fs::create_dir_all(&out_dir).expect("Failed to create man pages directory");
let cmd = CliArgs::command();
let man = clap_mangen::Man::new(cmd.clone());
let _ = clap_mangen::generate_to(cmd, &out_dir);
let file_path = out_dir.join("rosenpass.1");
let mut file = std::fs::File::create(file_path).expect("Failed to create man page file");
let _ = man.render_title(&mut file);
let _ = man.render_name_section(&mut file);
let _ = man.render_synopsis_section(&mut file);
let _ = man.render_subcommands_section(&mut file);
let _ = man.render_options_section(&mut file);
print_custom_man_section("EXIT STATUS", EXIT_STATUS_MAN, &mut file);
print_custom_man_section("SEE ALSO", SEE_ALSO_MAN, &mut file);
print_custom_man_section("STANDARDS", STANDARDS_MAN, &mut file);
print_custom_man_section("AUTHORS", AUTHORS_MAN, &mut file);
print_custom_man_section("BUGS", BUGS_MAN, &mut file);
return;
}
{
use rosenpass_secret_memory as SM;
#[cfg(feature = "experiment_memfd_secret")]
@@ -34,7 +72,8 @@ pub fn main() {
// error!("error dummy");
}
match args.run(None) {
let broker_interface = args.get_broker_interface();
match args.run(broker_interface, None) {
Ok(_) => {}
Err(e) => {
error!("{e:?}");
@@ -42,3 +81,21 @@ pub fn main() {
}
}
}
static EXIT_STATUS_MAN: &str = r"
The rosenpass utility exits 0 on success, and >0 if an error occurs.";
static SEE_ALSO_MAN: &str = r"
rp(1), wg(1)
Karolin Varner, Benjamin Lipp, Wanja Zaeske, and Lisa Schmidt, Rosenpass, https://rosenpass.eu/whitepaper.pdf, 2023.";
static STANDARDS_MAN: &str = r"
This tool is the reference implementation of the Rosenpass protocol, as
specified within the whitepaper referenced above.";
static AUTHORS_MAN: &str = r"
Rosenpass was created by Karolin Varner, Benjamin Lipp, Wanja Zaeske, Marei
Peischl, Stephan Ajuvo, and Lisa Schmidt.";
static BUGS_MAN: &str = r"
The bugs are tracked at https://github.com/rosenpass/rosenpass/issues.";

View File

@@ -0,0 +1,127 @@
use rosenpass_util::{
build::Build,
mem::{DiscardResultExt, SwapWithDefaultExt},
result::ensure_or,
};
use thiserror::Error;
use super::{CryptoServer, PeerPtr, SPk, SSk, SymKey};
#[derive(Debug, Clone)]
pub struct Keypair {
pub sk: SSk,
pub pk: SPk,
}
// TODO: We need a named tuple derive
impl Keypair {
pub fn new(sk: SSk, pk: SPk) -> Self {
Self { sk, pk }
}
pub fn zero() -> Self {
Self::new(SSk::zero(), SPk::zero())
}
pub fn random() -> Self {
Self::new(SSk::random(), SPk::random())
}
pub fn from_parts(parts: (SSk, SPk)) -> Self {
Self::new(parts.0, parts.1)
}
pub fn into_parts(self) -> (SSk, SPk) {
(self.sk, self.pk)
}
}
#[derive(Error, Debug)]
#[error("PSK already set in BuildCryptoServer")]
pub struct PskAlreadySet;
#[derive(Error, Debug)]
#[error("Keypair already set in BuildCryptoServer")]
pub struct KeypairAlreadySet;
#[derive(Error, Debug)]
#[error("Can not construct CryptoServer: Missing keypair")]
pub struct MissingKeypair;
#[derive(Debug, Default)]
pub struct BuildCryptoServer {
pub keypair: Option<Keypair>,
pub peers: Vec<PeerParams>,
}
impl Build<CryptoServer> for BuildCryptoServer {
type Error = anyhow::Error;
fn build(self) -> Result<CryptoServer, Self::Error> {
let Some(Keypair { sk, pk }) = self.keypair else {
return Err(MissingKeypair)?;
};
let mut srv = CryptoServer::new(sk, pk);
for (idx, PeerParams { psk, pk }) in self.peers.into_iter().enumerate() {
let PeerPtr(idx2) = srv.add_peer(psk, pk)?;
assert!(idx == idx2, "Peer id changed during CryptoServer construction from {idx} to {idx2}. This is a developer error.")
}
Ok(srv)
}
}
#[derive(Debug)]
pub struct PeerParams {
pub psk: Option<SymKey>,
pub pk: SPk,
}
impl BuildCryptoServer {
pub fn new(keypair: Option<Keypair>, peers: Vec<PeerParams>) -> Self {
Self { keypair, peers }
}
pub fn empty() -> Self {
Self::new(None, Vec::new())
}
pub fn from_parts(parts: (Option<Keypair>, Vec<PeerParams>)) -> Self {
Self {
keypair: parts.0,
peers: parts.1,
}
}
pub fn take_parts(&mut self) -> (Option<Keypair>, Vec<PeerParams>) {
(self.keypair.take(), self.peers.swap_with_default())
}
pub fn into_parts(mut self) -> (Option<Keypair>, Vec<PeerParams>) {
self.take_parts()
}
pub fn with_keypair(&mut self, keypair: Keypair) -> Result<&mut Self, KeypairAlreadySet> {
ensure_or(self.keypair.is_none(), KeypairAlreadySet)?;
self.keypair.insert(keypair).discard_result();
Ok(self)
}
pub fn with_added_peer(&mut self, psk: Option<SymKey>, pk: SPk) -> &mut Self {
// TODO: Check here already whether peer was already added
self.peers.push(PeerParams { psk, pk });
self
}
pub fn add_peer(&mut self, psk: Option<SymKey>, pk: SPk) -> PeerPtr {
let id = PeerPtr(self.peers.len());
self.with_added_peer(psk, pk);
id
}
pub fn emancipate(&mut self) -> Self {
Self::from_parts(self.take_parts())
}
}
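
A hypothetical usage sketch for this builder: peers can be queued before the keypair is known (for example while waiting for a `SupplyKeypair` API request), and the `CryptoServer` is constructed once it arrives. It assumes `BuildCryptoServer`, `Keypair`, and the key types are re-exported from `rosenpass::protocol` as in the module file below, and that randomly generated keys are structurally acceptable for a sketch.

```rust
use anyhow::Result;
use rosenpass::protocol::{BuildCryptoServer, Keypair, SPk};
use rosenpass_util::build::Build;

fn main() -> Result<()> {
    let mut builder = BuildCryptoServer::empty();

    // Peers may be registered before any keypair exists.
    let peer = builder.add_peer(None, SPk::random());

    // Later, once the keypair is available (e.g. supplied via the API)...
    builder.with_keypair(Keypair::random())?;

    // ...the actual CryptoServer can be built; peer ids are preserved.
    let srv = builder.emancipate().build()?;
    let _ = (srv, peer);
    Ok(())
}
```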

View File

@@ -0,0 +1,6 @@
mod build_crypto_server;
#[allow(clippy::module_inception)]
mod protocol;
pub use build_crypto_server::*;
pub use protocol::*;

View File

@@ -91,19 +91,13 @@ use rosenpass_ciphers::kem::{EphemeralKem, StaticKem};
use rosenpass_ciphers::{aead, xaead, KEY_LEN};
use rosenpass_constant_time as constant_time;
use rosenpass_secret_memory::{Public, PublicBox, Secret};
use rosenpass_util::{cat, mem::cpy_min, ord::max_usize, time::Timebase};
use rosenpass_util::{cat, mem::cpy_min, time::Timebase};
use zerocopy::{AsBytes, FromBytes, Ref};
use crate::{hash_domains, msgs::*, RosenpassError};
// CONSTANTS & SETTINGS //////////////////////////
/// Size required to fit any message in binary form
pub const RTX_BUFFER_SIZE: usize = max_usize(
size_of::<Envelope<InitHello>>(),
size_of::<Envelope<InitConf>>(),
);
/// A type for time, e.g. for backoff before re-tries
pub type Timing = f64;
@@ -140,11 +134,10 @@ pub const PEER_COOKIE_VALUE_EPOCH: Timing = 120.0;
// decryption for a second epoch
pub const BISCUIT_EPOCH: Timing = 300.0;
// Retransmission pub constants; will retransmit for up to _ABORT ms; starting with a delay of
// _DELAY_BEG ms and increasing the delay exponentially by a factor of
// _DELAY_GROWTH up to _DELAY_END. An additional jitter factor of ±_DELAY_JITTER
// is added.
pub const RETRANSMIT_ABORT: Timing = 120.0;
// Retransmission pub constants; will retransmit for up to _ABORT seconds;
// starting with a delay of _DELAY_BEGIN seconds and increasing the delay
// exponentially by a factor of _DELAY_GROWTH up to _DELAY_END.
// An additional jitter factor of ±_DELAY_JITTER is added.
pub const RETRANSMIT_DELAY_GROWTH: Timing = 2.0;
pub const RETRANSMIT_DELAY_BEGIN: Timing = 0.5;
pub const RETRANSMIT_DELAY_END: Timing = 10.0;
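
Written out as a function, the schedule the comment above describes looks roughly like this. It is illustrative only and not part of the diff: jitter is omitted, the constants are copied from the lines above, and the exact expression in `IniHsPtr` differs in detail. With `_DELAY_BEGIN = 0.5`, `_DELAY_GROWTH = 2.0` and `_DELAY_END = 10.0`, successive attempts wait 0.5 s, 1 s, 2 s, 4 s, 8 s, then 10 s until the abort limit is reached.

```rust
/// Simplified backoff: `tx_count` is the number of retransmissions already sent.
fn base_retransmit_delay(tx_count: u32) -> f64 {
    const RETRANSMIT_DELAY_BEGIN: f64 = 0.5;
    const RETRANSMIT_DELAY_GROWTH: f64 = 2.0;
    const RETRANSMIT_DELAY_END: f64 = 10.0;
    (RETRANSMIT_DELAY_BEGIN * RETRANSMIT_DELAY_GROWTH.powi(tx_count as i32))
        .min(RETRANSMIT_DELAY_END)
}

fn main() {
    let schedule: Vec<f64> = (0..7).map(base_retransmit_delay).collect();
    assert_eq!(schedule, vec![0.5, 1.0, 2.0, 4.0, 8.0, 10.0, 10.0]);
}
```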
@@ -1479,7 +1472,7 @@ impl IniHsPtr {
.min(ih.tx_count as f64),
)
* RETRANSMIT_DELAY_JITTER
* (rand::random::<f64>() + 1.0); // TODO: Replace with the rand crate
* (rand::random::<f64>() + 1.0);
ih.tx_count += 1;
Ok(())
}
@@ -2016,8 +2009,7 @@ impl CryptoServer {
// Send ack. Implementing the sending of the empty acknowledgement here
// instead of a generic PeerPtr::send(&Server, Option<&[u8]>) -> Either<EmptyData, Data>
// because data transmission is a stub currently. This software is supposed to be used
// as a key exchange service feeding a PSK into some classical (i.e. non post quantum)
// because data transmission is a stub currently.
let ses = peer
.session()
.get_mut(self)

View File

@@ -0,0 +1,332 @@
use std::{
borrow::Borrow,
io::{BufRead, BufReader, Write},
os::unix::net::UnixStream,
process::Stdio,
thread::sleep,
time::Duration,
};
use anyhow::{bail, Context};
use command_fds::{CommandFdExt, FdMapping};
use hex_literal::hex;
use rosenpass::api::{
self, add_listen_socket_response_status, add_psk_broker_response_status,
supply_keypair_response_status,
};
use rosenpass_util::{
b64::B64Display,
file::LoadValueB64,
io::IoErrorKind,
length_prefix_encoding::{decoder::LengthPrefixDecoder, encoder::LengthPrefixEncoder},
mem::{DiscardResultExt, MoveExt},
mio::WriteWithFileDescriptors,
zerocopy::ZerocopySliceExt,
};
use std::os::fd::{AsFd, AsRawFd};
use tempfile::TempDir;
use zerocopy::AsBytes;
use rosenpass::protocol::SymKey;
struct KillChild(std::process::Child);
impl Drop for KillChild {
fn drop(&mut self) {
self.0.kill().discard_result();
self.0.wait().discard_result()
}
}
#[test]
fn api_integration_api_setup() -> anyhow::Result<()> {
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
let dir = TempDir::with_prefix("rosenpass-api-integration-test")?;
macro_rules! tempfile {
($($lst:expr),+) => {{
let mut buf = dir.path().to_path_buf();
$(buf.push($lst);)*
buf
}}
}
let peer_a_endpoint = "[::1]:0";
let peer_a_listen = std::net::UdpSocket::bind(peer_a_endpoint)?;
let peer_a_endpoint = format!("{}", peer_a_listen.local_addr()?);
let peer_a_keypair = config::Keypair::new(tempfile!("a.pk"), tempfile!("a.sk"));
let peer_b_osk = tempfile!("b.osk");
let peer_b_wg_device = "mock_device";
let peer_b_wg_peer_id = hex!(
"
93 0f ee 77 0c 6b 54 7e 13 5f 13 92 21 97 26 53
7d 77 4a 6a 0f 6c eb 1a dd 6e 5b c4 1b 92 cd 99
"
);
use rosenpass::config;
let peer_a = config::Rosenpass {
config_file_path: tempfile!("a.config"),
keypair: None,
listen: vec![], // TODO: This could collide by accident
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
listen_path: vec![tempfile!("a.sock")],
listen_fd: vec![],
stream_fd: vec![],
},
peers: vec![config::RosenpassPeer {
public_key: tempfile!("b.pk"),
key_out: None,
endpoint: None,
pre_shared_key: None,
wg: Some(config::WireGuard {
device: peer_b_wg_device.to_string(),
peer: format!("{}", peer_b_wg_peer_id.fmt_b64::<8129>()),
extra_params: vec![],
}),
}],
};
let peer_b_keypair = config::Keypair::new(tempfile!("b.pk"), tempfile!("b.sk"));
let peer_b = config::Rosenpass {
config_file_path: tempfile!("b.config"),
keypair: Some(peer_b_keypair.clone()),
listen: vec![],
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
listen_path: vec![tempfile!("b.sock")],
listen_fd: vec![],
stream_fd: vec![],
},
peers: vec![config::RosenpassPeer {
public_key: tempfile!("a.pk"),
key_out: Some(peer_b_osk.clone()),
endpoint: Some(peer_a_endpoint.to_owned()),
pre_shared_key: None,
wg: None,
}],
};
// Generate the keys
rosenpass::cli::testing::generate_and_save_keypair(
peer_a_keypair.secret_key.clone(),
peer_a_keypair.public_key.clone(),
)?;
rosenpass::cli::testing::generate_and_save_keypair(
peer_b_keypair.secret_key.clone(),
peer_b_keypair.public_key.clone(),
)?;
// Write the configuration files
peer_a.commit()?;
peer_b.commit()?;
let (deliberate_fail_api_client, deliberate_fail_api_server) =
std::os::unix::net::UnixStream::pair()?;
let deliberate_fail_child_fd = 3;
// Start peer a
let _proc_a = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args(["--api-stream-fd", &deliberate_fail_child_fd.to_string()])
.fd_mappings(vec![FdMapping {
parent_fd: deliberate_fail_api_server.move_here().as_raw_fd(),
child_fd: 3,
}])?
.args([
"exchange-config",
peer_a.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::null())
.spawn()?,
);
// Start peer b
let mut proc_b = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_b.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stderr(Stdio::null())
.stdout(Stdio::piped())
.spawn()?,
);
// Acquire stdout
let mut out_b = BufReader::new(proc_b.0.stdout.take().context("")?).lines();
// Now connect to the peers
let api_path = peer_a.api.listen_path[0].as_path();
// Wait for the socket to be created
let mut attempt = 0;
while !api_path.exists() {
sleep(Duration::from_millis(200));
attempt += 1;
assert!(
attempt < 50,
"API socket failed to appear even after ten seconds"
);
}
let api = UnixStream::connect(api_path)?;
let (psk_broker_sock, psk_broker_server_sock) = UnixStream::pair()?;
// Send AddListenSocket request
{
let fd = peer_a_listen.as_fd();
let mut fds = vec![&fd].into();
let mut api = WriteWithFileDescriptors::<UnixStream, _, _, _>::new(&api, &mut fds);
LengthPrefixEncoder::from_message(api::AddListenSocketRequest::new().as_bytes())
.write_all_to_stdio(&mut api)?;
assert!(fds.is_empty(), "Failed to write all file descriptors");
std::mem::forget(peer_a_listen);
}
// Read response
{
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(&api)?;
let res = res.zk_parse::<api::AddListenSocketResponse>()?;
assert_eq!(
*res,
api::AddListenSocketResponse::new(add_listen_socket_response_status::OK)
);
}
// Deliberately break API connection given via FD; this checks that the
// API connections are closed when invalid data is received and it also
// implicitly checks that other connections are unaffected
{
use std::io::ErrorKind as K;
let client = deliberate_fail_api_client;
let err = loop {
if let Err(e) = client.borrow().write(&[0xffu8; 16]) {
break e;
}
};
// NotConnected happens on Mac
assert!(matches!(
err.io_error_kind(),
K::ConnectionReset | K::BrokenPipe | K::NotConnected
));
}
// Send SupplyKeypairRequest
{
use rustix::fs::{open, Mode, OFlags};
let sk = open(peer_a_keypair.secret_key, OFlags::RDONLY, Mode::empty())?;
let pk = open(peer_a_keypair.public_key, OFlags::RDONLY, Mode::empty())?;
let mut fds = vec![&sk, &pk].into();
let mut api = WriteWithFileDescriptors::<UnixStream, _, _, _>::new(&api, &mut fds);
LengthPrefixEncoder::from_message(api::SupplyKeypairRequest::new().as_bytes())
.write_all_to_stdio(&mut api)?;
assert!(fds.is_empty(), "Failed to write all file descriptors");
}
// Read response
{
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(&api)?;
let res = res.zk_parse::<api::SupplyKeypairResponse>()?;
assert_eq!(
*res,
api::SupplyKeypairResponse::new(supply_keypair_response_status::OK)
);
}
// Send AddPskBroker request
{
let mut fds = vec![psk_broker_server_sock.as_fd()].into();
let mut api = WriteWithFileDescriptors::<UnixStream, _, _, _>::new(&api, &mut fds);
LengthPrefixEncoder::from_message(api::AddPskBrokerRequest::new().as_bytes())
.write_all_to_stdio(&mut api)?;
assert!(fds.is_empty(), "Failed to write all file descriptors");
}
// Read response
{
let mut decoder = LengthPrefixDecoder::new([0u8; api::MAX_RESPONSE_LEN]);
let res = decoder.read_all_from_stdio(&api)?;
let res = res.zk_parse::<api::AddPskBrokerResponse>()?;
assert_eq!(
*res,
api::AddPskBrokerResponse::new(add_psk_broker_response_status::OK)
);
}
// Wait for the keys to successfully exchange a key
let mut attempt = 0;
loop {
// Read OSK generated by A
let osk_a = {
use rosenpass_wireguard_broker::api::msgs as M;
type SetPskReqPkg = M::Envelope<M::SetPskRequest>;
type SetPskResPkg = M::Envelope<M::SetPskResponse>;
// Receive request
let mut decoder = LengthPrefixDecoder::new([0u8; M::REQUEST_MSG_BUFFER_SIZE]);
let req = decoder.read_all_from_stdio(&psk_broker_sock)?;
let req = req.zk_parse::<SetPskReqPkg>()?;
assert_eq!(req.msg_type, M::MsgType::SetPsk as u8);
assert_eq!(req.payload.peer_id, peer_b_wg_peer_id);
assert_eq!(req.payload.iface()?, peer_b_wg_device);
// Send response
let res = SetPskResPkg {
msg_type: M::MsgType::SetPsk as u8,
reserved: [0u8; 3],
payload: M::SetPskResponse {
return_code: M::SetPskResponseReturnCode::Success as u8,
},
};
LengthPrefixEncoder::from_message(res.as_bytes())
.write_all_to_stdio(&psk_broker_sock)?;
SymKey::from_slice(&req.payload.psk)
};
// Read OSK generated by B
let osk_b = {
let line = out_b.next().context("")??;
let words = line.split(' ').collect::<Vec<_>>();
// FIXED FIXED PEER-ID FIXED FILENAME STATUS
// output-key peer KZqXTZ4l2aNnkJtLPhs4D8JxHTGmRSL9w3Qr+X8JxFk= key-file "client-A-osk" exchanged
let peer_id = words
.get(2)
.with_context(|| format!("Bad rosenpass output: `{line}`"))?;
assert_eq!(
line,
format!(
"output-key peer {peer_id} key-file \"{}\" exchanged",
peer_b_osk.to_str().context("")?
)
);
SymKey::load_b64::<64, _>(peer_b_osk.clone())?
};
// TODO: This may be flaky. Both rosenpass instances are not guaranteed to produce
// the same number of output events; they merely guarantee eventual consistency of OSK.
// Correctly, we should use tokio to read any number of generated OSKs and indicate
// success on consensus
match osk_a.secret() == osk_b.secret() {
true => break,
false if attempt > 10 => bail!("Peers did not produce a matching key even after ten attempts. Something is wrong with the key exchange!"),
false => {},
};
attempt += 1;
}
Ok(())
}

View File

@@ -8,16 +8,25 @@ use std::{
use anyhow::{bail, Context};
use rosenpass::api;
use rosenpass_to::{ops::copy_slice_least_src, To};
use rosenpass_util::zerocopy::ZerocopySliceExt;
use rosenpass_util::{
file::LoadValueB64,
length_prefix_encoding::{decoder::LengthPrefixDecoder, encoder::LengthPrefixEncoder},
};
use rosenpass_util::{mem::DiscardResultExt, zerocopy::ZerocopySliceExt};
use tempfile::TempDir;
use zerocopy::AsBytes;
use rosenpass::protocol::SymKey;
struct KillChild(std::process::Child);
impl Drop for KillChild {
fn drop(&mut self) {
self.0.kill().discard_result();
self.0.wait().discard_result()
}
}
#[test]
fn api_integration_test() -> anyhow::Result<()> {
rosenpass_secret_memory::policy::secret_policy_use_only_malloc_secrets();
@@ -37,10 +46,11 @@ fn api_integration_test() -> anyhow::Result<()> {
let peer_b_osk = tempfile!("b.osk");
use rosenpass::config;
let peer_a_keypair = config::Keypair::new(tempfile!("a.pk"), tempfile!("a.sk"));
let peer_a = config::Rosenpass {
config_file_path: tempfile!("a.config"),
secret_key: tempfile!("a.sk"),
public_key: tempfile!("a.pk"),
keypair: Some(peer_a_keypair.clone()),
listen: peer_a_endpoint.to_socket_addrs()?.collect(), // TODO: This could collide by accident
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
@@ -57,10 +67,10 @@ fn api_integration_test() -> anyhow::Result<()> {
}],
};
let peer_b_keypair = config::Keypair::new(tempfile!("b.pk"), tempfile!("b.sk"));
let peer_b = config::Rosenpass {
config_file_path: tempfile!("b.config"),
secret_key: tempfile!("b.sk"),
public_key: tempfile!("b.pk"),
keypair: Some(peer_b_keypair.clone()),
listen: vec![],
verbosity: config::Verbosity::Verbose,
api: api::config::ApiConfig {
@@ -79,12 +89,12 @@ fn api_integration_test() -> anyhow::Result<()> {
// Generate the keys
rosenpass::cli::testing::generate_and_save_keypair(
peer_a.secret_key.clone(),
peer_a.public_key.clone(),
peer_a_keypair.secret_key.clone(),
peer_a_keypair.public_key.clone(),
)?;
rosenpass::cli::testing::generate_and_save_keypair(
peer_b.secret_key.clone(),
peer_b.public_key.clone(),
peer_b_keypair.secret_key.clone(),
peer_b_keypair.public_key.clone(),
)?;
// Write the configuration files
@@ -92,28 +102,32 @@ fn api_integration_test() -> anyhow::Result<()> {
peer_b.commit()?;
// Start peer a
let proc_a = std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_a.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.spawn()?;
let mut proc_a = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_a.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.spawn()?,
);
// Start peer b
let proc_b = std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_b.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.spawn()?;
let mut proc_b = KillChild(
std::process::Command::new(env!("CARGO_BIN_EXE_rosenpass"))
.args([
"exchange-config",
peer_b.config_file_path.to_str().context("")?,
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.spawn()?,
);
// Acquire stdout
let mut out_a = BufReader::new(proc_a.stdout.context("")?).lines();
let mut out_b = BufReader::new(proc_b.stdout.context("")?).lines();
let mut out_a = BufReader::new(proc_a.0.stdout.take().context("")?).lines();
let mut out_b = BufReader::new(proc_b.0.stdout.take().context("")?).lines();
// Wait for the keys to successfully exchange a key
let mut attempt = 0;

View File

@@ -1,3 +1,4 @@
use std::fs::File;
use std::{
fs,
net::UdpSocket,
@@ -5,9 +6,10 @@ use std::{
sync::{Arc, Mutex},
time::Duration,
};
use tempfile::tempdir;
use clap::Parser;
use rosenpass::{app_server::AppServerTestBuilder, cli::CliArgs};
use rosenpass::{app_server::AppServerTestBuilder, cli::CliArgs, config::EXAMPLE_CONFIG};
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_wireguard_broker::{WireguardBrokerMio, WG_KEY_LEN, WG_PEER_LEN};
use serial_test::serial;
@@ -108,7 +110,7 @@ fn run_server_client_exchange(
.termination_handler(Some(server_terminate_rx))
.build()
.unwrap();
cli.run(Some(test_helpers)).unwrap();
cli.run(None, Some(test_helpers)).unwrap();
});
let cli = CliArgs::try_parse_from(
@@ -123,7 +125,7 @@ fn run_server_client_exchange(
.termination_handler(Some(client_terminate_rx))
.build()
.unwrap();
cli.run(Some(test_helpers)).unwrap();
cli.run(None, Some(test_helpers)).unwrap();
});
// give them some time to do the key exchange under load
@@ -134,6 +136,46 @@ fn run_server_client_exchange(
client_terminate.send(()).unwrap();
}
// verify that EXAMPLE_CONFIG is correct
#[test]
fn check_example_config() {
setup_tests();
setup_logging();
let tmp_dir = tempdir().unwrap();
let config_path = tmp_dir.path().join("config.toml");
let mut config_file = File::create(config_path.to_owned()).unwrap();
config_file
.write_all(
EXAMPLE_CONFIG
.replace("/path/to", tmp_dir.path().to_str().unwrap())
.as_bytes(),
)
.unwrap();
let output = test_bin::get_test_bin(BIN)
.args(["gen-keys"])
.arg(&config_path)
.output()
.expect("EXAMPLE_CONFIG not valid");
fs::copy(
tmp_dir.path().join("rp-public-key"),
tmp_dir.path().join("rp-peer-public-key"),
)
.unwrap();
let output = test_bin::get_test_bin(BIN)
.args(["validate"])
.arg(&config_path)
.output()
.expect("EXAMPLE_CONFIG not valid");
let stderr = String::from_utf8_lossy(&output.stderr);
assert!(stderr.contains("has passed all logical checks"));
}
// check that we can exchange keys
#[test]
#[serial]
@@ -293,6 +335,7 @@ struct MockBrokerInner {
#[derive(Debug, Default)]
struct MockBroker {
inner: Arc<Mutex<MockBrokerInner>>,
mio_token: Option<mio::Token>,
}
impl WireguardBrokerMio for MockBroker {
@@ -301,8 +344,9 @@ impl WireguardBrokerMio for MockBroker {
fn register(
&mut self,
_registry: &mio::Registry,
_token: mio::Token,
token: mio::Token,
) -> Result<(), Self::MioError> {
self.mio_token = Some(token);
Ok(())
}
@@ -311,8 +355,13 @@ impl WireguardBrokerMio for MockBroker {
}
fn unregister(&mut self, _registry: &mio::Registry) -> Result<(), Self::MioError> {
self.mio_token = None;
Ok(())
}
fn mio_token(&self) -> Option<mio::Token> {
self.mio_token
}
}
impl rosenpass_wireguard_broker::WireGuardBroker for MockBroker {

View File

@@ -12,6 +12,8 @@ repository = "https://github.com/rosenpass/rosenpass"
[dependencies]
anyhow = { workspace = true }
base64ct = { workspace = true }
serde = { workspace = true }
toml = { workspace = true }
x25519-dalek = { version = "2", features = ["static_secrets"] }
zeroize = { workspace = true }
@@ -20,9 +22,9 @@ rosenpass-ciphers = { workspace = true }
rosenpass-cipher-traits = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-util = { workspace = true }
rosenpass-wireguard-broker = {workspace = true}
rosenpass-wireguard-broker = { workspace = true }
tokio = {workspace = true}
tokio = { workspace = true }
[target.'cfg(any(target_os = "linux", target_os = "freebsd"))'.dependencies]
ctrlc-async = "3.2"
@@ -35,8 +37,8 @@ netlink-packet-generic = "0.3"
netlink-packet-wireguard = "0.2"
[dev-dependencies]
tempfile = {workspace = true}
stacker = {workspace = true}
tempfile = { workspace = true }
stacker = { workspace = true }
[features]
experiment_memfd_secret = []

View File

@@ -12,6 +12,9 @@ pub enum Command {
public_keys_dir: PathBuf,
},
Exchange(ExchangeOptions),
ExchangeConfig {
config_file: PathBuf,
},
Help,
}
@@ -19,6 +22,7 @@ enum CommandType {
GenKey,
PubKey,
Exchange,
ExchangeConfig,
}
#[derive(Default)]
@@ -32,9 +36,10 @@ fn fatal<T>(note: &str, command: Option<CommandType>) -> Result<T, String> {
Some(command) => match command {
CommandType::GenKey => Err(format!("{}\nUsage: rp genkey PRIVATE_KEYS_DIR", note)),
CommandType::PubKey => Err(format!("{}\nUsage: rp pubkey PRIVATE_KEYS_DIR PUBLIC_KEYS_DIR", note)),
CommandType::Exchange => Err(format!("{}\nUsage: rp exchange PRIVATE_KEYS_DIR [dev <device>] [listen <ip>:<port>] [peer PUBLIC_KEYS_DIR [endpoint <ip>:<port>] [persistent-keepalive <interval>] [allowed-ips <ip1>/<cidr1>[,<ip2>/<cidr2>]...]]...", note)),
CommandType::Exchange => Err(format!("{}\nUsage: rp exchange PRIVATE_KEYS_DIR [dev <device>] [ip <ip1>/<cidr1>] [listen <ip>:<port>] [peer PUBLIC_KEYS_DIR [endpoint <ip>:<port>] [persistent-keepalive <interval>] [allowed-ips <ip1>/<cidr1>[,<ip2>/<cidr2>]...]]...", note)),
CommandType::ExchangeConfig => Err(format!("{}\nUsage: rp exchange-config <CONFIG_FILE>", note)),
},
None => Err(format!("{}\nUsage: rp [verbose] genkey|pubkey|exchange [ARGS]...", note)),
None => Err(format!("{}\nUsage: rp [verbose] genkey|pubkey|exchange|exchange-config [ARGS]...", note)),
}
}
@@ -144,6 +149,13 @@ impl ExchangeOptions {
return fatal("dev option requires parameter", Some(CommandType::Exchange));
}
}
"ip" => {
if let Some(ip) = args.next() {
options.ip = Some(ip);
} else {
return fatal("ip option requires parameter", Some(CommandType::Exchange));
}
}
"listen" => {
if let Some(addr) = args.next() {
if let Ok(addr) = addr.parse::<SocketAddr>() {
@@ -246,6 +258,21 @@ impl Cli {
let options = ExchangeOptions::parse(&mut args)?;
cli.command = Some(Command::Exchange(options));
}
"exchange-config" => {
if cli.command.is_some() {
return fatal("Too many commands supplied", None);
}
if let Some(config_file) = args.next() {
let config_file = PathBuf::from(config_file);
cli.command = Some(Command::ExchangeConfig { config_file });
} else {
return fatal(
"Required positional argument: CONFIG_FILE",
Some(CommandType::ExchangeConfig),
);
}
}
"help" => {
cli.command = Some(Command::Help);
}

View File

@@ -1,11 +1,17 @@
use std::{net::SocketAddr, path::PathBuf};
use anyhow::Error;
use serde::Deserialize;
use std::future::Future;
use std::ops::DerefMut;
use std::pin::Pin;
use std::sync::Arc;
use std::{net::SocketAddr, path::PathBuf, process::Command};
use anyhow::Result;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
use crate::key::WG_B64_LEN;
#[derive(Default)]
#[derive(Default, Deserialize)]
pub struct ExchangePeer {
pub public_keys_dir: PathBuf,
pub endpoint: Option<SocketAddr>,
@@ -13,11 +19,12 @@ pub struct ExchangePeer {
pub allowed_ips: Option<String>,
}
#[derive(Default)]
#[derive(Default, Deserialize)]
pub struct ExchangeOptions {
pub verbose: bool,
pub private_keys_dir: PathBuf,
pub dev: Option<String>,
pub ip: Option<String>,
pub listen: Option<SocketAddr>,
pub peers: Vec<ExchangePeer>,
}
@@ -131,6 +138,27 @@ mod netlink {
}
}
#[derive(Clone)]
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
struct CleanupHandlers(
Arc<::futures::lock::Mutex<Vec<Pin<Box<dyn Future<Output = Result<(), Error>> + Send>>>>>,
);
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
impl CleanupHandlers {
fn new() -> Self {
CleanupHandlers(Arc::new(::futures::lock::Mutex::new(vec![])))
}
async fn enqueue(&self, handler: Pin<Box<dyn Future<Output = Result<(), Error>> + Send>>) {
self.0.lock().await.push(Box::pin(handler))
}
async fn run(self) -> Result<Vec<()>, Error> {
futures::future::try_join_all(self.0.lock().await.deref_mut()).await
}
}
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
pub async fn exchange(options: ExchangeOptions) -> Result<()> {
use std::fs;
@@ -151,15 +179,50 @@ pub async fn exchange(options: ExchangeOptions) -> Result<()> {
let (connection, rtnetlink, _) = rtnetlink::new_connection()?;
tokio::spawn(connection);
let link_name = options.dev.unwrap_or("rosenpass0".to_string());
let link_name = options.dev.clone().unwrap_or("rosenpass0".to_string());
let link_index = netlink::link_create_and_up(&rtnetlink, link_name.clone()).await?;
let cleanup_handlers = CleanupHandlers::new();
let final_cleanup_handlers = (&cleanup_handlers).clone();
cleanup_handlers
.enqueue(Box::pin(async move {
netlink::link_cleanup_standalone(link_index).await
}))
.await;
ctrlc_async::set_async_handler(async move {
netlink::link_cleanup_standalone(link_index)
final_cleanup_handlers
.run()
.await
.expect("Failed to clean up");
})?;
if let Some(ip) = options.ip {
let dev = options.dev.clone().unwrap_or("rosenpass0".to_string());
Command::new("ip")
.arg("address")
.arg("add")
.arg(ip.clone())
.arg("dev")
.arg(dev.clone())
.status()
.expect("failed to configure ip");
cleanup_handlers
.enqueue(Box::pin(async move {
Command::new("ip")
.arg("address")
.arg("del")
.arg(ip)
.arg("dev")
.arg(dev)
.status()
.expect("failed to remove ip");
Ok(())
}))
.await;
}
// Deploy the classic wireguard private key
let (connection, mut genetlink, _) = genetlink::new_connection()?;
tokio::spawn(connection);
@@ -188,8 +251,7 @@ pub async fn exchange(options: ExchangeOptions) -> Result<()> {
let pk = SPk::load(&pqpk)?;
let mut srv = Box::new(AppServer::new(
sk,
pk,
Some((sk, pk)),
if let Some(listen) = options.listen {
vec![listen]
} else {
@@ -255,6 +317,29 @@ pub async fn exchange(options: ExchangeOptions) -> Result<()> {
broker_peer,
peer.endpoint.map(|x| x.to_string()),
)?;
// Configure routes
if let Some(allowed_ips) = peer.allowed_ips {
Command::new("ip")
.arg("route")
.arg("replace")
.arg(allowed_ips.clone())
.arg("dev")
.arg(options.dev.clone().unwrap_or("rosenpass0".to_string()))
.status()
.expect("failed to configure route");
cleanup_handlers
.enqueue(Box::pin(async move {
Command::new("ip")
.arg("route")
.arg("del")
.arg(allowed_ips)
.status()
.expect("failed to remove ip");
Ok(())
}))
.await;
}
}
let out = srv.event_loop();

View File

@@ -1,4 +1,4 @@
use std::process::exit;
use std::{fs, process::exit};
use cli::{Cli, Command};
use exchange::exchange;
@@ -36,6 +36,13 @@ async fn main() {
options.verbose = cli.verbose;
exchange(options).await
}
Command::ExchangeConfig { config_file } => {
let s: String = fs::read_to_string(config_file).expect("cannot read config");
let mut options: exchange::ExchangeOptions =
toml::from_str::<exchange::ExchangeOptions>(&s).expect("cannot parse config");
options.verbose = options.verbose || cli.verbose;
exchange(options).await
}
Command::Help => {
println!("Usage: rp [verbose] genkey|pubkey|exchange [ARGS]...");
Ok(())
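For orientation, a configuration file consumed by `rp exchange-config` could look roughly like the sketch below. This is only an illustration: the field names follow the `ExchangeOptions` and `ExchangePeer` structs above, while the paths and addresses are made-up placeholders; the accepted schema is ultimately whatever serde derives for those structs.

# hypothetical /etc/rosenpass/rp0.toml for `rp exchange-config`
verbose = true
private_keys_dir = "/etc/rosenpass/rp0"
dev = "rosenpass0"
ip = "fc00::1/64"
listen = "0.0.0.0:9999"

[[peers]]
public_keys_dir = "/etc/rosenpass/rp0/peers/client"
endpoint = "192.168.0.2:9999"
allowed_ips = "fc00::2/128"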

View File

@@ -21,6 +21,6 @@ log = { workspace = true }
[dev-dependencies]
allocator-api2-tests = { workspace = true }
tempfile = {workspace = true}
base64ct = {workspace = true}
procspawn = {workspace = true}
tempfile = { workspace = true }
base64ct = { workspace = true }
procspawn = { workspace = true }

2
systemd/rosenpass.target Normal file
View File

@@ -0,0 +1,2 @@
[Unit]
Description=Rosenpass target

View File

@@ -0,0 +1,47 @@
[Unit]
Description=Rosenpass key exchange for %I
Documentation=man:rosenpass(1)
Documentation=https://rosenpass.eu/docs
After=network-online.target nss-lookup.target sys-devices-virtual-net-%i.device
Wants=network-online.target nss-lookup.target
BindsTo=sys-devices-virtual-net-%i.device
PartOf=rosenpass.target
[Service]
ExecStart=rosenpass exchange-config /etc/rosenpass/%i.toml
LoadCredential=pqsk:/etc/rosenpass/%i/pqsk
AmbientCapabilities=CAP_NET_ADMIN
CapabilityBoundingSet=~CAP_AUDIT_CONTROL CAP_AUDIT_READ CAP_AUDIT_WRITE CAP_BLOCK_SUSPEND CAP_BPF CAP_CHOWN CAP_FSETID CAP_SETFCAP CAP_DAC_OVERRIDE CAP_DAC_READ_SEARCH CAP_FOWNER CAP_IPC_OWNER CAP_IPC_LOCK CAP_KILL CAP_LEASE CAP_LINUX_IMMUTABLE CAP_MAC_ADMIN CAP_MAC_OVERRIDE CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_SYS_ADMIN CAP_SYS_BOOT CAP_SYS_CHROOT CAP_SYSLOG CAP_SYS_MODULE CAP_SYS_NICE CAP_SYS_RESOURCE CAP_SYS_PACCT CAP_SYS_PTRACE CAP_SYS_RAWIO CAP_SYS_TIME CAP_SYS_TTY_CONFIG CAP_WAKE_ALARM
DynamicUser=true
LockPersonality=true
MemoryDenyWriteExecute=true
PrivateDevices=true
ProcSubset=pid
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectProc=noaccess
RestrictAddressFamilies=AF_NETLINK AF_INET AF_INET6
RestrictNamespaces=true
RestrictRealtime=true
SystemCallArchitectures=native
SystemCallFilter=~@clock
SystemCallFilter=~@cpu-emulation
SystemCallFilter=~@debug
SystemCallFilter=~@module
SystemCallFilter=~@mount
SystemCallFilter=~@obsolete
SystemCallFilter=~@privileged
SystemCallFilter=~@raw-io
SystemCallFilter=~@reboot
SystemCallFilter=~@swap
UMask=0077
[Install]
WantedBy=multi-user.target
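This unit runs `rosenpass exchange-config` against /etc/rosenpass/%i.toml. As a rough, non-authoritative sketch of what that instance configuration might contain (it simply mirrors the server_config attrset generated in the NixOS test further below; the concrete keys and paths are placeholders):

# hypothetical /etc/rosenpass/rp0.toml for rosenpass@rp0.service
listen = ["0.0.0.0:9999"]
public_key = "/etc/rosenpass/rp0/pqpk"
secret_key = "/run/credentials/rosenpass@rp0.service/pqsk"
verbosity = "Verbose"

[[peers]]
device = "rp0"
peer = "<WireGuard public key of the peer>"
public_key = "/etc/rosenpass/rp0/peers/client/pqpk"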

48
systemd/rp@.service Normal file
View File

@@ -0,0 +1,48 @@
[Unit]
Description=Rosenpass key exchange for %I
Documentation=man:rosenpass(1)
Documentation=https://rosenpass.eu/docs
After=network-online.target nss-lookup.target
Wants=network-online.target nss-lookup.target
PartOf=rosenpass.target
[Service]
ExecStart=rp exchange-config /etc/rosenpass/%i.toml
LoadCredential=pqpk:/etc/rosenpass/%i/pqpk
LoadCredential=pqsk:/etc/rosenpass/%i/pqsk
LoadCredential=wgsk:/etc/rosenpass/%i/wgsk
AmbientCapabilities=CAP_NET_ADMIN
CapabilityBoundingSet=~CAP_AUDIT_CONTROL CAP_AUDIT_READ CAP_AUDIT_WRITE CAP_BLOCK_SUSPEND CAP_BPF CAP_CHOWN CAP_FSETID CAP_SETFCAP CAP_DAC_OVERRIDE CAP_DAC_READ_SEARCH CAP_FOWNER CAP_IPC_OWNER CAP_IPC_LOCK CAP_KILL CAP_LEASE CAP_LINUX_IMMUTABLE CAP_MAC_ADMIN CAP_MAC_OVERRIDE CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_SYS_ADMIN CAP_SYS_BOOT CAP_SYS_CHROOT CAP_SYSLOG CAP_SYS_MODULE CAP_SYS_NICE CAP_SYS_RESOURCE CAP_SYS_PACCT CAP_SYS_PTRACE CAP_SYS_RAWIO CAP_SYS_TIME CAP_SYS_TTY_CONFIG CAP_WAKE_ALARM
DynamicUser=true
LockPersonality=true
MemoryDenyWriteExecute=true
PrivateDevices=true
ProcSubset=pid
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectProc=noaccess
RestrictAddressFamilies=AF_NETLINK AF_INET AF_INET6
RestrictNamespaces=true
RestrictRealtime=true
SystemCallArchitectures=native
SystemCallFilter=~@clock
SystemCallFilter=~@cpu-emulation
SystemCallFilter=~@debug
SystemCallFilter=~@module
SystemCallFilter=~@mount
SystemCallFilter=~@obsolete
SystemCallFilter=~@privileged
SystemCallFilter=~@raw-io
SystemCallFilter=~@reboot
SystemCallFilter=~@swap
UMask=0077
[Install]
WantedBy=multi-user.target

183
tests/systemd/rosenpass.nix Normal file
View File

@@ -0,0 +1,183 @@
# This test is largely inspired by:
# https://github.com/NixOS/nixpkgs/blob/master/nixos/tests/rosenpass.nix
# https://github.com/NixOS/nixpkgs/blob/master/nixos/tests/wireguard/basic.nix
{ pkgs, ... }:
let
server = {
ip4 = "192.168.0.1";
ip6 = "fd00::1";
wg = {
ip4 = "10.23.42.1";
ip6 = "fc00::1";
public = "mQufmDFeQQuU/fIaB2hHgluhjjm1ypK4hJr1cW3WqAw=";
secret = "4N5Y1dldqrpsbaEiY8O0XBUGUFf8vkvtBtm8AoOX7Eo=";
listen = 10000;
};
};
client = {
ip4 = "192.168.0.2";
ip6 = "fd00::2";
wg = {
ip4 = "10.23.42.2";
ip6 = "fc00::2";
public = "Mb3GOlT7oS+F3JntVKiaD7SpHxLxNdtEmWz/9FMnRFU=";
secret = "uC5dfGMv7Oxf5UDfdPkj6rZiRZT2dRWp5x8IQxrNcUE=";
};
};
server_config = {
listen = [ "0.0.0.0:9999" ];
public_key = "/etc/rosenpass/rp0/pqpk";
secret_key = "/run/credentials/rosenpass@rp0.service/pqsk";
verbosity = "Verbose";
peers = [{
device = "rp0";
peer = client.wg.public;
public_key = "/etc/rosenpass/rp0/peers/client/pqpk";
}];
};
client_config = {
listen = [ "0.0.0.0:9999" ]; # TODO: Should not be necessary to set, but wouldn't parse.
public_key = "/etc/rosenpass/rp0/pqpk";
secret_key = "/run/credentials/rosenpass@rp0.service/pqsk";
verbosity = "Verbose";
peers = [{
device = "rp0";
peer = server.wg.public;
public_key = "/etc/rosenpass/rp0/peers/server/pqpk";
endpoint = "${server.ip4}:9999";
}];
};
config = pkgs.runCommand "config" { } ''
mkdir -pv $out
cp -v ${(pkgs.formats.toml {}).generate "rp0.toml" server_config} $out/server
cp -v ${(pkgs.formats.toml {}).generate "rp0.toml" client_config} $out/client
'';
in
{
name = "rosenpass unit";
nodes =
let
shared = peer: { config, modulesPath, pkgs, ... }: {
# Need to work around a problem introduced by recent systemd changes.
# It isn't necessary in other distros (for which the systemd unit file was designed); this is NixOS-specific:
# https://github.com/NixOS/nixpkgs/issues/258371#issuecomment-1925672767
# This can potentially be removed in future nixpkgs updates
systemd.packages = [
(pkgs.runCommand "rosenpass" { } ''
mkdir -p $out/lib/systemd/system
< ${pkgs.rosenpass}/lib/systemd/system/rosenpass.target > $out/lib/systemd/system/rosenpass.target
< ${pkgs.rosenpass}/lib/systemd/system/rosenpass@.service \
sed 's@^\(\[Service]\)$@\1\nEnvironment=PATH=${pkgs.wireguard-tools}/bin@' |
sed 's@^ExecStartPre=envsubst @ExecStartPre='"${pkgs.envsubst}"'/bin/envsubst @' |
sed 's@^ExecStart=rosenpass @ExecStart='"${pkgs.rosenpass}"'/bin/rosenpass @' > $out/lib/systemd/system/rosenpass@.service
'')
];
networking.wireguard = {
enable = true;
interfaces.rp0 = {
ips = [ "${peer.wg.ip4}/32" "${peer.wg.ip6}/128" ];
privateKeyFile = "/etc/wireguard/wgsk";
};
};
environment.etc."wireguard/wgsk".text = peer.wg.secret;
networking.interfaces.eth1 = {
ipv4.addresses = [{
address = peer.ip4;
prefixLength = 24;
}];
ipv6.addresses = [{
address = peer.ip6;
prefixLength = 64;
}];
};
};
in
{
server = {
imports = [ (shared server) ];
networking.firewall.allowedUDPPorts = [ 9999 server.wg.listen ];
networking.wireguard.interfaces.rp0 = {
listenPort = server.wg.listen;
peers = [
{
allowedIPs = [ client.wg.ip4 client.wg.ip6 ];
publicKey = client.wg.public;
}
];
};
};
client = {
imports = [ (shared client) ];
networking.wireguard.interfaces.rp0 = {
peers = [
{
allowedIPs = [ "10.23.42.0/24" "fc00::/64" ];
publicKey = server.wg.public;
endpoint = "${server.ip4}:${toString server.wg.listen}";
}
];
};
};
};
testScript = { ... }: ''
from os import system
rosenpass = "${pkgs.rosenpass}/bin/rosenpass"
start_all()
for machine in [server, client]:
machine.wait_for_unit("multi-user.target")
machine.wait_for_unit("network-online.target")
with subtest("Key, Config, and Service Setup"):
for name, machine, remote in [("server", server, client), ("client", client, server)]:
# generate all the keys
system(f"{rosenpass} gen-keys --public-key {name}-pqpk --secret-key {name}-pqsk")
# copy private keys to our side
machine.copy_from_host(f"{name}-pqsk", "/etc/rosenpass/rp0/pqsk")
machine.copy_from_host(f"{name}-pqpk", "/etc/rosenpass/rp0/pqpk")
# copy public keys to other side
remote.copy_from_host(f"{name}-pqpk", f"/etc/rosenpass/rp0/peers/{name}/pqpk")
machine.copy_from_host(f"${config}/{name}", "/etc/rosenpass/rp0.toml")
for machine in [server, client]:
machine.wait_for_unit("wireguard-rp0.service")
with subtest("wg network test"):
client.succeed("wg show all preshared-keys | grep none", timeout=5);
client.succeed("ping -c5 ${server.wg.ip4}")
server.succeed("ping -c5 ${client.wg.ip6}")
with subtest("Set up rosenpass"):
for machine in [server, client]:
machine.succeed("systemctl start rosenpass@rp0.service")
for machine in [server, client]:
machine.wait_for_unit("rosenpass@rp0.service")
with subtest("compare preshared keys"):
client.wait_until_succeeds("wg show all preshared-keys | grep --invert-match none", timeout=5);
server.wait_until_succeeds("wg show all preshared-keys | grep --invert-match none", timeout=5);
def get_psk(m):
psk = m.succeed("wg show rp0 preshared-keys | awk '{print $2}'")
psk = psk.strip()
assert len(psk.split()) == 1, "Only one PSK"
return psk
assert get_psk(client) == get_psk(server), "preshared keys need to match"
with subtest("rosenpass network test"):
client.succeed("ping -c5 ${server.wg.ip4}")
server.succeed("ping -c5 ${client.wg.ip6}")
'';
}

139
tests/systemd/rp.nix Normal file
View File

@@ -0,0 +1,139 @@
{ pkgs, ... }:
let
server = {
ip4 = "192.168.0.1";
ip6 = "fd00::1";
wg = {
ip6 = "fc00::1";
listen = 10000;
};
};
client = {
ip4 = "192.168.0.2";
ip6 = "fd00::2";
wg = {
ip6 = "fc00::2";
};
};
server_config = {
listen = "${server.ip4}:9999";
private_keys_dir = "/run/credentials/rp@test-rp-device0.service";
verbose = true;
dev = "test-rp-device0";
ip = "fc00::1/64";
peers = [{
public_keys_dir = "/etc/rosenpass/test-rp-device0/peers/client";
allowed_ips = "fc00::2";
}];
};
client_config = {
private_keys_dir = "/run/credentials/rp@test-rp-device0.service";
verbose = true;
dev = "test-rp-device0";
ip = "fc00::2/128";
peers = [{
public_keys_dir = "/etc/rosenpass/test-rp-device0/peers/server";
endpoint = "${server.ip4}:9999";
allowed_ips = "fc00::/64";
}];
};
config = pkgs.runCommand "config" { } ''
mkdir -pv $out
cp -v ${(pkgs.formats.toml {}).generate "test-rp-device0.toml" server_config} $out/server
cp -v ${(pkgs.formats.toml {}).generate "test-rp-device0.toml" client_config} $out/client
'';
in
{
name = "rp systemd unit";
nodes =
let
shared = peer: { config, modulesPath, pkgs, ... }: {
# Need to work around a problem introduced by recent systemd changes.
# It isn't necessary in other distros (for which the systemd unit file was designed); this is NixOS-specific:
# https://github.com/NixOS/nixpkgs/issues/258371#issuecomment-1925672767
# This can potentially be removed in future nixpkgs updates
systemd.packages = [
(pkgs.runCommand "rp@.service" { } ''
mkdir -p $out/lib/systemd/system
< ${pkgs.rosenpass}/lib/systemd/system/rosenpass.target > $out/lib/systemd/system/rosenpass.target
< ${pkgs.rosenpass}/lib/systemd/system/rp@.service \
sed 's@^\(\[Service]\)$@\1\nEnvironment=PATH=${pkgs.iproute2}/bin:${pkgs.wireguard-tools}/bin@' |
sed 's@^ExecStartPre=envsubst @ExecStartPre='"${pkgs.envsubst}"'/bin/envsubst @' |
sed 's@^ExecStart=rp @ExecStart='"${pkgs.rosenpass}"'/bin/rp @' > $out/lib/systemd/system/rp@.service
'')
];
environment.systemPackages = [ pkgs.wireguard-tools ];
networking.interfaces.eth1 = {
ipv4.addresses = [{
address = peer.ip4;
prefixLength = 24;
}];
ipv6.addresses = [{
address = peer.ip6;
prefixLength = 64;
}];
};
};
in
{
server = {
imports = [ (shared server) ];
networking.firewall.allowedUDPPorts = [ 9999 server.wg.listen ];
};
client = {
imports = [ (shared client) ];
};
};
testScript = { ... }: ''
from os import system
rp = "${pkgs.rosenpass}/bin/rp"
start_all()
for machine in [server, client]:
machine.wait_for_unit("multi-user.target")
machine.wait_for_unit("network-online.target")
with subtest("Key, Config, and Service Setup"):
for name, machine, remote in [("server", server, client), ("client", client, server)]:
# create all the keys
system(f"{rp} genkey {name}-sk")
system(f"{rp} pubkey {name}-sk {name}-pk")
# copy secret keys to our side
for file in ["pqpk", "pqsk", "wgsk"]:
machine.copy_from_host(f"{name}-sk/{file}", f"/etc/rosenpass/test-rp-device0/{file}")
# copy public keys to other side
for file in ["pqpk", "wgpk"]:
remote.copy_from_host(f"{name}-pk/{file}", f"/etc/rosenpass/test-rp-device0/peers/{name}/{file}")
machine.copy_from_host(f"${config}/{name}", "/etc/rosenpass/test-rp-device0.toml")
for machine in [server, client]:
machine.succeed("systemctl start rp@test-rp-device0.service")
for machine in [server, client]:
machine.wait_for_unit("rp@test-rp-device0.service")
with subtest("compare preshared keys"):
client.wait_until_succeeds("wg show all preshared-keys | grep --invert-match none", timeout=5);
server.wait_until_succeeds("wg show all preshared-keys | grep --invert-match none", timeout=5);
def get_psk(m):
psk = m.succeed("wg show test-rp-device0 preshared-keys | awk '{print $2}'")
psk = psk.strip()
assert len(psk.split()) == 1, "Only one PSK"
return psk
assert get_psk(client) == get_psk(server), "preshared keys need to match"
with subtest("network test"):
client.succeed("ping -c5 ${server.wg.ip6}")
server.succeed("ping -c5 ${client.wg.ip6}")
'';
}

View File

@@ -1,3 +1,5 @@
#![warn(missing_docs)]
#![recursion_limit = "256"]
#![doc = include_str!(concat!(env!("CARGO_MANIFEST_DIR"), "/README.md"))]
#[cfg(doctest)]

View File

@@ -5,23 +5,70 @@ use crate::CondenseBeside;
pub struct Beside<Val, Ret>(pub Val, pub Ret);
impl<Val, Ret> Beside<Val, Ret> {
/// Get an immutable reference to the destination value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let beside = Beside(1, 2);
/// assert_eq!(beside.dest(), &1);
/// ```
pub fn dest(&self) -> &Val {
&self.0
}
/// Get an immutable reference to the return value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let beside = Beside(1, 2);
/// assert_eq!(beside.ret(), &2);
/// ```
pub fn ret(&self) -> &Ret {
&self.1
}
/// Get a mutable reference to the destination value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let mut beside = Beside(1, 2);
/// *beside.dest_mut() = 3;
/// assert_eq!(beside.dest(), &3);
/// ```
pub fn dest_mut(&mut self) -> &mut Val {
&mut self.0
}
/// Get a mutable reference to the return value
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
///
/// let mut beside = Beside(1, 2);
/// *beside.ret_mut() = 3;
/// assert_eq!(beside.ret(), &3);
/// ```
pub fn ret_mut(&mut self) -> &mut Ret {
&mut self.1
}
/// Perform beside condensation. See [CondenseBeside]
///
/// # Example
/// ```
/// use rosenpass_to::Beside;
/// use rosenpass_to::CondenseBeside;
///
/// let beside = Beside(1, ());
/// assert_eq!(beside.condense(), 1);
/// ```
pub fn condense(self) -> <Ret as CondenseBeside<Val>>::Condensed
where
Ret: CondenseBeside<Val>,

View File

@@ -7,8 +7,10 @@
/// The function [Beside::condense()](crate::Beside::condense) is a shorthand for using the
/// condense trait.
pub trait CondenseBeside<Val> {
/// The type that results from condensation.
type Condensed;
/// Takes ownership of `self` and condenses it with the given value.
fn condense(self, ret: Val) -> Self::Condensed;
}

View File

@@ -1,6 +1,7 @@
/// Helper performing explicit unsized coercion.
/// Used by the [to](crate::to()) function.
pub trait DstCoercion<Dst: ?Sized> {
/// Performs an explicit coercion to the destination type.
fn coerce_dest(&mut self) -> &mut Dst;
}

View File

@@ -1,13 +1,16 @@
use crate::{Beside, CondenseBeside};
use std::borrow::BorrowMut;
// The To trait is the core of the to crate; most functions with destinations will either return
// an object that is an instance of this trait or they will return `-> impl To<Destination,
// Return_value`.
//
// A quick way to implement a function with destination is to use the
// [with_destination(|param: &mut Type| ...)] higher order function.
/// The To trait is the core of the to crate; most functions with destinations will either return
/// an object that is an instance of this trait or they will return `-> impl To<Destination,
/// Return_value>`.
///
/// A quick way to implement a function with destination is to use the
/// [with_destination(|param: &mut Type| ...)] higher order function.
pub trait To<Dst: ?Sized, Ret>: Sized {
/// Writes self to the destination `out` and returns a value of type `Ret`.
///
/// This is the core method that must be implemented by all types implementing `To`.
fn to(self, out: &mut Dst) -> Ret;
/// Generate a destination on the fly with a lambda.

View File

@@ -1,20 +1,38 @@
use crate::To;
use std::marker::PhantomData;
/// A struct that wraps a closure and implements the `To` trait
///
/// This allows passing closures that operate on a destination type `Dst`
/// and return `Ret`.
///
/// # Type Parameters
/// * `Dst` - The destination type the closure operates on
/// * `Ret` - The return type of the closure
/// * `Fun` - The closure type that implements `FnOnce(&mut Dst) -> Ret`
struct ToClosure<Dst, Ret, Fun>
where
Dst: ?Sized,
Fun: FnOnce(&mut Dst) -> Ret,
{
/// The function to call.
fun: Fun,
/// Phantom data to hold the destination type
_val: PhantomData<Box<Dst>>,
}
/// Implementation of the `To` trait for ToClosure
///
/// This enables calling the wrapped closure with a destination reference.
impl<Dst, Ret, Fun> To<Dst, Ret> for ToClosure<Dst, Ret, Fun>
where
Dst: ?Sized,
Fun: FnOnce(&mut Dst) -> Ret,
{
/// Execute the wrapped closure with the given destination
///
/// # Arguments
/// * `out` - Mutable reference to the destination
fn to(self, out: &mut Dst) -> Ret {
(self.fun)(out)
}
@@ -22,6 +40,14 @@ where
/// Used to create a function with destination.
///
/// Creates a wrapper that implements the `To` trait for a closure that
/// operates on a destination type.
///
/// # Type Parameters
/// * `Dst` - The destination type the closure operates on
/// * `Ret` - The return type of the closure
/// * `Fun` - The closure type that implements `FnOnce(&mut Dst) -> Ret`
///
/// See the tutorial in the README.
pub fn with_destination<Dst, Ret, Fun>(fun: Fun) -> impl To<Dst, Ret>
where

View File

@@ -16,8 +16,14 @@ base64ct = { workspace = true }
anyhow = { workspace = true }
typenum = { workspace = true }
static_assertions = { workspace = true }
rustix = {workspace = true}
zeroize = {workspace = true}
rustix = { workspace = true }
zeroize = { workspace = true }
zerocopy = { workspace = true }
thiserror = { workspace = true }
mio = { workspace = true }
tempfile = { workspace = true }
uds = { workspace = true, optional = true, features = ["mio_1xx"] }
[features]
experiment_file_descriptor_passing = ["uds"]

View File

@@ -1,8 +1,13 @@
//! Utilities for working with Base64
use base64ct::{Base64, Decoder as B64Reader, Encoder as B64Writer};
use zeroize::Zeroize;
use std::fmt::Display;
/// Formatter that displays its input as base64.
///
/// Use through [B64Display].
pub struct B64DisplayHelper<'a, const F: usize>(&'a [u8]);
impl<const F: usize> Display for B64DisplayHelper<'_, F> {
@@ -15,7 +20,25 @@ impl<const F: usize> Display for B64DisplayHelper<'_, F> {
}
}
/// Extension trait that can be used to display values as Base64
///
/// # Examples
///
/// ```
/// use rosenpass_util::b64::B64Display;
///
/// let a = vec![0,1,2,3,4,5];
/// assert_eq!(
/// format!("{}", a.fmt_b64::<10>()), // Maximum size of the encoded buffer
/// "AAECAwQF",
/// );
/// ```
pub trait B64Display {
/// Display this value as base64
///
/// # Examples
///
/// See [B64Display].
fn fmt_b64<const F: usize>(&self) -> B64DisplayHelper<F>;
}
@@ -31,6 +54,11 @@ impl<T: AsRef<[u8]>> B64Display for T {
}
}
/// Decode a base64-encoded value
///
/// # Examples
///
/// See [b64_encode].
pub fn b64_decode(input: &[u8], output: &mut [u8]) -> anyhow::Result<()> {
let mut reader = B64Reader::<Base64>::new(input).map_err(|e| anyhow::anyhow!(e))?;
match reader.decode(output) {
@@ -49,6 +77,23 @@ pub fn b64_decode(input: &[u8], output: &mut [u8]) -> anyhow::Result<()> {
}
}
/// Encode a value as base64.
///
/// ```
/// use rosenpass_util::b64::{b64_encode, b64_decode};
///
/// let bytes = b"Hello World";
///
/// let mut encoder_buffer = [0u8; 64];
/// let encoded = b64_encode(bytes, &mut encoder_buffer)?;
///
/// let mut bytes_decoded = [0u8; 11];
/// b64_decode(encoded.as_bytes(), &mut bytes_decoded)?;
/// assert_eq!(bytes, &bytes_decoded);
///
/// Ok::<(), anyhow::Error>(())
/// ```
///
pub fn b64_encode<'o>(input: &[u8], output: &'o mut [u8]) -> anyhow::Result<&'o str> {
let mut writer = B64Writer::<Base64>::new(output).map_err(|e| anyhow::anyhow!(e))?;
writer.encode(input).map_err(|e| anyhow::anyhow!(e))?;

675
util/src/build.rs Normal file
View File

@@ -0,0 +1,675 @@
//! Lazy construction of values
use crate::{
functional::ApplyExt,
mem::{SwapWithDefaultExt, SwapWithExt},
};
/// Errors returned by [ConstructionSite::erect]
#[derive(thiserror::Error, Debug, Eq, PartialEq)]
pub enum ConstructionSiteErectError<E> {
/// Attempted to erect an empty construction site
#[error("Construction site is void")]
IsVoid,
/// Attempted to erect a construction that is already standing
#[error("Construction is already built")]
AlreadyBuilt,
/// Other error
#[error("Other construction site error {0:?}")]
Other(#[from] E),
}
/// A type that can build some other type
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::Build;
/// use anyhow::{Context, Result};
///
/// #[derive(Eq, PartialEq, Debug)]
/// struct Person {
/// pub fav_pokemon: String,
/// pub fav_number: u8,
/// }
///
/// #[derive(Default, Clone)]
/// struct PersonBuilder {
/// pub fav_pokemon: Option<String>,
/// pub fav_number: Option<u8>,
/// }
///
/// impl Build<Person> for &PersonBuilder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<Person, Self::Error> {
/// let fav_pokemon = self.fav_pokemon.clone().context("Missing fav pokemon")?;
/// let fav_number = self.fav_number.context("Missing fav number")?;
/// Ok(Person {
/// fav_pokemon,
/// fav_number,
/// })
/// }
/// }
///
/// let mut person_builder = PersonBuilder::default();
/// assert!(person_builder.build().is_err());
///
/// person_builder.fav_pokemon = Some("Krabby".to_owned());
/// person_builder.fav_number = Some(0);
/// assert_eq!(
/// person_builder.build().unwrap(),
/// Person {
/// fav_pokemon: "Krabby".to_owned(),
/// fav_number: 0
/// }
/// );
/// ```
pub trait Build<T>: Sized {
/// Error returned by the builder
type Error;
/// Build the type
///
/// # Examples
///
/// See [Self].
fn build(self) -> Result<T, Self::Error>;
}
/// A type that can be incrementally built from a type that can [Build] it
///
/// This is similar to an [Option], where [Self::Void] is [Option::None] and
/// [Self::Product] is [Option::Some], except that there is a third,
/// intermediate state [Self::Builder] that represents a Some/Product value
/// in the process of being made.
///
/// # Examples
///
/// ```
/// use std::borrow::Borrow;
/// use rosenpass_util::build::{ConstructionSite, Build};
/// use anyhow::{Context, Result};
///
/// #[derive(Eq, PartialEq, Debug)]
/// struct Person {
/// pub fav_pokemon: String,
/// pub fav_number: u8,
/// }
///
/// #[derive(Eq, PartialEq, Default, Clone, Debug)]
/// struct PersonBuilder {
/// pub fav_pokemon: Option<String>,
/// pub fav_number: Option<u8>,
/// }
///
/// impl Build<Person> for &PersonBuilder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<Person, Self::Error> {
/// let fav_pokemon = self.fav_pokemon.clone().context("Missing fav pokemon")?;
/// let fav_number = self.fav_number.context("Missing fav number")?;
/// Ok(Person {
/// fav_pokemon,
/// fav_number,
/// })
/// }
/// }
///
/// impl Build<Person> for PersonBuilder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<Person, Self::Error> {
/// self.borrow().build()
/// }
/// }
///
/// // Allocate the construction site
/// let mut site = ConstructionSite::void();
///
/// // Start construction
/// site = ConstructionSite::Builder(PersonBuilder::default());
///
/// // Use the builder to build the value
/// site.builder_mut().unwrap().fav_pokemon = Some("Krabby".to_owned());
/// site.builder_mut().unwrap().fav_number = Some(0);
///
/// // Use `erect` to call Build::build
/// site.erect().unwrap();
///
/// assert_eq!(
/// site,
/// ConstructionSite::Product(Person {
/// fav_pokemon: "Krabby".to_owned(),
/// fav_number: 0
/// }),
/// );
/// ```
#[derive(Debug, Eq, PartialEq, Clone)]
pub enum ConstructionSite<Builder, T>
where
Builder: Build<T>,
{
/// The site is empty
Void,
/// The site is being built
Builder(Builder),
/// The site has been built and is now finished
Product(T),
}
/// Initializes the construction site as [ConstructionSite::Void]
impl<Builder, T> Default for ConstructionSite<Builder, T>
where
Builder: Build<T>,
{
fn default() -> Self {
Self::Void
}
}
impl<Builder, T> ConstructionSite<Builder, T>
where
Builder: Build<T>,
{
/// Initializes the construction site as [ConstructionSite::Void]
///
/// # Examples
///
/// See [Self].
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// assert_eq!(
/// ConstructionSite::<Builder, House>::void(),
/// ConstructionSite::Void,
/// );
/// ```
pub fn void() -> Self {
Self::Void
}
/// Initialize the construction site from its builder
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// assert_eq!(
/// ConstructionSite::<Builder, House>::new(Builder),
/// ConstructionSite::Builder(Builder),
/// );
/// ```
pub fn new(builder: Builder) -> Self {
Self::Builder(builder)
}
/// Initialize the construction site from its product
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// assert_eq!(
/// ConstructionSite::<Builder, House>::from_product(House),
/// ConstructionSite::Product(House),
/// );
/// ```
pub fn from_product(value: T) -> Self {
Self::Product(value)
}
/// Extract the construction site and replace it with [Self::Void]
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// let mut a = ConstructionSite::<Builder, House>::from_product(House);
/// let a_backup = a.clone();
///
/// let b = a.take();
/// assert_eq!(a, ConstructionSite::void());
/// assert_eq!(b, ConstructionSite::Product(House));
/// ```
pub fn take(&mut self) -> Self {
self.swap_with_default()
}
/// Apply the given function to Self, temporarily converting
/// the mutable reference into an owned value.
///
/// This is useful if you have some function that needs to modify
/// the construction site as an owned value but all you have is a reference.
///
/// # Examples
///
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House(u32);
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder(u32);
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House(self.0))
/// }
/// }
///
/// #[derive(Debug, PartialEq, Eq)]
/// enum FancyMatchState {
/// New,
/// Built,
/// Increment,
/// };
///
/// fn fancy_match(site: &mut ConstructionSite<Builder, House>, def: u32) -> FancyMatchState {
/// site.modify_taken_with_return(|site| {
/// use ConstructionSite as C;
/// use FancyMatchState as F;
/// let (prod, state) = match site {
/// C::Void => (House(def), F::New),
/// C::Builder(b) => (b.build().unwrap(), F::Built),
/// C::Product(House(v)) => (House(v + 1), F::Increment),
/// };
/// let prod = ConstructionSite::from_product(prod);
/// (prod, state)
/// })
/// }
///
/// let mut a = ConstructionSite::void();
/// let r = fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(42)));
/// assert_eq!(r, FancyMatchState::New);
///
/// let mut a = ConstructionSite::new(Builder(13));
/// let r = fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(13)));
/// assert_eq!(r, FancyMatchState::Built);
///
/// let r = fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(14)));
/// assert_eq!(r, FancyMatchState::Increment);
/// ```
pub fn modify_taken_with_return<R, F>(&mut self, f: F) -> R
where
F: FnOnce(Self) -> (Self, R),
{
let (site, res) = self.take().apply(f);
self.swap_with(site);
res
}
/// Apply the given function to Self, temporarily converting
/// the mutable reference into an owned value.
///
/// This is useful if you have some function that needs to modify
/// the construction site as an owned value but all you have is a reference.
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House(u32);
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder(u32);
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House(self.0))
/// }
/// }
///
/// fn fancy_match(site: &mut ConstructionSite<Builder, House>, def: u32) {
/// site.modify_taken(|site| {
/// use ConstructionSite as C;
/// let prod = match site {
/// C::Void => House(def),
/// C::Builder(b) => b.build().unwrap(),
/// C::Product(House(v)) => House(v + 1),
/// };
/// ConstructionSite::from_product(prod)
/// })
/// }
///
/// let mut a = ConstructionSite::void();
/// fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(42)));
///
/// let mut a = ConstructionSite::new(Builder(13));
/// fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(13)));
///
/// fancy_match(&mut a, 42);
/// assert_eq!(a, ConstructionSite::Product(House(14)));
/// ```
pub fn modify_taken<F>(&mut self, f: F)
where
F: FnOnce(Self) -> Self,
{
self.take().apply(f).swap_with_mut(self)
}
/// If this constructions site contains [Self::Builder], call the inner [Build]'s [Build::build]
/// and have the construction site contain a product.
///
/// # Examples
///
/// See [Self].
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build, ConstructionSiteErectError};
/// use std::convert::Infallible;
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = Infallible;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// let mut a = ConstructionSite::<Builder, House>::void();
/// assert_eq!(a.erect(), Err(ConstructionSiteErectError::IsVoid));
/// assert_eq!(a, ConstructionSite::void());
///
/// let mut a = ConstructionSite::<Builder, House>::from_product(House);
/// assert_eq!(a.erect(), Err(ConstructionSiteErectError::AlreadyBuilt));
/// assert_eq!(a, ConstructionSite::from_product(House));
///
/// let mut a = ConstructionSite::<Builder, House>::new(Builder);
/// a.erect().unwrap();
/// assert_eq!(a, ConstructionSite::from_product(House));
/// ```
#[allow(clippy::result_unit_err)]
pub fn erect(&mut self) -> Result<(), ConstructionSiteErectError<Builder::Error>> {
self.modify_taken_with_return(|site| {
let builder = match site {
site @ Self::Void => return (site, Err(ConstructionSiteErectError::IsVoid)),
site @ Self::Product(_) => {
return (site, Err(ConstructionSiteErectError::AlreadyBuilt))
}
Self::Builder(builder) => builder,
};
let product = match builder.build() {
Err(e) => {
return (Self::void(), Err(ConstructionSiteErectError::Other(e)));
}
Ok(p) => p,
};
(Self::from_product(product), Ok(()))
})
}
/// Returns `true` if the construction site is [`Void`].
///
/// [`Void`]: ConstructionSite::Void
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.is_void(), true);
/// assert_eq!(Site::Builder(Builder).is_void(), false);
/// assert_eq!(Site::Product(House).is_void(), false);
/// ```
#[must_use]
pub fn is_void(&self) -> bool {
matches!(self, Self::Void)
}
/// Returns `true` if the construction site holds a [`Builder`], i.e. construction is in progress.
///
/// [`Builder`]: ConstructionSite::Builder
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.in_progress(), false);
/// assert_eq!(Site::Builder(Builder).in_progress(), true);
/// assert_eq!(Site::Product(House).in_progress(), false);
/// ```
#[must_use]
pub fn in_progress(&self) -> bool {
matches!(self, Self::Builder(..))
}
/// Returns `true` if the construction site holds a finished [`Product`].
///
/// [`Product`]: ConstructionSite::Product
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.is_available(), false);
/// assert_eq!(Site::Builder(Builder).is_available(), false);
/// assert_eq!(Site::Product(House).is_available(), true);
/// ```
#[must_use]
pub fn is_available(&self) -> bool {
matches!(self, Self::Product(..))
}
/// Returns the value of [Self::Builder]
///
/// # Examples
///
/// ```
/// use rosenpass_util::build::{ConstructionSite, Build};
///
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct House;
/// #[derive(Debug, Eq, PartialEq, Clone, Copy)]
/// struct Builder;
///
/// impl Build<House> for Builder {
/// type Error = anyhow::Error;
///
/// fn build(self) -> Result<House, Self::Error> {
/// Ok(House)
/// }
/// }
///
/// type Site = ConstructionSite<Builder, House>;
///
/// assert_eq!(Site::Void.into_builder(), None);
/// assert_eq!(Site::Builder(Builder).into_builder(), Some(Builder));
/// assert_eq!(Site::Product(House).into_builder(), None);
/// ```
pub fn into_builder(self) -> Option<Builder> {
use ConstructionSite as S;
match self {
S::Builder(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Builder] as a reference
///
/// # Examples
///
/// See [Self::into_builder].
pub fn builder_ref(&self) -> Option<&Builder> {
use ConstructionSite as S;
match self {
S::Builder(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Builder] as a mutable reference
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn builder_mut(&mut self) -> Option<&mut Builder> {
use ConstructionSite as S;
match self {
S::Builder(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Product]
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn into_product(self) -> Option<T> {
use ConstructionSite as S;
match self {
S::Product(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Product] as a reference
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn product_ref(&self) -> Option<&T> {
use ConstructionSite as S;
match self {
S::Product(v) => Some(v),
_ => None,
}
}
/// Returns the value of [Self::Product] as a mutable reference
///
/// # Examples
///
/// Similar to [Self::into_builder].
pub fn product_mut(&mut self) -> Option<&mut T> {
use ConstructionSite as S;
match self {
S::Product(v) => Some(v),
_ => None,
}
}
}

156
util/src/controlflow.rs Normal file
View File

@@ -0,0 +1,156 @@
//! A collection of control flow utility macros
#[macro_export]
/// A simple for loop to repeat $body a number of times
///
/// # Examples
///
/// ```
/// use rosenpass_util::repeat;
/// let mut sum = 0;
/// repeat!(10, {
/// sum += 1;
/// });
/// assert_eq!(sum, 10);
/// ```
macro_rules! repeat {
($times:expr, $body:expr) => {
for _ in 0..($times) {
$body
}
};
}
#[macro_export]
/// Return unless the condition $cond is true, with return value $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::return_unless;
/// fn test_fn() -> i32 {
/// return_unless!(true, 1);
/// 0
/// }
/// assert_eq!(test_fn(), 0);
/// fn test_fn2() -> i32 {
/// return_unless!(false, 1);
/// 0
/// }
/// assert_eq!(test_fn2(), 1);
/// ```
macro_rules! return_unless {
($cond:expr) => {
if !($cond) {
return;
}
};
($cond:expr, $val:expr) => {
if !($cond) {
return $val;
}
};
}
#[macro_export]
/// Return if the condition $cond is true, with return value $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::return_if;
/// fn test_fn() -> i32 {
/// return_if!(true, 1);
/// 0
/// }
/// assert_eq!(test_fn(), 1);
/// fn test_fn2() -> i32 {
/// return_if!(false, 1);
/// 0
/// }
/// assert_eq!(test_fn2(), 0);
/// ```
macro_rules! return_if {
($cond:expr) => {
if $cond {
return;
}
};
($cond:expr, $val:expr) => {
if $cond {
return $val;
}
};
}
#[macro_export]
/// Break out of the loop (with label $val, if given) if the condition is true.
///
/// # Examples
///
/// ```
/// use rosenpass_util::break_if;
/// let mut sum = 0;
/// for i in 0..10 {
/// break_if!(i == 5);
/// sum += 1;
/// }
/// assert_eq!(sum, 5);
/// let mut sum = 0;
/// 'one: for _ in 0..10 {
/// for j in 0..20 {
/// break_if!(j == 5, 'one);
/// sum += 1;
/// }
/// }
/// assert_eq!(sum, 5);
/// ```
macro_rules! break_if {
($cond:expr) => {
if $cond {
break;
}
};
($cond:expr, $val:tt) => {
if $cond {
break $val;
}
};
}
#[macro_export]
/// Continue if the condition is true, in the loop with label $val, if given.
///
/// # Examples
///
/// ```
/// use rosenpass_util::continue_if;
/// let mut sum = 0;
/// for i in 0..10 {
/// continue_if!(i == 5);
/// sum += 1;
/// }
/// assert_eq!(sum, 9);
/// let mut sum = 0;
/// 'one: for i in 0..10 {
/// continue_if!(i == 5, 'one);
/// sum += 1;
/// }
/// assert_eq!(sum, 9);
/// ```
macro_rules! continue_if {
($cond:expr) => {
if $cond {
continue;
}
};
($cond:expr, $val:tt) => {
if $cond {
continue $val;
}
};
}

View File

@@ -1,21 +1,94 @@
use rustix::{
fd::{AsFd, BorrowedFd, FromRawFd, OwnedFd, RawFd},
io::{fcntl_dupfd_cloexec, DupFlags},
};
//! Utilities for working with file descriptors
use crate::mem::Forgetting;
use anyhow::bail;
use rustix::io::fcntl_dupfd_cloexec;
use std::os::fd::{AsFd, BorrowedFd, FromRawFd, OwnedFd, RawFd};
use crate::{mem::Forgetting, result::OkExt};
/// Prepare a file descriptor for use in Rust code.
///
/// Checks if the file descriptor is valid and duplicates it to a new file descriptor.
/// The old file descriptor is masked to avoid a potential use-after-free (of the file
/// descriptor number) in case the given file descriptor is still used somewhere.
///
/// # Panic and safety
///
/// Will panic if the given file descriptor is negative or larger than
/// the file descriptor numbers permitted by the operating system.
///
/// # Examples
///
/// ```
/// use std::io::Write;
/// use std::os::fd::{IntoRawFd, AsRawFd};
/// use tempfile::tempdir;
/// use rosenpass_util::fd::{claim_fd, FdIo};
///
/// // Open a file and turn it into a raw file descriptor
/// let orig = tempfile::tempfile()?.into_raw_fd();
///
/// // Claim that file descriptor and ready it for writing
/// let mut claimed = FdIo(claim_fd(orig)?);
///
/// // A different file descriptor is used
/// assert!(orig.as_raw_fd() != claimed.0.as_raw_fd());
///
/// // Write some data
/// claimed.write_all(b"Hello, World!")?;
///
/// Ok::<(), std::io::Error>(())
/// ```
pub fn claim_fd(fd: RawFd) -> rustix::io::Result<OwnedFd> {
let new = clone_fd_cloexec(unsafe { BorrowedFd::borrow_raw(fd) })?;
mask_fd(fd)?;
Ok(new)
}
/// Prepare a file descriptor for use in Rust code.
///
/// Checks if the file descriptor is valid.
///
/// Unlike [claim_fd], this will try to reuse the same file descriptor identifier instead of masking it.
///
/// # Panic and safety
///
/// Will panic if the given file descriptor is negative or larger than
/// the file descriptor numbers permitted by the operating system.
///
/// # Examples
///
/// ```
/// use std::io::Write;
/// use std::os::fd::IntoRawFd;
/// use tempfile::tempdir;
/// use rosenpass_util::fd::{claim_fd_inplace, FdIo};
///
/// // Open a file and turn it into a raw file descriptor
/// let fd = tempfile::tempfile()?.into_raw_fd();
///
/// // Claim that file descriptor in place and ready it for writing
/// let mut fd = FdIo(claim_fd_inplace(fd)?);
///
/// // Write some data
/// fd.write_all(b"Hello, World!")?;
///
/// Ok::<(), std::io::Error>(())
/// ```
pub fn claim_fd_inplace(fd: RawFd) -> rustix::io::Result<OwnedFd> {
let mut new = unsafe { OwnedFd::from_raw_fd(fd) };
let tmp = clone_fd_cloexec(&new)?;
clone_fd_to_cloexec(tmp, &mut new)?;
Ok(new)
}
/// Will close the given file descriptor and overwrite
/// it with a masking file descriptor (see [open_nullfd]) to prevent accidental reuse.
///
/// # Panic and safety
///
/// Will panic if the given file descriptor is negative or larger than
/// the file descriptor numbers permitted by the operating system.
pub fn mask_fd(fd: RawFd) -> rustix::io::Result<()> {
// Safety: because the OwnedFd resulting from OwnedFd::from_raw_fd is wrapped in a Forgetting,
// it never gets dropped, meaning that fd is never closed and thus outlives the OwnedFd
@@ -23,18 +96,28 @@ pub fn mask_fd(fd: RawFd) -> rustix::io::Result<()> {
clone_fd_to_cloexec(open_nullfd()?, &mut owned)
}
/// Duplicate a file descriptor, setting the close on exec flag
pub fn clone_fd_cloexec<Fd: AsFd>(fd: Fd) -> rustix::io::Result<OwnedFd> {
const MINFD: RawFd = 3; // Avoid stdin, stdout, and stderr
/// Avoid stdin, stdout, and stderr
const MINFD: RawFd = 3;
fcntl_dupfd_cloexec(fd, MINFD)
}
/// Duplicate a file descriptor, setting the close on exec flag.
///
/// This is slightly different from [clone_fd_cloexec], as this function supports specifying an
/// explicit destination file descriptor.
#[cfg(target_os = "linux")]
pub fn clone_fd_to_cloexec<Fd: AsFd>(fd: Fd, new: &mut OwnedFd) -> rustix::io::Result<()> {
use rustix::io::dup3;
use rustix::io::{dup3, DupFlags};
dup3(fd, new, DupFlags::CLOEXEC)
}
#[cfg(not(target_os = "linux"))]
/// Duplicate a file descriptor, setting the close on exec flag.
///
/// This is slightly different from [clone_fd_cloexec], as this function supports specifying an
/// explicit destination file descriptor.
pub fn clone_fd_to_cloexec<Fd: AsFd>(fd: Fd, new: &mut OwnedFd) -> rustix::io::Result<()> {
use rustix::io::{dup2, fcntl_setfd, FdFlags};
dup2(&fd, new)?;
@@ -42,9 +125,320 @@ pub fn clone_fd_to_cloexec<Fd: AsFd>(fd: Fd, new: &mut OwnedFd) -> rustix::io::R
}
/// Open a "blocked" file descriptor. I.e. a file descriptor that is neither meant for reading nor
/// writing
/// writing.
///
/// # Safety
///
/// The behavior of the file descriptor when being written to or read from is undefined.
///
/// # Examples
///
/// ```
/// use std::{fs::File, io::Write, os::fd::IntoRawFd};
/// use rustix::fd::FromRawFd;
/// use rosenpass_util::fd::open_nullfd;
///
/// let nullfd = open_nullfd().unwrap();
/// ```
pub fn open_nullfd() -> rustix::io::Result<OwnedFd> {
use rustix::fs::{open, Mode, OFlags};
// TODO: Add tests showing that this will throw errors on use
open("/dev/null", OFlags::CLOEXEC, Mode::empty())
}
/// Convert low level errors into std::io::Error
///
/// # Examples
///
/// ```
/// use std::io::ErrorKind as EK;
/// use rustix::io::Errno;
/// use rosenpass_util::fd::IntoStdioErr;
///
/// let e = Errno::INTR.into_stdio_err();
/// assert!(matches!(e.kind(), EK::Interrupted));
///
/// let r : rustix::io::Result<()> = Err(Errno::INTR);
/// assert!(matches!(r, Err(e) if e.kind() == EK::Interrupted));
/// ```
pub trait IntoStdioErr {
/// Target type produced (e.g. `std::io::Error` or `std::io::Result`, depending on context)
type Target;
/// Convert low-level errors into [Self::Target]
fn into_stdio_err(self) -> Self::Target;
}
impl IntoStdioErr for rustix::io::Errno {
type Target = std::io::Error;
fn into_stdio_err(self) -> Self::Target {
std::io::Error::from_raw_os_error(self.raw_os_error())
}
}
impl<T> IntoStdioErr for rustix::io::Result<T> {
type Target = std::io::Result<T>;
fn into_stdio_err(self) -> Self::Target {
self.map_err(IntoStdioErr::into_stdio_err)
}
}
/// Read and write directly from a file descriptor
///
/// # Examples
///
/// See [claim_fd].
pub struct FdIo<Fd: AsFd>(pub Fd);
impl<Fd: AsFd> std::io::Read for FdIo<Fd> {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
rustix::io::read(&self.0, buf).into_stdio_err()
}
}
impl<Fd: AsFd> std::io::Write for FdIo<Fd> {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
rustix::io::write(&self.0, buf).into_stdio_err()
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
/// Helpers for accessing stat(2) information
pub trait StatExt {
/// Check if the file is a socket
///
/// # Examples
///
/// ```
/// use rosenpass_util::fd::StatExt;
/// assert!(rustix::fs::stat("/")?.is_socket() == false);
/// Ok::<(), rustix::io::Errno>(())
/// ```
fn is_socket(&self) -> bool;
}
impl StatExt for rustix::fs::Stat {
fn is_socket(&self) -> bool {
use rustix::fs::FileType;
let ft = FileType::from_raw_mode(self.st_mode);
matches!(ft, FileType::Socket)
}
}
/// Helpers for accessing stat(2) information on an open file descriptor
pub trait TryStatExt {
/// Error type returned by operations
type Error;
/// Check if the file is a socket
///
/// # Examples
///
/// ```
/// use rosenpass_util::fd::TryStatExt;
/// let fd = rustix::fs::open("/", rustix::fs::OFlags::empty(), rustix::fs::Mode::empty())?;
/// assert!(matches!(fd.is_socket(), Ok(false)));
/// Ok::<(), rustix::io::Errno>(())
/// ```
fn is_socket(&self) -> Result<bool, Self::Error>;
}
impl<T> TryStatExt for T
where
T: AsFd,
{
type Error = rustix::io::Errno;
fn is_socket(&self) -> Result<bool, Self::Error> {
rustix::fs::fstat(self)?.is_socket().ok()
}
}
/// Determine the type of socket a file descriptor represents
pub trait GetSocketType {
/// Error type returned by operations in this trait
type Error;
/// Look up the socket; see [rustix::net::sockopt::get_socket_type]
fn socket_type(&self) -> Result<rustix::net::SocketType, Self::Error>;
/// Checks if the socket is a datagram socket
fn is_datagram_socket(&self) -> Result<bool, Self::Error> {
use rustix::net::SocketType;
matches!(self.socket_type()?, SocketType::DGRAM).ok()
}
/// Checks if the socket is a stream socket
fn is_stream_socket(&self) -> Result<bool, Self::Error> {
Ok(self.socket_type()? == rustix::net::SocketType::STREAM)
}
}
impl<T> GetSocketType for T
where
T: AsFd,
{
type Error = rustix::io::Errno;
fn socket_type(&self) -> Result<rustix::net::SocketType, Self::Error> {
rustix::net::sockopt::get_socket_type(self)
}
}
/// Distinguish different socket address families, e.g. IP and unix sockets
#[cfg(target_os = "linux")]
pub trait GetSocketDomain {
/// Error type returned by operations in this trait
type Error;
/// Retrieve the socket domain (address family)
fn socket_domain(&self) -> Result<rustix::net::AddressFamily, Self::Error>;
/// Alias for [Self::socket_domain]
fn socket_address_family(&self) -> Result<rustix::net::AddressFamily, Self::Error> {
self.socket_domain()
}
/// Check if the underlying socket is a unix domain socket
fn is_unix_socket(&self) -> Result<bool, Self::Error> {
Ok(self.socket_domain()? == rustix::net::AddressFamily::UNIX)
}
}
#[cfg(target_os = "linux")]
impl<T> GetSocketDomain for T
where
T: AsFd,
{
type Error = rustix::io::Errno;
fn socket_domain(&self) -> Result<rustix::net::AddressFamily, Self::Error> {
rustix::net::sockopt::get_socket_domain(self)
}
}
/// Distinguish different types of unix sockets
#[cfg(target_os = "linux")]
pub trait GetUnixSocketType {
/// Error type returned by operations in this trait
type Error;
/// Check if the socket is a unix stream socket
fn is_unix_stream_socket(&self) -> Result<bool, Self::Error>;
/// Returns Ok(()) only if the underlying socket is a unix stream socket
fn demand_unix_stream_socket(&self) -> anyhow::Result<()>;
}
#[cfg(target_os = "linux")]
impl<T> GetUnixSocketType for T
where
T: GetSocketType + GetSocketDomain<Error = <T as GetSocketType>::Error>,
anyhow::Error: From<<T as GetSocketType>::Error>,
{
type Error = <T as GetSocketType>::Error;
fn is_unix_stream_socket(&self) -> Result<bool, Self::Error> {
Ok(self.is_unix_socket()? && self.is_stream_socket()?)
}
fn demand_unix_stream_socket(&self) -> anyhow::Result<()> {
use rustix::net::AddressFamily as SA;
use rustix::net::SocketType as ST;
match (self.socket_domain()?, self.socket_type()?) {
(SA::UNIX, ST::STREAM) => Ok(()),
(SA::UNIX, mode) => bail!("Expected unix socket in stream mode, but mode is {mode:?}"),
(domain, _) => bail!("Expected unix socket, but socket domain is {domain:?} instead"),
}
}
}
#[cfg(target_os = "linux")]
/// Distinguish between different network socket protocols (e.g. tcp, udp)
pub trait GetSocketProtocol {
/// Retrieve the socket protocol
fn socket_protocol(&self) -> Result<Option<rustix::net::Protocol>, rustix::io::Errno>;
/// Check if the socket is a udp socket
fn is_udp_socket(&self) -> Result<bool, rustix::io::Errno> {
self.socket_protocol()?
.map(|p| p == rustix::net::ipproto::UDP)
.unwrap_or(false)
.ok()
}
/// Return Ok(()) only if the socket is a udp socket
fn demand_udp_socket(&self) -> anyhow::Result<()> {
match self.socket_protocol() {
Ok(Some(rustix::net::ipproto::UDP)) => Ok(()),
Ok(Some(other_proto)) => {
bail!("Not a udp socket, instead socket protocol is: {other_proto:?}")
}
Ok(None) => bail!("getsockopt() returned empty value"),
Err(errno) => Err(errno.into_stdio_err())?,
}
}
}
#[cfg(target_os = "linux")]
impl<T> GetSocketProtocol for T
where
T: AsFd,
{
fn socket_protocol(&self) -> Result<Option<rustix::net::Protocol>, rustix::io::Errno> {
rustix::net::sockopt::get_socket_protocol(self)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::io::{Read, Write};
#[test]
#[should_panic(expected = "fd != u32::MAX as RawFd")]
fn test_claim_fd_invalid_neg() {
let _ = claim_fd(-1);
}
#[test]
#[should_panic(expected = "fd != u32::MAX as RawFd")]
fn test_claim_fd_invalid_max() {
let _ = claim_fd(i64::MAX as RawFd);
}
#[test]
#[should_panic]
fn test_claim_fd_inplace_invalid_neg() {
let _ = claim_fd_inplace(-1);
}
#[test]
#[should_panic]
fn test_claim_fd_inplace_invalid_max() {
let _ = claim_fd_inplace(i64::MAX as RawFd);
}
#[test]
#[should_panic]
fn test_mask_fd_invalid_neg() {
let _ = mask_fd(-1);
}
#[test]
#[should_panic]
fn test_mask_fd_invalid_max() {
let _ = mask_fd(i64::MAX as RawFd);
}
#[test]
fn test_open_nullfd() -> anyhow::Result<()> {
let mut file = FdIo(open_nullfd()?);
let mut buf = [0; 10];
assert!(matches!(file.read(&mut buf), Ok(0) | Err(_)));
assert!(matches!(file.write(&buf), Err(_)));
Ok(())
}
#[test]
fn test_nullfd_read_write() {
let nullfd = open_nullfd().unwrap();
let mut buf = vec![0u8; 16];
assert_eq!(rustix::io::read(&nullfd, &mut buf).unwrap(), 0);
assert!(rustix::io::write(&nullfd, b"test").is_err());
}
}

View File

@@ -1,15 +1,45 @@
//! Helpers for working with files
use anyhow::ensure;
use std::fs::File;
use std::io::Read;
use std::os::unix::fs::OpenOptionsExt;
use std::{fs::OpenOptions, path::Path};
/// Level of secrecy applied for a file
pub enum Visibility {
/// The file might contain a public key
Public,
/// The file might contain a secret key
Secret,
}
/// Open a file writable
/// Open a file for writing, truncating it.
///
/// Sensible default permissions are chosen based on the value of `visibility`
///
/// # Examples
///
/// ```
/// use std::io::{Write, Read};
/// use tempfile::tempdir;
/// use rosenpass_util::file::{fopen_r, fopen_w, Visibility};
///
/// const CONTENTS : &[u8] = b"Hello World";
///
/// let dir = tempdir()?;
/// let path = dir.path().join("secret_key");
///
/// let mut f = fopen_w(&path, Visibility::Secret)?;
/// f.write_all(CONTENTS)?;
///
/// let mut f = fopen_r(&path)?;
/// let mut b = Vec::new();
/// f.read_to_end(&mut b)?;
/// assert_eq!(CONTENTS, &b);
///
/// Ok::<(), std::io::Error>(())
/// ```
pub fn fopen_w<P: AsRef<Path>>(path: P, visibility: Visibility) -> std::io::Result<File> {
let mut options = OpenOptions::new();
options.create(true).write(true).read(false).truncate(true);
@@ -19,7 +49,12 @@ pub fn fopen_w<P: AsRef<Path>>(path: P, visibility: Visibility) -> std::io::Resu
};
options.open(path)
}
/// Open a file readable
/// Open a file for reading
///
/// # Examples
///
/// See [fopen_w].
pub fn fopen_r<P: AsRef<Path>>(path: P) -> std::io::Result<File> {
OpenOptions::new()
.read(true)
@@ -29,9 +64,47 @@ pub fn fopen_r<P: AsRef<Path>>(path: P) -> std::io::Result<File> {
.open(path)
}
/// Extension trait for [std::io::Read] adding [read_slice_to_end]
pub trait ReadSliceToEnd {
/// Error type returned by functions in this trait
type Error;
/// Read slice asserting that the length of the data to read is at most
/// as long as the buffer to read into
///
/// Note that this *may* append data read to [buf] even if the function fails,
/// so the caller should make no assumptions about the contents of the buffer
/// after calling read_slice_to_end if the result is an error.
///
/// # Examples
///
/// ```
/// use rosenpass_util::file::ReadSliceToEnd;
///
/// const DATA : &[u8] = b"Hello World";
///
/// // It is OK if file and buffer are equally long
/// let mut buf = vec![b' '; 11];
/// let res = Clone::clone(&DATA).read_slice_to_end(&mut buf[..DATA.len()]);
/// assert!(res.is_ok()); // Read is overlong
/// assert_eq!(buf, DATA); // Finally, data was successfully read
///
/// // It is OK if the buffer is longer than the file
/// let mut buf = vec![b' '; 16];
/// let res = Clone::clone(&DATA).read_slice_to_end(&mut buf);
/// assert!(matches!(res, Ok(11)));
/// assert_eq!(buf, b"Hello World     "); // Data was read; the unused tail keeps its padding
///
/// // It is not OK if the buffer is shorter than the file
/// let mut buf = vec![b' '; 5];
/// let res = Clone::clone(&DATA).read_slice_to_end(&mut buf);
/// assert!(res.is_err());
///
/// // THE BUFFER MAY STILL BE FILLED THOUGH, BUT THIS IS NOT GUARANTEED
/// assert_eq!(buf, b"Hello"); // Data was still read to the buffer!
///
/// Ok::<(), std::io::Error>(())
/// ```
fn read_slice_to_end(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error>;
}
@@ -53,9 +126,50 @@ impl<R: Read> ReadSliceToEnd for R {
}
}
/// Extension trait for [std::io::Read] adding [read_exact_to_end]
pub trait ReadExactToEnd {
/// Error type returned by functions in this trait
type Error;
/// Read slice asserting that the length of the data to be read
/// and the buffer are exactly the same length.
///
/// Note that this *may* append data read to [buf] even if the function fails,
/// so the caller should make no assumptions about the contents of the buffer
/// after calling read_exact_to_end if the result is an error.
///
/// # Examples
///
/// ```
/// use rosenpass_util::file::ReadExactToEnd;
///
/// const DATA : &[u8] = b"Hello World";
///
/// // It is OK if file and buffer are equally long
/// let mut buf = vec![b' '; 11];
/// let res = Clone::clone(&DATA).read_exact_to_end(&mut buf[..DATA.len()]);
/// assert!(res.is_ok()); // Read succeeds
/// assert_eq!(buf, DATA); // Data was successfully read
///
/// // It is not OK if the buffer is longer than the file
/// let mut buf = vec![b' '; 16];
/// let res = Clone::clone(&DATA).read_exact_to_end(&mut buf);
/// assert!(res.is_err());
///
/// // THE BUFFER MAY STILL BE FILLED THOUGH, BUT THIS IS NOT GUARANTEED
/// // The read implementation for &[u8] happens not to do this
/// assert_eq!(buf, b"                "); // Here the buffer was left untouched
///
/// // It is not OK if the buffer is shorter than the file
/// let mut buf = vec![b' '; 5];
/// let res = Clone::clone(&DATA).read_exact_to_end(&mut buf);
/// assert!(res.is_err());
///
/// // THE BUFFER MAY STILL BE FILLED THOUGH, BUT THIS IS NOT GUARANTEED
/// assert_eq!(buf, b"Hello"); // Data was still read to the buffer!
///
/// Ok::<(), std::io::Error>(())
/// ```
fn read_exact_to_end(&mut self, buf: &mut [u8]) -> Result<(), Self::Error>;
}
@@ -70,47 +184,279 @@ impl<R: Read> ReadExactToEnd for R {
}
}
/// Load a value from a file
pub trait LoadValue {
/// Error type returned
type Error;
/// Load a value from a file
///
/// # Examples
///
/// ```
/// use std::path::Path;
/// use std::io::Write;
/// use tempfile::tempdir;
/// use rosenpass_util::file::{fopen_r, fopen_w, LoadValue, ReadExactToEnd, StoreValue, Visibility};
///
/// #[derive(Debug, PartialEq, Eq)]
/// struct MyInt(pub u32);
///
/// impl StoreValue for MyInt {
/// type Error = std::io::Error;
///
/// fn store<P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error> {
/// let mut f = fopen_w(path, Visibility::Public)?;
/// f.write_all(&self.0.to_le_bytes())
/// }
/// }
///
/// impl LoadValue for MyInt {
/// type Error = anyhow::Error;
///
/// fn load<P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
/// where
/// Self: Sized,
/// {
/// let mut b = [0u8; 4];
/// fopen_r(path)?.read_exact_to_end(&mut b)?;
/// Ok(MyInt(u32::from_le_bytes(b)))
/// }
/// }
///
/// let dir = tempdir()?;
/// let path = dir.path().join("my_int");
///
/// let orig = MyInt(17);
/// orig.store(&path)?;
///
/// let copy = MyInt::load(&path)?;
/// assert_eq!(orig, copy);
///
/// Ok::<(), anyhow::Error>(())
/// ```
fn load<P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
where
Self: Sized;
}
/// Load a value from a file encoded as base64
pub trait LoadValueB64 {
/// Error type returned
type Error;
/// Load a value from a file encoded as base64
///
/// # Examples
///
/// ```
/// use std::path::Path;
/// use tempfile::tempdir;
/// use rosenpass_util::b64::{b64_decode, b64_encode};
/// use rosenpass_util::file::{
/// fopen_r, fopen_w, LoadValueB64, ReadSliceToEnd, StoreValueB64, StoreValueB64Writer,
/// Visibility,
/// };
///
/// #[derive(Debug, PartialEq, Eq)]
/// struct MyInt(pub u32);
///
/// impl StoreValueB64Writer for MyInt {
/// type Error = anyhow::Error;
///
/// fn store_b64_writer<const F: usize, W: std::io::Write>(
/// &self,
/// mut writer: W,
/// ) -> Result<(), Self::Error> {
/// // Note, while writing this example: the way this API currently
/// // deals with buffer lengths is quite awkward and could use
/// // a cleanup.
/// let mut buf = [0u8; F];
/// let b64 = b64_encode(&self.0.to_le_bytes(), &mut buf)?;
/// writer.write_all(b64.as_bytes())?;
/// Ok(())
/// }
/// }
///
/// impl StoreValueB64 for MyInt {
/// type Error = anyhow::Error;
///
/// fn store_b64<const F: usize, P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>
/// where
/// Self: Sized,
/// {
/// // The buffer length (first generic arg) is kind of an upper bound
/// self.store_b64_writer::<F, _>(fopen_w(path, Visibility::Public)?)
/// }
/// }
///
/// impl LoadValueB64 for MyInt {
/// type Error = anyhow::Error;
///
/// fn load_b64<const F: usize, P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
/// where
/// Self: Sized,
/// {
/// // The buffer length is kind of an upper bound
/// let mut b64_buf = [0u8; F];
/// let b64_len = fopen_r(path)?.read_slice_to_end(&mut b64_buf)?;
/// let b64_dat = &b64_buf[..b64_len];
///
/// let mut buf = [0u8; 4];
/// b64_decode(b64_dat, &mut buf)?;
/// Ok(MyInt(u32::from_le_bytes(buf)))
/// }
/// }
///
/// let dir = tempdir()?;
/// let path = dir.path().join("my_int");
///
/// let orig = MyInt(17);
/// orig.store_b64::<10, _>(&path)?;
///
/// let copy = MyInt::load_b64::<10, _>(&path)?;
/// assert_eq!(orig, copy);
///
/// Ok::<(), anyhow::Error>(())
/// ```
fn load_b64<const F: usize, P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
where
Self: Sized;
}
/// Store a value encoded as base64 in a file.
pub trait StoreValueB64 {
/// Error type returned
type Error;
/// Store a value encoded as base64 in a file.
///
/// # Examples
///
/// See [LoadValueB64::load_b64].
fn store_b64<const F: usize, P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>
where
Self: Sized;
}
/// Store a value encoded as base64 to a writable stream
pub trait StoreValueB64Writer {
/// Error type returned
type Error;
/// Store a value encoded as base64 to a writable stream
///
/// # Examples
///
/// See [LoadValueB64::load_b64].
fn store_b64_writer<const F: usize, W: std::io::Write>(
&self,
writer: W,
) -> Result<(), Self::Error>;
}
/// Store a value in a file
pub trait StoreValue {
/// Error type returned
type Error;
/// Store a value in a file
///
/// # Examples
///
/// See [LoadValue::load].
fn store<P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>;
}
pub trait DisplayValueB64 {
type Error;
#[cfg(test)]
mod tests {
use super::*;
use std::fs::File;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;
use tempfile::tempdir;
fn display_b64<'o>(&self, output: &'o mut [u8]) -> Result<&'o str, Self::Error>;
#[test]
fn test_fopen_w_public() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = fopen_w(path, Visibility::Public).unwrap();
file.write_all(b"test").unwrap();
let metadata = file.metadata().unwrap();
let permissions = metadata.permissions();
assert_eq!(permissions.mode(), 0o100644);
}
#[test]
fn test_fopen_w_secret() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = fopen_w(path, Visibility::Secret).unwrap();
file.write_all(b"test").unwrap();
let metadata = file.metadata().unwrap();
let permissions = metadata.permissions();
assert_eq!(permissions.mode(), 0o100600);
}
#[test]
fn test_fopen_r() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = File::create(path.clone()).unwrap();
file.write_all(b"test").unwrap();
let mut contents = String::new();
let mut file = fopen_r(path).unwrap();
file.read_to_string(&mut contents).unwrap();
assert_eq!(contents, "test");
}
#[test]
fn test_read_slice_to_end() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = File::create(path.clone()).unwrap();
file.write_all(b"test").unwrap();
let mut buf = [0u8; 4];
let mut file = fopen_r(path).unwrap();
file.read_slice_to_end(&mut buf).unwrap();
assert_eq!(buf, [116, 101, 115, 116]);
}
#[test]
fn test_read_exact_to_end() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = File::create(path.clone()).unwrap();
file.write_all(b"test").unwrap();
let mut buf = [0u8; 4];
let mut file = fopen_r(path).unwrap();
file.read_exact_to_end(&mut buf).unwrap();
assert_eq!(buf, [116, 101, 115, 116]);
}
#[test]
fn test_read_exact_to_end_too_long() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = File::create(path.clone()).unwrap();
file.write_all(b"test").unwrap();
let mut buf = [0u8; 3];
let mut file = fopen_r(path).unwrap();
let result = file.read_exact_to_end(&mut buf);
assert!(result.is_err());
assert_eq!(result.unwrap_err().to_string(), "File too long!");
}
#[test]
fn test_read_slice_to_end_too_long() {
let tmp_dir = tempdir().unwrap();
let path = tmp_dir.path().join("test");
let mut file = File::create(path.clone()).unwrap();
file.write_all(b"test").unwrap();
let mut buf = [0u8; 3];
let mut file = fopen_r(path).unwrap();
let result = file.read_slice_to_end(&mut buf);
assert!(result.is_err());
assert_eq!(result.unwrap_err().to_string(), "File too long!");
}
}

View File

@@ -1,19 +1,270 @@
pub fn mutating<T, F>(mut v: T, f: F) -> T
//! Syntax sugar & helpers for a functional programming style and method chains
/// Mutate a value; mostly syntactic sugar
///
/// # Examples
///
/// ```
/// use std::borrow::Borrow;
/// use rosenpass_util::functional::{mutating, MutatingExt, sideeffect, SideffectExt, ApplyExt};
/// use rosenpass_util::mem::DiscardResultExt;
///
/// // Say you have a function that takes a mutable reference
/// fn replace<T: Copy + Eq>(slice: &mut [T], targ: T, by: T) {
/// for val in slice.iter_mut() {
/// if *val == targ {
/// *val = by;
/// }
/// }
/// }
///
/// // Or you have some action that you want to perform as a side effect
/// fn count<T: Copy + Eq>(accumulator: &mut usize, slice: &[T], targ: T) {
/// *accumulator += slice.iter()
/// .filter(|e| *e == &targ)
/// .count();
/// }
///
/// // Let's say you also have a function that actually modifies the value
/// fn rot2<const N : usize>(slice: [u8; N]) -> [u8; N] {
/// let it = slice.iter()
/// .cycle()
/// .skip(2)
/// .take(N);
///
/// let mut ret = [0u8; N];
/// for (no, elm) in it.enumerate() {
/// ret[no] = *elm;
/// }
///
/// ret
/// }
///
/// // Then these functions are kind of clunky to use in an expression;
/// // it can be done, but the resulting code is a bit verbose
/// let mut accu = 0;
/// assert_eq!(b"llo_WorldHe", &{
/// let mut buf = b"Hello World".to_owned();
/// count(&mut accu, &buf, b'l');
/// replace(&mut buf, b' ', b'_');
/// rot2(buf)
/// });
/// assert_eq!(accu, 3);
///
/// // Instead you could use mutating for a slightly prettier syntax,
/// // but this only makes sense if you want to apply a single action
/// assert_eq!(b"Hello_World",
/// &mutating(b"Hello World".to_owned(), |buf|
/// replace(buf, b' ', b'_')));
///
/// // The same is the case for sideeffect()
/// assert_eq!(b"Hello World",
/// &sideeffect(b"Hello World".to_owned(), |buf|
/// count(&mut accu, buf, b'l')));
/// assert_eq!(accu, 6);
///
/// // Calling rot2 on its own is straightforward of course
/// assert_eq!(b"llo WorldHe", &rot2(b"Hello World".to_owned()));
///
/// // These operations can be conveniently used in a method chain
/// // by using the extension traits.
/// //
/// // This is also quite handy if you just need to
/// // modify a value in a long method chain.
/// //
/// // Here apply() also comes in quite handy, because we can use it
/// // to transform the value itself rather than mutating it in place.
/// assert_eq!(b"llo_WorldHe",
/// b"Hello World"
/// .to_owned()
/// .sideeffect(|buf| count(&mut accu, buf, b'l'))
/// .mutating(|buf| replace(buf, b' ', b'_'))
/// .apply(rot2)
/// .borrow() as &[u8]);
/// assert_eq!(accu, 9);
///
/// // There is also the mutating_mut variant, which can operate on any mutable reference;
/// // this is mainly useful in a method chain if you are dealing with a mutable reference.
/// //
/// // This example is quite artificial though.
/// assert_eq!(b"llo_WorldHe",
/// b"hello world"
/// .to_owned()
/// .mutating(|buf|
/// // Cannot use sideeffect_ref at the start, because it drops the
/// // mutable reference status
/// buf.sideeffect_mut(|buf| count(&mut accu, buf, b'l'))
/// .mutating_mut(|buf| replace(buf, b' ', b'_'))
/// .mutating_mut(|buf| replace(buf, b'h', b'H'))
/// .mutating_mut(|buf| replace(buf, b'w', b'W'))
/// // Using rot2 is more complex now
/// .mutating_mut(|buf| {
/// *buf = rot2(*buf);
/// })
/// // Can use sideeffect_ref at the end, because we no longer need
/// // the &mut reference
/// .sideeffect_ref(|buf| count(&mut accu, *buf, b'l'))
/// // And we can use apply to fix the return value if we really want to go
/// // crazy and avoid using a {} block
/// .apply(|_| ())
/// // [crate::mem::DiscardResultExt::discard_result] does the same job and is more explicit.
/// .discard_result())
/// .borrow() as &[u8]);
/// assert_eq!(accu, 15);
/// ```
pub fn mutating<T, F>(mut v: T, mut f: F) -> T
where
F: Fn(&mut T),
F: FnMut(&mut T),
{
f(&mut v);
v
}
pub fn sideeffect<T, F>(v: T, f: F) -> T
/// Mutating values on the fly in a method chain
pub trait MutatingExt {
/// Mutating values on the fly in a method chain (owning)
///
/// # Examples
///
/// See [mutating].
fn mutating<F>(self, f: F) -> Self
where
F: FnMut(&mut Self);
/// Mutating values on the fly in a method chain (non-owning)
///
/// # Examples
///
/// See [mutating].
fn mutating_mut<F>(&mut self, f: F) -> &mut Self
where
F: FnMut(&mut Self);
}
impl<T> MutatingExt for T {
fn mutating<F>(self, f: F) -> Self
where
F: FnMut(&mut Self),
{
mutating(self, f)
}
fn mutating_mut<F>(&mut self, mut f: F) -> &mut Self
where
F: FnMut(&mut Self),
{
f(self);
self
}
}
/// Apply a sideeffect using some value in an expression
///
/// # Examples
///
/// See [mutating].
pub fn sideeffect<T, F>(v: T, mut f: F) -> T
where
F: Fn(&T),
F: FnMut(&T),
{
f(&v);
v
}
/// Apply sideeffect on the fly in a method chain
pub trait SideffectExt {
/// Apply sideeffect on the fly in a method chain (owning)
///
/// # Examples
///
/// See [mutating].
fn sideeffect<F>(self, f: F) -> Self
where
F: FnMut(&Self);
/// Apply sideeffect on the fly in a method chain (immutable ref)
///
/// # Examples
///
/// See [mutating].
fn sideeffect_ref<F>(&self, f: F) -> &Self
where
F: FnMut(&Self);
/// Apply sideeffect on the fly in a method chain (mutable ref)
///
/// # Examples
///
/// See [mutating].
fn sideeffect_mut<F>(&mut self, f: F) -> &mut Self
where
F: FnMut(&Self);
}
impl<T> SideffectExt for T {
fn sideeffect<F>(self, f: F) -> Self
where
F: FnMut(&Self),
{
sideeffect(self, f)
}
fn sideeffect_ref<F>(&self, mut f: F) -> &Self
where
F: FnMut(&Self),
{
f(self);
self
}
fn sideeffect_mut<F>(&mut self, mut f: F) -> &mut Self
where
F: FnMut(&Self),
{
f(self);
self
}
}
/// Just run the function
///
/// This is occasionally useful; in particular, you can
/// use it to scope the question mark operator to an inner closure.
///
/// # Examples
///
/// ```
/// use rosenpass_util::functional::run;
///
/// fn add_and_mul(a: Option<u32>, b: Option<u32>, c: anyhow::Result<u32>, d: anyhow::Result<u32>) -> u32 {
/// run(|| -> anyhow::Result<u32> {
/// let ab = run(|| Some(a? * b?)).unwrap_or(0);
/// Ok(ab + c? + d?)
/// }).unwrap()
/// }
///
/// assert_eq!(98, add_and_mul(Some(10), Some(9), Ok(3), Ok(5)));
/// assert_eq!(8, add_and_mul(None, Some(15), Ok(3), Ok(5)));
/// ```
pub fn run<R, F: FnOnce() -> R>(f: F) -> R {
f()
}
/// Apply a function to a value in a method chain
pub trait ApplyExt: Sized {
/// Apply a function to a value in a method chain
///
/// # Examples
///
/// See [mutating].
fn apply<R, F>(self, f: F) -> R
where
F: FnOnce(Self) -> R;
}
impl<T: Sized> ApplyExt for T {
fn apply<R, F>(self, f: F) -> R
where
F: FnOnce(Self) -> R,
{
f(self)
}
}

View File

@@ -1,6 +1,262 @@
//! Helpers for performing IO
//!
//! # IO Error handling helpers tutorial
//!
//! ```
//! use std::io::ErrorKind as EK;
//!
//! // It can be a bit hard to use IO errors in match statements
//!
//! fn io_placeholder() -> std::io::Result<()> {
//! Ok(())
//! }
//!
//! loop {
//! match io_placeholder() {
//! Ok(()) => break,
//! // All errors are unreachable; just here for demo purposes
//! Err(e) if e.kind() == EK::Interrupted => continue,
//! Err(e) if e.kind() == EK::WouldBlock => {
//! panic!("This particular function is not designed to be used in nonblocking code!");
//! }
//! Err(e) => Err(e)?,
//! }
//! }
//!
//! // For this reason this module contains various helper functions to make
//! // matching on error kinds a bit less repetitive. [IoResultKindHintExt::io_err_kind_hint]
//! // provides the basic functionality for use mostly with std::io::Result
//!
//! use rosenpass_util::io::IoResultKindHintExt;
//!
//! loop {
//! match io_placeholder().io_err_kind_hint() {
//! Ok(()) => break,
//! // All errors are unreachable; just here for demo purposes
//! Err((_, EK::Interrupted)) => continue,
//! Err((_, EK::WouldBlock)) => {
//! // Unreachable, just here for explanation purposes
//! panic!("This particular function is not designed to be used in nonblocking code!");
//! }
//! Err((e, _)) => Err(e)?,
//! }
//! }
//!
//! // This mechanism can be extended to custom error types; firstly, you can implement IoErrorKind
//! // for error types that can be fully represented as std::io::ErrorKind
//!
//! use rosenpass_util::io::IoErrorKind;
//!
//! #[derive(thiserror::Error, Debug, PartialEq, Eq)]
//! enum MyErrno {
//! #[error("Got interrupted")]
//! Interrupted,
//! #[error("In nonblocking mode")]
//! WouldBlock,
//! }
//!
//! impl IoErrorKind for MyErrno {
//! fn io_error_kind(&self) -> std::io::ErrorKind {
//! use MyErrno as ME;
//! match self {
//! ME::Interrupted => EK::Interrupted,
//! ME::WouldBlock => EK::WouldBlock,
//! }
//! }
//! }
//!
//! assert_eq!(
//! EK::Interrupted,
//! std::io::Error::new(EK::Interrupted, "artificially interrupted").io_error_kind()
//! );
//! assert_eq!(EK::Interrupted, MyErrno::Interrupted.io_error_kind());
//! assert_eq!(EK::WouldBlock, MyErrno::WouldBlock.io_error_kind());
//!
//! // And when an error can not fully be represented as an std::io::ErrorKind,
//! // you can still use [TryIoErrorKind]
//!
//! use rosenpass_util::io::TryIoErrorKind;
//!
//! #[derive(thiserror::Error, Debug, PartialEq, Eq)]
//! enum MyErrnoOrBlue {
//! #[error("Got interrupted")]
//! Interrupted,
//! #[error("In nonblocking mode")]
//! WouldBlock,
//! #[error("I am feeling blue")]
//! FeelingBlue,
//! }
//!
//! impl TryIoErrorKind for MyErrnoOrBlue {
//! fn try_io_error_kind(&self) -> Option<std::io::ErrorKind> {
//! use MyErrnoOrBlue as ME;
//! match self {
//! ME::Interrupted => Some(EK::Interrupted),
//! ME::WouldBlock => Some(EK::WouldBlock),
//! ME::FeelingBlue => None,
//! }
//! }
//! }
//!
//! assert_eq!(
//! Some(EK::Interrupted),
//! MyErrnoOrBlue::Interrupted.try_io_error_kind()
//! );
//! assert_eq!(
//! Some(EK::WouldBlock),
//! MyErrnoOrBlue::WouldBlock.try_io_error_kind()
//! );
//! assert_eq!(None, MyErrnoOrBlue::FeelingBlue.try_io_error_kind());
//!
//! // TryIoErrorKind is automatically implemented for all types that implement
//! // IoErrorKind
//!
//! assert_eq!(
//! Some(EK::Interrupted),
//! std::io::Error::new(EK::Interrupted, "artificially interrupted").try_io_error_kind()
//! );
//! assert_eq!(
//! Some(EK::Interrupted),
//! MyErrno::Interrupted.try_io_error_kind()
//! );
//! assert_eq!(
//! Some(EK::WouldBlock),
//! MyErrno::WouldBlock.try_io_error_kind()
//! );
//!
//! // By implementing IoErrorKind, we can automatically make use of IoResultKindHintExt<T>
//! // with our custom error type
//!
//! //use rosenpass_util::io::IoResultKindHintExt;
//!
//! assert_eq!(
//! Ok::<_, MyErrno>(42).io_err_kind_hint(),
//! Ok(42));
//! assert!(matches!(
//! Err::<(), _>(std::io::Error::new(EK::Interrupted, "artificially interrupted")).io_err_kind_hint(),
//! Err((err, EK::Interrupted)) if format!("{err:?}") == "Custom { kind: Interrupted, error: \"artificially interrupted\" }"));
//! assert_eq!(
//! Err::<(), _>(MyErrno::Interrupted).io_err_kind_hint(),
//! Err((MyErrno::Interrupted, EK::Interrupted)));
//!
//! // Correspondingly, TryIoResultKindHintExt can be used for Results with Errors
//! // that implement TryIoErrorKind
//!
//! use crate::rosenpass_util::io::TryIoResultKindHintExt;
//!
//! assert_eq!(
//! Ok::<_, MyErrnoOrBlue>(42).try_io_err_kind_hint(),
//! Ok(42));
//! assert_eq!(
//! Err::<(), _>(MyErrnoOrBlue::Interrupted).try_io_err_kind_hint(),
//! Err((MyErrnoOrBlue::Interrupted, Some(EK::Interrupted))));
//! assert_eq!(
//! Err::<(), _>(MyErrnoOrBlue::FeelingBlue).try_io_err_kind_hint(),
//! Err((MyErrnoOrBlue::FeelingBlue, None)));
//!
//! // SubstituteForIoErrorKindExt serves as a helper to handle specific ErrorKinds
//! // using a method chaining style. It works on anything that implements TryIoErrorKind.
//!
//! use rosenpass_util::io::SubstituteForIoErrorKindExt;
//!
//! assert_eq!(Ok(42),
//! Err(MyErrnoOrBlue::Interrupted)
//! .substitute_for_ioerr_kind_with(EK::Interrupted, || 42));
//!
//! assert_eq!(Err(MyErrnoOrBlue::WouldBlock),
//! Err(MyErrnoOrBlue::WouldBlock)
//! .substitute_for_ioerr_kind_with(EK::Interrupted, || 42));
//!
//! // The other functions in SubstituteForIoErrorKindExt are mostly just wrappers
//! // that get the same job done in slightly more convenient forms
//!
//! // Plain Ok() value instead of function
//! assert_eq!(Ok(42),
//! Err(MyErrnoOrBlue::Interrupted)
//! .substitute_for_ioerr_kind(EK::Interrupted, 42));
//! assert_eq!(Err(MyErrnoOrBlue::WouldBlock),
//! Err(MyErrnoOrBlue::WouldBlock)
//! .substitute_for_ioerr_kind(EK::Interrupted, 42));
//!
//! // For specific errors
//! assert_eq!(Ok(42),
//! Err(MyErrnoOrBlue::Interrupted)
//! .substitute_for_ioerr_interrupted_with(|| 42)
//! .substitute_for_ioerr_wouldblock_with(|| 23));
//! assert_eq!(Ok(23),
//! Err(MyErrnoOrBlue::WouldBlock)
//! .substitute_for_ioerr_interrupted_with(|| 42)
//! .substitute_for_ioerr_wouldblock_with(|| 23));
//! assert_eq!(Err(MyErrnoOrBlue::FeelingBlue),
//! Err(MyErrnoOrBlue::FeelingBlue)
//! .substitute_for_ioerr_interrupted_with(|| 42)
//! .substitute_for_ioerr_wouldblock_with(|| 23));
//!
//! // And for specific errors without the function call
//! assert_eq!(Ok(42),
//! Err(MyErrnoOrBlue::Interrupted)
//! .substitute_for_ioerr_interrupted(42)
//! .substitute_for_ioerr_wouldblock(23));
//! assert_eq!(Ok(23),
//! Err(MyErrnoOrBlue::WouldBlock)
//! .substitute_for_ioerr_interrupted(42)
//! .substitute_for_ioerr_wouldblock(23));
//! assert_eq!(Err(MyErrnoOrBlue::FeelingBlue),
//! Err(MyErrnoOrBlue::FeelingBlue)
//! .substitute_for_ioerr_interrupted(42)
//! .substitute_for_ioerr_wouldblock(23));
//!
//! // handle_interrupted automates the process of handling ErrorKind::Interrupted
//! // in cases where the action should simply be rerun; it can handle any error type
//! // that implements TryIoErrorKind. It lets other errors and Ok(_) pass through.
//!
//! use rosenpass_util::io::handle_interrupted;
//!
//! let mut ctr = 0u32;
//! let mut simulate_io = || -> Result<u32, MyErrnoOrBlue> {
//! let r = match ctr % 6 {
//! 1 => Ok(42),
//! 3 => Err(MyErrnoOrBlue::FeelingBlue),
//! 5 => Err(MyErrnoOrBlue::WouldBlock),
//! _ => Err(MyErrnoOrBlue::Interrupted),
//! };
//! ctr += 1;
//! r
//! };
//!
//! assert_eq!(Ok(Some(42)), handle_interrupted(&mut simulate_io));
//! assert_eq!(Err(MyErrnoOrBlue::FeelingBlue), handle_interrupted(&mut simulate_io));
//! assert_eq!(Err(MyErrnoOrBlue::WouldBlock), handle_interrupted(&mut simulate_io));
//! // never returns None
//!
//! // nonblocking_handle_io_errors performs the same job, except that
//! // WouldBlock is substituted with Ok(None)
//!
//! use rosenpass_util::io::nonblocking_handle_io_errors;
//!
//! assert_eq!(Ok(Some(42)), nonblocking_handle_io_errors(&mut simulate_io));
//! assert_eq!(Err(MyErrnoOrBlue::FeelingBlue), nonblocking_handle_io_errors(&mut simulate_io));
//! assert_eq!(Ok(None), nonblocking_handle_io_errors(&mut simulate_io));
//!
//! Ok::<_, anyhow::Error>(())
//! ```
use std::{borrow::Borrow, io};
use anyhow::ensure;
use zerocopy::AsBytes;
/// Generic trait for accessing [std::io::Error::kind]
///
/// # Examples
///
/// See [tutorial in the module](self).
pub trait IoErrorKind {
/// Conversion to [std::io::Error::kind]
///
/// # Examples
///
/// See [tutorial in the module](self).
fn io_error_kind(&self) -> io::ErrorKind;
}
@@ -10,7 +266,17 @@ impl<T: Borrow<io::Error>> IoErrorKind for T {
}
}
/// Generic trait for accessing [std::io::Error::kind] where it may not be present
///
/// # Examples
///
/// See [tutorial in the module](self).
pub trait TryIoErrorKind {
/// Conversion to [std::io::Error::kind] where it may not be present
///
/// # Examples
///
/// See [tutorial in the module](self).
fn try_io_error_kind(&self) -> Option<io::ErrorKind>;
}
@@ -20,8 +286,19 @@ impl<T: IoErrorKind> TryIoErrorKind for T {
}
}
/// Helper for accessing [std::io::Error::kind] in Results
///
/// # Examples
///
/// See [tutorial in the module](self).
pub trait IoResultKindHintExt<T>: Sized {
/// Error type including the ErrorKind hint
type Error;
/// Helper for accessing [std::io::Error::kind] in Results
///
/// # Examples
///
/// See [tutorial in the module](self).
fn io_err_kind_hint(self) -> Result<T, (Self::Error, io::ErrorKind)>;
}
@@ -35,8 +312,19 @@ impl<T, E: IoErrorKind> IoResultKindHintExt<T> for Result<T, E> {
}
}
/// Helper for accessing [std::io::Error::kind] in Results where it may not be present
///
/// # Examples
///
/// See [tutorial in the module](self).
pub trait TryIoResultKindHintExt<T>: Sized {
/// Error type including the ErrorKind hint
type Error;
/// Helper for accessing [std::io::Error::kind] in Results where it may not be present
///
/// # Examples
///
/// See [tutorial in the module](self).
fn try_io_err_kind_hint(self) -> Result<T, (Self::Error, Option<io::ErrorKind>)>;
}
@@ -50,11 +338,105 @@ impl<T, E: TryIoErrorKind> TryIoResultKindHintExt<T> for Result<T, E> {
}
}
/// Helper for working with IO results using a method chaining style
///
/// # Examples
///
/// See [tutorial in the module](self).
pub trait SubstituteForIoErrorKindExt<T>: Sized {
/// Error type produced by methods in this trait
type Error;
/// Substitute errors with a certain [std::io::ErrorKind] by a value produced by a function
///
/// # Examples
///
/// See [tutorial in the module](self).
fn substitute_for_ioerr_kind_with<F: FnOnce() -> T>(
self,
kind: io::ErrorKind,
f: F,
) -> Result<T, Self::Error>;
/// Substitute errors with a certain [std::io::ErrorKind] by a value
///
/// # Examples
///
/// See [tutorial in the module](self).
fn substitute_for_ioerr_kind(self, kind: io::ErrorKind, v: T) -> Result<T, Self::Error> {
self.substitute_for_ioerr_kind_with(kind, || v)
}
/// Substitute errors with [std::io::ErrorKind] [std::io::ErrorKind::Interrupted] by a value
/// produced by a function
///
/// # Examples
///
/// See [tutorial in the module](self).
fn substitute_for_ioerr_interrupted_with<F: FnOnce() -> T>(
self,
f: F,
) -> Result<T, Self::Error> {
self.substitute_for_ioerr_kind_with(io::ErrorKind::Interrupted, f)
}
/// Substitute errors with [std::io::ErrorKind] [std::io::ErrorKind::Interrupted] by a value
///
/// # Examples
///
/// See [tutorial in the module](self).
fn substitute_for_ioerr_interrupted(self, v: T) -> Result<T, Self::Error> {
self.substitute_for_ioerr_interrupted_with(|| v)
}
/// Substitute errors with [std::io::ErrorKind] [std::io::ErrorKind::WouldBlock] by a value
/// produced by a function
///
/// # Examples
///
/// See [tutorial in the module](self).
fn substitute_for_ioerr_wouldblock_with<F: FnOnce() -> T>(
self,
f: F,
) -> Result<T, Self::Error> {
self.substitute_for_ioerr_kind_with(io::ErrorKind::WouldBlock, f)
}
/// Substitute errors with [std::io::ErrorKind] [std::io::ErrorKind::WouldBlock] by a value
///
/// # Examples
///
/// See [tutorial in the module](self).
fn substitute_for_ioerr_wouldblock(self, v: T) -> Result<T, Self::Error> {
self.substitute_for_ioerr_wouldblock_with(|| v)
}
}
impl<T, E: TryIoErrorKind> SubstituteForIoErrorKindExt<T> for Result<T, E> {
type Error = E;
fn substitute_for_ioerr_kind_with<F: FnOnce() -> T>(
self,
kind: io::ErrorKind,
f: F,
) -> Result<T, Self::Error> {
match self.try_io_err_kind_hint() {
Ok(v) => Ok(v),
Err((_, Some(k))) if k == kind => Ok(f()),
Err((e, _)) => Err(e),
}
}
}
/// Automatically handles `std::io::ErrorKind::Interrupted`.
///
/// - If there is no error (i.e. on `Ok(r)`), the function will return `Ok(Some(r))`
/// - `Interrupted` is handled internally, by retrying the IO operation
/// - Other errors are returned as is
///
/// # Examples
///
/// See [tutorial in the module](self).
pub fn handle_interrupted<R, E, F>(mut iofn: F) -> Result<Option<R>, E>
where
E: TryIoErrorKind,
@@ -76,6 +458,10 @@ where
/// - `Interrupted` is handled internally, by retrying the IO operation
/// - `WouldBlock` is handled by returning `Ok(None)`,
/// - Other errors are returned as is
///
/// # Examples
///
/// See [tutorial in the module](self).
pub fn nonblocking_handle_io_errors<R, E, F>(mut iofn: F) -> Result<Option<R>, E>
where
E: TryIoErrorKind,
@@ -92,6 +478,7 @@ where
}
}
/// [std::io::Read] extension trait providing a read call with [nonblocking_handle_io_errors] applied
pub trait ReadNonblockingWithBoringErrorsHandledExt {
/// Convenience wrapper using [nonblocking_handle_io_errors] with [std::io::Read]
fn read_nonblocking_with_boring_errors_handled(
@@ -108,3 +495,41 @@ impl<T: io::Read> ReadNonblockingWithBoringErrorsHandledExt for T {
nonblocking_handle_io_errors(|| self.read(buf))
}
}
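A minimal sketch of `read_nonblocking_with_boring_errors_handled`, assuming the trait is exported as `rosenpass_util::io::ReadNonblockingWithBoringErrorsHandledExt` and its signature mirrors `std::io::Read::read` wrapped in an `Option` (both assumptions); an in-memory reader never blocks, so the call simply yields the byte count:

```
use rosenpass_util::io::ReadNonblockingWithBoringErrorsHandledExt;

fn main() -> std::io::Result<()> {
    let mut src: &[u8] = b"hello";
    let mut buf = [0u8; 8];
    // Interrupted would be retried and WouldBlock mapped to Ok(None);
    // a plain in-memory reader produces neither, so we get the byte count.
    let res = src.read_nonblocking_with_boring_errors_handled(&mut buf)?;
    assert_eq!(res, Some(5));
    assert_eq!(&buf[..5], b"hello");
    Ok(())
}
```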
/// Extension trait for [std::io::Read] providing the ability to read
/// a buffer exactly
pub trait ReadExt {
/// Version of [std::io::Read::read_exact] that returns an error if there
/// is extra data left in the stream after the buffer has been filled
///
/// # Examples
///
/// ```
/// use rosenpass_util::io::ReadExt;
///
/// let mut buf = [0u8; 4];
///
/// // A source longer or shorter than the buffer yields an error
/// assert!(b"12345".as_slice().read_exact_til_end(&mut buf).is_err());
/// assert!(b"123".as_slice().read_exact_til_end(&mut buf).is_err());
///
/// // A source of precisely the buffer's length leads to a successful read
/// assert!(b"1234".as_slice().read_exact_til_end(&mut buf).is_ok());
/// assert_eq!(b"1234", &buf);
/// ```
fn read_exact_til_end(&mut self, buf: &mut [u8]) -> anyhow::Result<()>;
}
impl<T> ReadExt for T
where
T: std::io::Read,
{
fn read_exact_til_end(&mut self, buf: &mut [u8]) -> anyhow::Result<()> {
self.read_exact(buf)?;
ensure!(
self.read(&mut [0u8; 8])? == 0,
"Read source longer than buffer"
);
Ok(())
}
}

View File

@@ -8,28 +8,37 @@ use crate::{
result::ensure_or,
};
/// Size in bytes of a message header carrying length information
pub const HEADER_SIZE: usize = std::mem::size_of::<u64>();
#[derive(Error, Debug)]
/// Error enum to represent various boundary sanity check failures during buffer operations
pub enum SanityError {
#[error("Offset is out of read buffer bounds")]
/// Error indicating that the given offset exceeds the bounds of the read buffer
OutOfBufferBounds,
#[error("Offset is out of message buffer bounds")]
/// Error indicating that the given offset exceeds the bounds of the message buffer
OutOfMessageBounds,
}
#[derive(Error, Debug)]
#[error("Message too large ({msg_size} bytes) for buffer ({buf_size} bytes)")]
/// Error indicating that message exceeds available buffer space
pub struct MessageTooLargeError {
msg_size: usize,
buf_size: usize,
}
impl MessageTooLargeError {
/// Creates a new MessageTooLargeError with the given message and buffer sizes
pub fn new(msg_size: usize, buf_size: usize) -> Self {
Self { msg_size, buf_size }
}
/// Ensures that the message size fits within the buffer size
///
/// Returns Ok(()) if the message fits, otherwise returns an error with size details
pub fn ensure(msg_size: usize, buf_size: usize) -> Result<(), Self> {
let err = MessageTooLargeError { msg_size, buf_size };
ensure_or(msg_size <= buf_size, err)
@@ -37,12 +46,16 @@ impl MessageTooLargeError {
}
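A small sketch of `MessageTooLargeError::ensure`, assuming the type is exported from `rosenpass_util::length_prefix_encoding::decoder` (the path is an assumption):

```
use rosenpass_util::length_prefix_encoding::decoder::MessageTooLargeError;

fn main() {
    // A 16-byte message fits into a 32-byte buffer...
    assert!(MessageTooLargeError::ensure(16, 32).is_ok());
    // ...but not into an 8-byte one.
    assert!(MessageTooLargeError::ensure(16, 8).is_err());
}
```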
#[derive(Debug)]
/// Return type for ReadFromIo operations that contains the number of bytes read and an optional message slice
pub struct ReadFromIoReturn<'a> {
/// Number of bytes read from the input
pub bytes_read: usize,
/// Optional slice containing the complete message, if one was read
pub message: Option<&'a mut [u8]>,
}
impl<'a> ReadFromIoReturn<'a> {
/// Creates a new ReadFromIoReturn with the given number of bytes read and optional message slice.
pub fn new(bytes_read: usize, message: Option<&'a mut [u8]>) -> Self {
Self {
bytes_read,
@@ -52,9 +65,12 @@ impl<'a> ReadFromIoReturn<'a> {
}
#[derive(Debug, Error)]
/// An enum representing errors that can occur during read operations from I/O
pub enum ReadFromIoError {
/// Error occurred while reading from the underlying I/O stream
#[error("Error reading from the underlying stream")]
IoError(#[from] io::Error),
/// Error occurred because message size exceeded buffer capacity
#[error("Message size out of buffer bounds")]
MessageTooLargeError(#[from] MessageTooLargeError),
}
@@ -69,6 +85,10 @@ impl TryIoErrorKind for ReadFromIoError {
}
#[derive(Debug, Default, Clone)]
/// A decoder for length-prefixed messages
///
/// This struct provides functionality to decode messages that are prefixed with their length.
/// It maintains internal state for header information, the message buffer, and current offset.
pub struct LengthPrefixDecoder<Buf: BorrowMut<[u8]>> {
header: [u8; HEADER_SIZE],
buf: Buf,
@@ -76,25 +96,33 @@ pub struct LengthPrefixDecoder<Buf: BorrowMut<[u8]>> {
}
impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
/// Creates a new LengthPrefixDecoder with the given buffer
pub fn new(buf: Buf) -> Self {
let header = Default::default();
let off = 0;
Self { header, buf, off }
}
/// Clears and zeroes all internal state
pub fn clear(&mut self) {
self.zeroize()
}
/// Creates a new LengthPrefixDecoder from its component parts
pub fn from_parts(header: [u8; HEADER_SIZE], buf: Buf, off: usize) -> Self {
Self { header, buf, off }
}
/// Consumes the decoder and returns its component parts
pub fn into_parts(self) -> ([u8; HEADER_SIZE], Buf, usize) {
let Self { header, buf, off } = self;
(header, buf, off)
}
/// Reads a complete message from the given reader into the decoder.
///
/// Retries on interrupts and returns the decoded message buffer on success.
/// Returns an error if the read fails or encounters an unexpected EOF.
pub fn read_all_from_stdio<R: io::Read>(
&mut self,
mut r: R,
@@ -125,6 +153,7 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
}
}
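A minimal decoding sketch, assuming the decoder is exported as `rosenpass_util::length_prefix_encoding::decoder::LengthPrefixDecoder` and that `read_all_from_stdio` returns the decoded message slice as documented above (both assumptions); the wire format is an 8-byte little-endian length header followed by the payload:

```
use rosenpass_util::length_prefix_encoding::decoder::LengthPrefixDecoder;

fn main() -> anyhow::Result<()> {
    // 8-byte little-endian length header, then the payload
    let mut wire = Vec::new();
    wire.extend_from_slice(&5u64.to_le_bytes());
    wire.extend_from_slice(b"hello");

    // The decoder reads into a caller-provided message buffer
    let mut dec = LengthPrefixDecoder::new([0u8; 64]);
    let msg = dec.read_all_from_stdio(&wire[..])?;
    assert_eq!(&msg[..], &b"hello"[..]);
    Ok(())
}
```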
/// Reads from the given reader into the decoder's internal buffers
pub fn read_from_stdio<R: io::Read>(
&mut self,
mut r: R,
@@ -150,6 +179,7 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
})
}
/// Gets the next buffer slice that can be written to
pub fn next_slice_to_write_to(&mut self) -> Result<Option<&mut [u8]>, MessageTooLargeError> {
fn some_if_nonempty(buf: &mut [u8]) -> Option<&mut [u8]> {
match buf.is_empty() {
@@ -172,6 +202,7 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
Ok(None)
}
/// Advances the internal offset by the specified number of bytes
pub fn advance(&mut self, count: usize) -> Result<(), SanityError> {
let off = self.off + count;
let msg_off = off.saturating_sub(HEADER_SIZE);
@@ -189,6 +220,7 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
Ok(())
}
/// Ensures that the internal message buffer is large enough for the message size in the header
pub fn ensure_sufficient_msg_buffer(&self) -> Result<(), MessageTooLargeError> {
let buf_size = self.message_buffer().len();
let msg_size = match self.get_header() {
@@ -198,43 +230,53 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
MessageTooLargeError::ensure(msg_size, buf_size)
}
/// Returns a reference to the header buffer
pub fn header_buffer(&self) -> &[u8] {
&self.header[..]
}
/// Returns a mutable reference to the header buffer
pub fn header_buffer_mut(&mut self) -> &mut [u8] {
&mut self.header[..]
}
/// Returns a reference to the message buffer
pub fn message_buffer(&self) -> &[u8] {
self.buf.borrow()
}
/// Returns a mutable reference to the message buffer
pub fn message_buffer_mut(&mut self) -> &mut [u8] {
self.buf.borrow_mut()
}
/// Returns the number of bytes read so far
pub fn bytes_read(&self) -> &usize {
&self.off
}
/// Consumes the decoder and returns just the message buffer
pub fn into_message_buffer(self) -> Buf {
let Self { buf, .. } = self;
buf
}
/// Returns the current offset into the header buffer
pub fn header_buffer_offset(&self) -> usize {
min(self.off, HEADER_SIZE)
}
/// Returns the current offset into the message buffer
pub fn message_buffer_offset(&self) -> usize {
self.off.saturating_sub(HEADER_SIZE)
}
/// Returns whether a complete header has been read
pub fn has_header(&self) -> bool {
self.header_buffer_offset() == HEADER_SIZE
}
/// Returns whether a complete message has been read
pub fn has_message(&self) -> Result<bool, MessageTooLargeError> {
self.ensure_sufficient_msg_buffer()?;
let msg_size = match self.get_header() {
@@ -244,46 +286,55 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
Ok(self.message_buffer_avail().len() == msg_size)
}
/// Returns a slice of the available data in the header buffer
pub fn header_buffer_avail(&self) -> &[u8] {
let off = self.header_buffer_offset();
&self.header_buffer()[..off]
}
/// Returns a mutable slice of the available data in the header buffer
pub fn header_buffer_avail_mut(&mut self) -> &mut [u8] {
let off = self.header_buffer_offset();
&mut self.header_buffer_mut()[..off]
}
/// Returns a slice of the remaining space in the header buffer
pub fn header_buffer_left(&self) -> &[u8] {
let off = self.header_buffer_offset();
&self.header_buffer()[off..]
}
/// Returns a mutable slice of the remaining space in the header buffer
pub fn header_buffer_left_mut(&mut self) -> &mut [u8] {
let off = self.header_buffer_offset();
&mut self.header_buffer_mut()[off..]
}
/// Returns a slice of the available data in the message buffer
pub fn message_buffer_avail(&self) -> &[u8] {
let off = self.message_buffer_offset();
&self.message_buffer()[..off]
}
/// Returns a mutable slice of the available data in the message buffer
pub fn message_buffer_avail_mut(&mut self) -> &mut [u8] {
let off = self.message_buffer_offset();
&mut self.message_buffer_mut()[..off]
}
/// Returns a slice of the remaining space in the message buffer
pub fn message_buffer_left(&self) -> &[u8] {
let off = self.message_buffer_offset();
&self.message_buffer()[off..]
}
/// Returns a mutable slice of the remaining space in the message buffer
pub fn message_buffer_left_mut(&mut self) -> &mut [u8] {
let off = self.message_buffer_offset();
&mut self.message_buffer_mut()[off..]
}
/// Returns the message size from the header if available
pub fn get_header(&self) -> Option<usize> {
match self.header_buffer_offset() == HEADER_SIZE {
false => None,
@@ -291,19 +342,23 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
}
}
/// Returns the size of the message if header is available
pub fn message_size(&self) -> Option<usize> {
self.get_header()
}
/// Returns the total size of the encoded message including header
pub fn encoded_message_bytes(&self) -> Option<usize> {
self.message_size().map(|sz| sz + HEADER_SIZE)
}
/// Returns a slice of the message fragment if available
pub fn message_fragment(&self) -> Result<Option<&[u8]>, MessageTooLargeError> {
self.ensure_sufficient_msg_buffer()?;
Ok(self.message_size().map(|sz| &self.message_buffer()[..sz]))
}
/// Returns a mutable slice of the message fragment if available
pub fn message_fragment_mut(&mut self) -> Result<Option<&mut [u8]>, MessageTooLargeError> {
self.ensure_sufficient_msg_buffer()?;
Ok(self
@@ -311,12 +366,14 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
.map(|sz| &mut self.message_buffer_mut()[..sz]))
}
/// Returns a slice of the available data in the message fragment
pub fn message_fragment_avail(&self) -> Result<Option<&[u8]>, MessageTooLargeError> {
let off = self.message_buffer_avail().len();
self.message_fragment()
.map(|frag| frag.map(|frag| &frag[..off]))
}
/// Returns a mutable slice of the available data in the message fragment
pub fn message_fragment_avail_mut(
&mut self,
) -> Result<Option<&mut [u8]>, MessageTooLargeError> {
@@ -325,24 +382,28 @@ impl<Buf: BorrowMut<[u8]>> LengthPrefixDecoder<Buf> {
.map(|frag| frag.map(|frag| &mut frag[..off]))
}
/// Returns a slice of the remaining space in the message fragment
pub fn message_fragment_left(&self) -> Result<Option<&[u8]>, MessageTooLargeError> {
let off = self.message_buffer_avail().len();
self.message_fragment()
.map(|frag| frag.map(|frag| &frag[off..]))
}
/// Returns a mutable slice of the remaining space in the message fragment
pub fn message_fragment_left_mut(&mut self) -> Result<Option<&mut [u8]>, MessageTooLargeError> {
let off = self.message_buffer_avail().len();
self.message_fragment_mut()
.map(|frag| frag.map(|frag| &mut frag[off..]))
}
/// Returns a slice of the complete message if available
pub fn message(&self) -> Result<Option<&[u8]>, MessageTooLargeError> {
let sz = self.message_size();
self.message_fragment_avail()
.map(|frag_opt| frag_opt.and_then(|frag| (frag.len() == sz?).then_some(frag)))
}
/// Returns a mutable slice of the complete message if available
pub fn message_mut(&mut self) -> Result<Option<&mut [u8]>, MessageTooLargeError> {
let sz = self.message_size();
self.message_fragment_avail_mut()

View File

@@ -9,46 +9,61 @@ use zeroize::Zeroize;
use crate::{io::IoResultKindHintExt, result::ensure_or};
/// Size of the length prefix header in bytes - equal to the size of a u64
pub const HEADER_SIZE: usize = std::mem::size_of::<u64>();
#[derive(Error, Debug, Clone, Copy)]
#[error("Write position is out of buffer bounds")]
/// Error type indicating that a write position is beyond the boundaries of the allocated buffer
pub struct PositionOutOfBufferBounds;
#[derive(Error, Debug, Clone, Copy)]
#[error("Write position is out of message bounds")]
/// Error type indicating that a write position is beyond the boundaries of the message
pub struct PositionOutOfMessageBounds;
#[derive(Error, Debug, Clone, Copy)]
#[error("Write position is out of header bounds")]
/// Error type indicating that a write position is beyond the boundaries of the header
pub struct PositionOutOfHeaderBounds;
#[derive(Error, Debug, Clone, Copy)]
#[error("Message length is bigger than buffer length")]
/// Error type indicating that the message length is larger than the available buffer space
pub struct MessageTooLarge;
#[derive(Error, Debug, Clone, Copy)]
/// Error type for message length sanity checks
pub enum MessageLenSanityError {
/// Error indicating position is beyond message boundaries
#[error("{0:?}")]
PositionOutOfMessageBounds(#[from] PositionOutOfMessageBounds),
/// Error indicating message length exceeds buffer capacity
#[error("{0:?}")]
MessageTooLarge(#[from] MessageTooLarge),
}
#[derive(Error, Debug, Clone, Copy)]
/// Error type for position bounds checking
pub enum PositionSanityError {
/// Error indicating position is beyond message boundaries
#[error("{0:?}")]
PositionOutOfMessageBounds(#[from] PositionOutOfMessageBounds),
/// Error indicating position is beyond buffer boundaries
#[error("{0:?}")]
PositionOutOfBufferBounds(#[from] PositionOutOfBufferBounds),
}
#[derive(Error, Debug, Clone, Copy)]
/// Error type combining all sanity check errors
pub enum SanityError {
/// Error indicating position is beyond message boundaries
#[error("{0:?}")]
PositionOutOfMessageBounds(#[from] PositionOutOfMessageBounds),
/// Error indicating position is beyond buffer boundaries
#[error("{0:?}")]
PositionOutOfBufferBounds(#[from] PositionOutOfBufferBounds),
/// Error indicating message length exceeds buffer capacity
#[error("{0:?}")]
MessageTooLarge(#[from] MessageTooLarge),
}
@@ -86,12 +101,16 @@ impl From<PositionSanityError> for SanityError {
}
}
/// Result of a write operation on an IO stream
pub struct WriteToIoReturn {
/// Number of bytes successfully written in this operation
pub bytes_written: usize,
/// Whether the write operation has completed fully
pub done: bool,
}
#[derive(Clone, Copy, Debug)]
/// Length-prefixed encoder that adds a length header to data before writing
pub struct LengthPrefixEncoder<Buf: Borrow<[u8]>> {
buf: Buf,
header: [u8; HEADER_SIZE],
@@ -99,6 +118,7 @@ pub struct LengthPrefixEncoder<Buf: Borrow<[u8]>> {
}
impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
/// Creates a new encoder from a buffer
pub fn from_buffer(buf: Buf) -> Self {
let (header, pos) = ([0u8; HEADER_SIZE], 0);
let mut r = Self { buf, header, pos };
@@ -106,6 +126,7 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
r
}
/// Creates a new encoder using the full buffer as a message
pub fn from_message(msg: Buf) -> Self {
let mut r = Self::from_buffer(msg);
r.restart_write_with_new_message(r.buffer_bytes().len())
@@ -113,23 +134,27 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
r
}
/// Creates a new encoder using part of the buffer as a message
pub fn from_short_message(msg: Buf, len: usize) -> Result<Self, MessageLenSanityError> {
let mut r = Self::from_message(msg);
r.set_message_len(len)?;
Ok(r)
}
/// Creates a new encoder from buffer, message length and write position
pub fn from_parts(buf: Buf, len: usize, pos: usize) -> Result<Self, SanityError> {
let mut r = Self::from_buffer(buf);
r.set_msg_len_and_position(len, pos)?;
Ok(r)
}
/// Consumes the encoder and returns the underlying buffer
pub fn into_buffer(self) -> Buf {
let Self { buf, .. } = self;
buf
}
/// Consumes the encoder and returns buffer, message length and write position
pub fn into_parts(self) -> (Buf, usize, usize) {
let len = self.message_len();
let pos = self.writing_position();
@@ -137,11 +162,13 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
(buf, len, pos)
}
/// Resets the encoder state
pub fn clear(&mut self) {
self.set_msg_len_and_position(0, 0).unwrap();
self.set_message_offset(0).unwrap();
}
/// Writes the full message to an IO writer, retrying on interrupts
pub fn write_all_to_stdio<W: io::Write>(&mut self, mut w: W) -> io::Result<()> {
use io::ErrorKind as K;
loop {
@@ -158,6 +185,7 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
}
}
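The matching encoding sketch, assuming the encoder is exported as `rosenpass_util::length_prefix_encoding::encoder::LengthPrefixEncoder` (the path is an assumption):

```
use rosenpass_util::length_prefix_encoding::encoder::LengthPrefixEncoder;

fn main() -> std::io::Result<()> {
    let mut enc = LengthPrefixEncoder::from_message(b"hello".as_slice());

    let mut wire = Vec::new();
    enc.write_all_to_stdio(&mut wire)?;

    // 8-byte little-endian length header followed by the payload
    assert_eq!(&wire[..8], &5u64.to_le_bytes()[..]);
    assert_eq!(&wire[8..], &b"hello"[..]);
    Ok(())
}
```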
/// Writes the next chunk of data to an IO writer and returns number of bytes written and completion status
pub fn write_to_stdio<W: io::Write>(&mut self, mut w: W) -> io::Result<WriteToIoReturn> {
if self.exhausted() {
return Ok(WriteToIoReturn {
@@ -177,10 +205,12 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
})
}
/// Resets write position to start for restarting output
pub fn restart_write(&mut self) {
self.set_writing_position(0).unwrap()
}
/// Resets write position to start and updates message length for restarting with new data
pub fn restart_write_with_new_message(
&mut self,
len: usize,
@@ -189,6 +219,7 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
.map_err(|e| e.try_into().unwrap())
}
/// Returns the next unwritten slice of data to write from header or message
pub fn next_slice_to_write(&self) -> &[u8] {
let s = self.header_left();
if !s.is_empty() {
@@ -203,66 +234,82 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
&[]
}
/// Returns true if all data including header and message has been written
pub fn exhausted(&self) -> bool {
self.next_slice_to_write().is_empty()
}
/// Returns slice containing full message data
pub fn message(&self) -> &[u8] {
&self.buffer_bytes()[..self.message_len()]
}
/// Returns slice containing written portion of length header
pub fn header_written(&self) -> &[u8] {
&self.header()[..self.header_offset()]
}
/// Returns slice containing unwritten portion of length header
pub fn header_left(&self) -> &[u8] {
&self.header()[self.header_offset()..]
}
/// Returns slice containing written portion of message data
pub fn message_written(&self) -> &[u8] {
&self.message()[..self.message_offset()]
}
/// Returns slice containing unwritten portion of message data
pub fn message_left(&self) -> &[u8] {
&self.message()[self.message_offset()..]
}
/// Returns reference to underlying buffer
pub fn buf(&self) -> &Buf {
&self.buf
}
/// Returns slice view of underlying buffer bytes
pub fn buffer_bytes(&self) -> &[u8] {
self.buf().borrow()
}
/// Decodes and returns length header value as u64
pub fn decode_header(&self) -> u64 {
u64::from_le_bytes(self.header)
}
/// Returns slice containing raw length header bytes
pub fn header(&self) -> &[u8; HEADER_SIZE] {
&self.header
}
/// Returns decoded message length from header
pub fn message_len(&self) -> usize {
self.decode_header() as usize
}
/// Returns total encoded size including header and message bytes
pub fn encoded_message_bytes(&self) -> usize {
self.message_len() + HEADER_SIZE
}
/// Returns current write position within header and message
pub fn writing_position(&self) -> usize {
self.pos
}
/// Returns write offset within length header bytes
pub fn header_offset(&self) -> usize {
min(self.writing_position(), HEADER_SIZE)
}
/// Returns write offset within message bytes
pub fn message_offset(&self) -> usize {
self.writing_position().saturating_sub(HEADER_SIZE)
}
/// Sets new length header bytes with bounds checking
pub fn set_header(&mut self, header: [u8; HEADER_SIZE]) -> Result<(), MessageLenSanityError> {
self.offset_transaction(|t| {
t.header = header;
@@ -272,14 +319,17 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
})
}
/// Encodes and sets length header value with bounds checking
pub fn encode_and_set_header(&mut self, header: u64) -> Result<(), MessageLenSanityError> {
self.set_header(header.to_le_bytes())
}
/// Sets message length with bounds checking
pub fn set_message_len(&mut self, len: usize) -> Result<(), MessageLenSanityError> {
self.encode_and_set_header(len as u64)
}
/// Sets write position with message and buffer bounds checking
pub fn set_writing_position(&mut self, pos: usize) -> Result<(), PositionSanityError> {
self.offset_transaction(|t| {
t.pos = pos;
@@ -289,20 +339,24 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
})
}
/// Sets write position within header bytes with bounds checking
pub fn set_header_offset(&mut self, off: usize) -> Result<(), PositionOutOfHeaderBounds> {
ensure_or(off <= HEADER_SIZE, PositionOutOfHeaderBounds)?;
self.set_writing_position(off).unwrap();
Ok(())
}
/// Sets write position within message bytes with bounds checking
pub fn set_message_offset(&mut self, off: usize) -> Result<(), PositionSanityError> {
self.set_writing_position(off + HEADER_SIZE)
}
/// Advances write position by specified offset with bounds checking
pub fn advance(&mut self, off: usize) -> Result<(), PositionSanityError> {
self.set_writing_position(self.writing_position() + off)
}
/// Sets message length and write position with bounds checking
pub fn set_msg_len_and_position(&mut self, len: usize, pos: usize) -> Result<(), SanityError> {
self.pos = 0;
self.set_message_len(len)?;
@@ -347,24 +401,29 @@ impl<Buf: Borrow<[u8]>> LengthPrefixEncoder<Buf> {
}
impl<Buf: BorrowMut<[u8]>> LengthPrefixEncoder<Buf> {
/// Gets a mutable reference to the underlying buffer
pub fn buf_mut(&mut self) -> &mut Buf {
&mut self.buf
}
/// Gets the buffer as mutable bytes
pub fn buffer_bytes_mut(&mut self) -> &mut [u8] {
self.buf.borrow_mut()
}
/// Gets a mutable reference to the message slice
pub fn message_mut(&mut self) -> &mut [u8] {
let off = self.message_len();
&mut self.buffer_bytes_mut()[..off]
}
/// Gets a mutable reference to the written portion of the message
pub fn message_written_mut(&mut self) -> &mut [u8] {
let off = self.message_offset();
&mut self.message_mut()[..off]
}
/// Gets a mutable reference to the unwritten portion of the message
pub fn message_left_mut(&mut self) -> &mut [u8] {
let off = self.message_offset();
&mut self.message_mut()[off..]

View File

@@ -1,2 +1,4 @@
/// Module that handles decoding functionality
pub mod decoder;
/// Module that handles encoding functionality
pub mod encoder;

View File

@@ -1,16 +1,38 @@
#![warn(missing_docs)]
#![warn(clippy::missing_docs_in_private_items)]
#![recursion_limit = "256"]
//! Core utility functions and types used across the codebase.
/// Base64 encoding and decoding functionality.
pub mod b64;
/// Build-time utilities and macros.
pub mod build;
/// Control flow abstractions and utilities.
pub mod controlflow;
/// File descriptor utilities.
pub mod fd;
/// File system operations and handling.
pub mod file;
/// Functional programming utilities.
pub mod functional;
/// Input/output operations.
pub mod io;
/// Length prefix encoding schemes implementation.
pub mod length_prefix_encoding;
/// Memory manipulation and allocation utilities.
pub mod mem;
/// MIO integration utilities.
pub mod mio;
/// Ordering helpers.
pub mod ord;
/// Extended Option type functionality.
pub mod option;
/// Extended Result type functionality.
pub mod result;
/// Time and duration utilities.
pub mod time;
/// Type-level numbers and arithmetic.
pub mod typenum;
/// Zero-copy serialization utilities.
pub mod zerocopy;
/// Memory wiping utilities.
pub mod zeroize;

View File

@@ -22,6 +22,7 @@ macro_rules! cat {
}
// TODO: consistent inout ordering
/// Copy all bytes from `src` to `dst`. The lengths must match.
pub fn cpy<T: BorrowMut<[u8]> + ?Sized, F: Borrow<[u8]> + ?Sized>(src: &F, dst: &mut T) {
dst.borrow_mut().copy_from_slice(src.borrow());
}
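A tiny sketch of `cpy`, assuming it is exported from `rosenpass_util::mem`; the destination must have exactly the source's length, since the underlying `copy_from_slice` panics otherwise:

```
use rosenpass_util::mem::cpy;

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    cpy(&src, &mut dst); // panics if the lengths differ
    assert_eq!(src, dst);
}
```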
@@ -41,11 +42,13 @@ pub struct Forgetting<T> {
}
impl<T> Forgetting<T> {
/// Creates a new `Forgetting<T>` instance containing the given value.
pub fn new(value: T) -> Self {
let value = Some(value);
Self { value }
}
/// Extracts and returns the contained value, consuming self.
pub fn extract(mut self) -> T {
let mut value = None;
swap(&mut value, &mut self.value);
@@ -92,3 +95,71 @@ impl<T> Drop for Forgetting<T> {
forget(value)
}
}
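A brief sketch of how `Forgetting<T>` behaves, assuming it is exported from `rosenpass_util::mem` (the path is an assumption):

```
use rosenpass_util::mem::Forgetting;

fn main() {
    // extract() hands the value back out, so it is dropped normally afterwards
    let kept = Forgetting::new(String::from("kept")).extract();
    assert_eq!(kept, "kept");

    // Dropping the wrapper without extract() skips the inner destructor,
    // leaking the value instead of freeing it
    let leaked = Forgetting::new(String::from("leaked"));
    drop(leaked);
}
```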
/// A trait that provides a method to discard a value without explicitly handling its results.
pub trait DiscardResultExt {
/// Consumes and discards a value without doing anything with it.
fn discard_result(self);
}
impl<T> DiscardResultExt for T {
fn discard_result(self) {}
}
/// Trait that provides a method to explicitly forget values.
pub trait ForgetExt {
/// Consumes and forgets a value, preventing its destructor from running.
fn forget(self);
}
impl<T> ForgetExt for T {
fn forget(self) {
std::mem::forget(self)
}
}
/// Extension trait that provides methods for swapping values.
pub trait SwapWithExt {
/// Takes ownership of `other` and swaps its value with `self`, returning the original value.
fn swap_with(&mut self, other: Self) -> Self;
/// Swaps the values between `self` and `other` in place.
fn swap_with_mut(&mut self, other: &mut Self);
}
impl<T> SwapWithExt for T {
fn swap_with(&mut self, mut other: Self) -> Self {
self.swap_with_mut(&mut other);
other
}
fn swap_with_mut(&mut self, other: &mut Self) {
std::mem::swap(self, other)
}
}
/// Extension trait that provides methods for swapping values with default values.
pub trait SwapWithDefaultExt {
/// Takes the current value and replaces it with the default value, returning the original.
fn swap_with_default(&mut self) -> Self;
}
impl<T: Default> SwapWithDefaultExt for T {
fn swap_with_default(&mut self) -> Self {
self.swap_with(Self::default())
}
}
/// Extension trait that provides a method to explicitly move values.
pub trait MoveExt {
/// Deliberately move the value
///
/// Usually employed to enforce an object being
/// dropped after use.
fn move_here(self) -> Self;
}
impl<T: Sized> MoveExt for T {
fn move_here(self) -> Self {
self
}
}
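The extension traits above are thin ergonomic wrappers around `std::mem`; a combined usage sketch (module path again assumed to be `rosenpass_util::mem`):

use rosenpass_util::mem::{DiscardResultExt, MoveExt, SwapWithDefaultExt, SwapWithExt}; // assumed path

fn extension_trait_examples() {
    // swap_with: install a new value, get the old one back.
    let mut greeting = String::from("hello");
    let old = greeting.swap_with(String::from("goodbye"));
    assert_eq!((old.as_str(), greeting.as_str()), ("hello", "goodbye"));

    // swap_with_default: take the value out, leaving Default::default() behind.
    let mut buf = vec![1u8, 2, 3];
    let taken = buf.swap_with_default();
    assert!(buf.is_empty() && taken == [1, 2, 3]);

    // move_here: force a move, making the point of drop explicit.
    let moved = taken.move_here();

    // discard_result: explicitly throw a value away instead of binding it to `_`.
    moved.discard_result();
}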

View File

@@ -1,39 +0,0 @@
use mio::net::{UnixListener, UnixStream};
use rustix::fd::RawFd;
use crate::fd::claim_fd;
pub mod interest {
use mio::Interest;
pub const R: Interest = Interest::READABLE;
pub const W: Interest = Interest::WRITABLE;
pub const RW: Interest = R.add(W);
}
pub trait UnixListenerExt: Sized {
fn claim_fd(fd: RawFd) -> anyhow::Result<Self>;
}
impl UnixListenerExt for UnixListener {
fn claim_fd(fd: RawFd) -> anyhow::Result<Self> {
use std::os::unix::net::UnixListener as StdUnixListener;
let sock = StdUnixListener::from(claim_fd(fd)?);
sock.set_nonblocking(true)?;
Ok(UnixListener::from_std(sock))
}
}
pub trait UnixStreamExt: Sized {
fn claim_fd(fd: RawFd) -> anyhow::Result<Self>;
}
impl UnixStreamExt for UnixStream {
fn claim_fd(fd: RawFd) -> anyhow::Result<Self> {
use std::os::unix::net::UnixStream as StdUnixStream;
let sock = StdUnixStream::from(claim_fd(fd)?);
sock.set_nonblocking(true)?;
Ok(UnixStream::from_std(sock))
}
}

68
util/src/mio/mio.rs Normal file
View File

@@ -0,0 +1,68 @@
use mio::net::{UnixListener, UnixStream};
use std::os::fd::{OwnedFd, RawFd};
use crate::{
fd::{claim_fd, claim_fd_inplace},
result::OkExt,
};
/// Module containing I/O interest flags for Unix operations
pub mod interest {
use mio::Interest;
/// Interest flag indicating readability
pub const R: Interest = Interest::READABLE;
/// Interest flag indicating writability
pub const W: Interest = Interest::WRITABLE;
/// Interest flag indicating both readability and writability
pub const RW: Interest = R.add(W);
}
/// Extension trait providing additional functionality for Unix listeners
pub trait UnixListenerExt: Sized {
/// Creates a new Unix listener by claiming ownership of a raw file descriptor
fn claim_fd(fd: RawFd) -> anyhow::Result<Self>;
}
impl UnixListenerExt for UnixListener {
fn claim_fd(fd: RawFd) -> anyhow::Result<Self> {
use std::os::unix::net::UnixListener as StdUnixListener;
let sock = StdUnixListener::from(claim_fd(fd)?);
sock.set_nonblocking(true)?;
Ok(UnixListener::from_std(sock))
}
}
/// Extension trait providing additional functionality for Unix streams
pub trait UnixStreamExt: Sized {
/// Creates a new Unix stream from an owned file descriptor
fn from_fd(fd: OwnedFd) -> anyhow::Result<Self>;
/// Claims ownership of a raw file descriptor and creates a new Unix stream
fn claim_fd(fd: RawFd) -> anyhow::Result<Self>;
/// Claims ownership of a raw file descriptor in place and creates a new Unix stream
fn claim_fd_inplace(fd: RawFd) -> anyhow::Result<Self>;
}
impl UnixStreamExt for UnixStream {
fn from_fd(fd: OwnedFd) -> anyhow::Result<Self> {
use std::os::unix::net::UnixStream as StdUnixStream;
#[cfg(target_os = "linux")] // TODO: We should support this on other platforms
crate::fd::GetUnixSocketType::demand_unix_stream_socket(&fd)?;
let sock = StdUnixStream::from(fd);
sock.set_nonblocking(true)?;
UnixStream::from_std(sock).ok()
}
fn claim_fd(fd: RawFd) -> anyhow::Result<Self> {
Self::from_fd(claim_fd(fd)?)
}
fn claim_fd_inplace(fd: RawFd) -> anyhow::Result<Self> {
Self::from_fd(claim_fd_inplace(fd)?)
}
}
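To show how the `claim_fd` helpers and the `interest` constants compose with mio's event loop, a hedged sketch; the raw descriptor is a placeholder (e.g. one inherited from a supervisor), and `rosenpass_util::mio` re-exports these items via the `mod.rs` below:

use mio::{Events, Poll, Token};
use rosenpass_util::mio::{interest, UnixListenerExt};

fn register_inherited_listener(raw_fd: std::os::fd::RawFd) -> anyhow::Result<()> {
    // Claim the fd (taking ownership and switching it to non-blocking mode) ...
    let mut listener = mio::net::UnixListener::claim_fd(raw_fd)?;
    // ... and register it for both readable and writable events.
    let mut poll = Poll::new()?;
    poll.registry().register(&mut listener, Token(0), interest::RW)?;
    let mut events = Events::with_capacity(16);
    poll.poll(&mut events, None)?; // blocks until the listener becomes ready
    Ok(())
}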

13
util/src/mio/mod.rs Normal file
View File

@@ -0,0 +1,13 @@
#[allow(clippy::module_inception)]
mod mio;
pub use mio::*;
#[cfg(feature = "experiment_file_descriptor_passing")]
mod uds_send_fd;
#[cfg(feature = "experiment_file_descriptor_passing")]
pub use uds_send_fd::*;
#[cfg(feature = "experiment_file_descriptor_passing")]
mod uds_recv_fd;
#[cfg(feature = "experiment_file_descriptor_passing")]
pub use uds_recv_fd::*;

134
util/src/mio/uds_recv_fd.rs Normal file
View File

@@ -0,0 +1,134 @@
use std::{
borrow::{Borrow, BorrowMut},
collections::VecDeque,
io::Read,
marker::PhantomData,
os::fd::{FromRawFd, OwnedFd},
};
use uds::UnixStreamExt as FdPassingExt;
use crate::fd::{claim_fd_inplace, IntoStdioErr};
/// A wrapper around a socket that combines reading from the socket with tracking
/// received file descriptors. Limits the maximum number of file descriptors that
/// can be received in a single read operation via the `MAX_FDS` parameter.
pub struct ReadWithFileDescriptors<const MAX_FDS: usize, Sock, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
BorrowSock: Borrow<Sock>,
BorrowFds: BorrowMut<VecDeque<OwnedFd>>,
{
socket: BorrowSock,
fds: BorrowFds,
_sock_dummy: PhantomData<Sock>,
}
impl<const MAX_FDS: usize, Sock, BorrowSock, BorrowFds>
ReadWithFileDescriptors<MAX_FDS, Sock, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
BorrowSock: Borrow<Sock>,
BorrowFds: BorrowMut<VecDeque<OwnedFd>>,
{
/// Creates a new `ReadWithFileDescriptors` by wrapping a socket and a file
/// descriptor queue.
pub fn new(socket: BorrowSock, fds: BorrowFds) -> Self {
let _sock_dummy = PhantomData;
Self {
socket,
fds,
_sock_dummy,
}
}
/// Consumes the wrapper and returns the underlying socket and file
/// descriptor queue.
pub fn into_parts(self) -> (BorrowSock, BorrowFds) {
let Self { socket, fds, .. } = self;
(socket, fds)
}
/// Returns a reference to the underlying socket.
pub fn socket(&self) -> &Sock {
self.socket.borrow()
}
/// Returns a reference to the file descriptor queue.
pub fn fds(&self) -> &VecDeque<OwnedFd> {
self.fds.borrow()
}
/// Returns a mutable reference to the file descriptor queue.
pub fn fds_mut(&mut self) -> &mut VecDeque<OwnedFd> {
self.fds.borrow_mut()
}
}
impl<const MAX_FDS: usize, Sock, BorrowSock, BorrowFds>
ReadWithFileDescriptors<MAX_FDS, Sock, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
BorrowSock: BorrowMut<Sock>,
BorrowFds: BorrowMut<VecDeque<OwnedFd>>,
{
/// Returns a mutable reference to the underlying socket.
pub fn socket_mut(&mut self) -> &mut Sock {
self.socket.borrow_mut()
}
}
impl<const MAX_FDS: usize, Sock, BorrowSock, BorrowFds> Read
for ReadWithFileDescriptors<MAX_FDS, Sock, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
BorrowSock: Borrow<Sock>,
BorrowFds: BorrowMut<VecDeque<OwnedFd>>,
{
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
// Calculate space for additional file descriptors
let have_fds_before_read = self.fds().len();
let free_fd_slots = MAX_FDS.saturating_sub(have_fds_before_read);
// Allocate a buffer for file descriptors
let mut fd_buf = [0; MAX_FDS];
let fd_buf = &mut fd_buf[..free_fd_slots];
// Read from the unix socket
let (bytes_read, fds_read) = self.socket.borrow().recv_fds(buf, fd_buf)?;
let fd_buf = &fd_buf[..fds_read];
// Process the file descriptors
let mut fd_iter = fd_buf.iter();
// Try claiming all the file descriptors
let mut claim_fd_result = Ok(bytes_read);
self.fds_mut().reserve(fd_buf.len());
for fd in fd_iter.by_ref() {
match claim_fd_inplace(*fd) {
Ok(owned) => self.fds_mut().push_back(owned),
Err(e) => {
// Abort on error and pass to error handler
// Note that claim_fd_inplace is responsible for closing this particular
// file descriptor if claiming it fails
claim_fd_result = Err(e.into_stdio_err());
break;
}
}
}
// Return if we were able to claim all file descriptors
if claim_fd_result.is_ok() {
return claim_fd_result;
};
// An error occurred while claiming fds
self.fds_mut().truncate(have_fds_before_read); // Close fds successfully claimed
// Close the remaining fds
for fd in fd_iter {
unsafe { drop(OwnedFd::from_raw_fd(*fd)) };
}
claim_fd_result
}
}
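A sketch of how the wrapper could be driven; it assumes the `experiment_file_descriptor_passing` feature is enabled (see `mod.rs` above) and that the `uds` extension trait is implemented for std's `UnixStream`, since the `Read` impl only requires `FdPassingExt`:

use std::collections::VecDeque;
use std::io::Read;
use std::os::fd::OwnedFd;
use std::os::unix::net::UnixStream;

use rosenpass_util::mio::ReadWithFileDescriptors; // re-exported behind the experiment feature

fn read_payload_and_fds(stream: &UnixStream) -> std::io::Result<()> {
    let mut fds: VecDeque<OwnedFd> = VecDeque::new();
    // Accept at most four file descriptors per read() call.
    let mut reader = ReadWithFileDescriptors::<4, UnixStream, _, _>::new(stream, &mut fds);
    let mut buf = [0u8; 1024];
    let n = reader.read(&mut buf)?;
    println!(
        "received {} payload bytes and {} file descriptors",
        n,
        reader.fds().len()
    );
    Ok(())
}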

128
util/src/mio/uds_send_fd.rs Normal file
View File

@@ -0,0 +1,128 @@
use std::os::fd::{AsFd, AsRawFd};
use std::{
borrow::{Borrow, BorrowMut},
cmp::min,
collections::VecDeque,
io::Write,
marker::PhantomData,
};
use uds::UnixStreamExt as FdPassingExt;
use crate::{repeat, return_if};
/// A structure that facilitates writing data and file descriptors to a Unix domain socket
pub struct WriteWithFileDescriptors<Sock, Fd, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
Fd: AsFd,
BorrowSock: Borrow<Sock>,
BorrowFds: BorrowMut<VecDeque<Fd>>,
{
socket: BorrowSock,
fds: BorrowFds,
_sock_dummy: PhantomData<Sock>,
_fd_dummy: PhantomData<Fd>,
}
impl<Sock, Fd, BorrowSock, BorrowFds> WriteWithFileDescriptors<Sock, Fd, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
Fd: AsFd,
BorrowSock: Borrow<Sock>,
BorrowFds: BorrowMut<VecDeque<Fd>>,
{
/// Creates a new `WriteWithFileDescriptors` instance with the given socket and file descriptor queue
pub fn new(socket: BorrowSock, fds: BorrowFds) -> Self {
let _sock_dummy = PhantomData;
let _fd_dummy = PhantomData;
Self {
socket,
fds,
_sock_dummy,
_fd_dummy,
}
}
/// Consumes this instance and returns the underlying socket and file descriptor queue
pub fn into_parts(self) -> (BorrowSock, BorrowFds) {
let Self { socket, fds, .. } = self;
(socket, fds)
}
/// Returns a reference to the underlying socket
pub fn socket(&self) -> &Sock {
self.socket.borrow()
}
/// Returns a reference to the file descriptor queue
pub fn fds(&self) -> &VecDeque<Fd> {
self.fds.borrow()
}
/// Returns a mutable reference to the file descriptor queue
pub fn fds_mut(&mut self) -> &mut VecDeque<Fd> {
self.fds.borrow_mut()
}
}
impl<Sock, Fd, BorrowSock, BorrowFds> WriteWithFileDescriptors<Sock, Fd, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
Fd: AsFd,
BorrowSock: BorrowMut<Sock>,
BorrowFds: BorrowMut<VecDeque<Fd>>,
{
/// Returns a mutable reference to the underlying socket
pub fn socket_mut(&mut self) -> &mut Sock {
self.socket.borrow_mut()
}
}
impl<Sock, Fd, BorrowSock, BorrowFds> Write
for WriteWithFileDescriptors<Sock, Fd, BorrowSock, BorrowFds>
where
Sock: FdPassingExt,
Fd: AsFd,
BorrowSock: Borrow<Sock>,
BorrowFds: BorrowMut<VecDeque<Fd>>,
{
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
// At least one byte of real data should be sent when sending ancillary data. -- unix(7)
return_if!(buf.is_empty(), Ok(0));
// The kernel constant SCM_MAX_FD defines a limit on the number of file descriptors
// in the array. Attempting to send an array larger than this limit causes
// sendmsg(2) to fail with the error EINVAL. SCM_MAX_FD has the value 253 (or 255
// before Linux 2.6.38).
// -- unix(7)
const SCM_MAX_FD: usize = 253;
let buf = match self.fds().len() <= SCM_MAX_FD {
false => &buf[..1], // Force caller to immediately call write() again to send its data
true => buf,
};
// Allocate the buffer for the file descriptor array
let fd_no = min(SCM_MAX_FD, self.fds().len());
let mut fd_buf = [0; SCM_MAX_FD]; // My kingdom for alloca(3)
let fd_buf = &mut fd_buf[..fd_no];
// Fill the file descriptor array
for (raw, fancy) in fd_buf.iter_mut().zip(self.fds().iter()) {
*raw = fancy.as_fd().as_raw_fd();
}
// Send data and file descriptors
let bytes_written = self.socket().send_fds(buf, fd_buf)?;
// Drop the file descriptors from the Deque
repeat!(fd_no, {
self.fds_mut().pop_front();
});
Ok(bytes_written)
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
}
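A matching write-side sketch under the same feature-flag and `uds`-trait assumptions; note that, per the `SCM_MAX_FD` handling above, a single `write()` may deliberately transfer only one byte when more than 253 descriptors are queued:

use std::collections::VecDeque;
use std::io::Write;
use std::os::fd::OwnedFd;
use std::os::unix::net::UnixStream;

use rosenpass_util::mio::WriteWithFileDescriptors; // re-exported behind the experiment feature

fn send_payload_with_fd(stream: &UnixStream, fd: OwnedFd) -> std::io::Result<()> {
    let mut fds: VecDeque<OwnedFd> = VecDeque::from([fd]);
    let mut writer = WriteWithFileDescriptors::<UnixStream, OwnedFd, _, _>::new(stream, &mut fds);
    // The descriptor travels as ancillary data alongside the payload bytes.
    writer.write_all(b"hello")?;
    assert!(writer.fds().is_empty(), "the queued fd was sent with the first write");
    Ok(())
}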

20
util/src/option.rs Normal file
View File

@@ -0,0 +1,20 @@
/// A helper trait for turning a value of any type into `Some(value)`.
///
/// # Examples
///
/// ```
/// use rosenpass_util::option::SomeExt;
///
/// let x = 42;
/// let y = x.some();
///
/// assert_eq!(y, Some(42));
/// ```
pub trait SomeExt: Sized {
/// Wraps the calling value in `Some()`.
fn some(self) -> Option<Self> {
Some(self)
}
}
impl<T> SomeExt for T {}

View File

@@ -1,8 +0,0 @@
// TODO remove this once std::cmp::max becomes const
pub const fn max_usize(a: usize, b: usize) -> usize {
if a > b {
a
} else {
b
}
}

View File

@@ -8,6 +8,18 @@ macro_rules! attempt {
};
}
/// Trait providing the `ok` operation, which wraps any value in `Result::Ok`
pub trait OkExt<E>: Sized {
/// Wraps a value in a Result::Ok variant
fn ok(self) -> Result<Self, E>;
}
impl<T, E> OkExt<E> for T {
fn ok(self) -> Result<Self, E> {
Ok(self)
}
}
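A small sketch of where `.ok()` from `OkExt` helps: lifting an already-available value into a `Result` so that both arms of a fallible function line up (the function itself is made up for illustration):

use rosenpass_util::result::OkExt;

fn effective_port(override_port: Option<u16>, configured: &str) -> anyhow::Result<u16> {
    match override_port {
        Some(p) => p.ok(), // Ok(p); the error type is inferred from the return type
        None => configured.parse().map_err(anyhow::Error::from),
    }
}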
/// Trait for container types that guarantee successful unwrapping.
///
/// The `.guaranteed()` function can be used over unwrap to show that
@@ -15,6 +27,7 @@ macro_rules! attempt {
///
/// Implementations must not panic.
pub trait GuaranteedValue {
/// The value type that will be returned by guaranteed()
type Value;
/// Extract the contained value while being panic-safe, like .unwrap()
@@ -25,6 +38,28 @@ pub trait GuaranteedValue {
fn guaranteed(self) -> Self::Value;
}
/// Extension trait for adding finally operation to types
pub trait FinallyExt {
/// Executes a closure with mutable access to self and returns self
///
/// The closure is guaranteed to be executed before returning.
fn finally<F: FnOnce(&mut Self)>(self, f: F) -> Self;
}
impl<T, E> FinallyExt for Result<T, E> {
fn finally<F: FnOnce(&mut Self)>(mut self, f: F) -> Self {
f(&mut self);
self
}
}
impl<T> FinallyExt for Option<T> {
fn finally<F: FnOnce(&mut Self)>(mut self, f: F) -> Self {
f(&mut self);
self
}
}
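A sketch of `finally` in action: the closure runs on both the success and the error path before the value is handed back (the helper function is hypothetical):

use rosenpass_util::result::FinallyExt;

fn read_config(path: &str) -> std::io::Result<String> {
    std::fs::read_to_string(path).finally(|res| {
        if let Err(e) = res {
            eprintln!("reading {path} failed: {e}");
        }
    })
}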
/// A result type that never contains an error.
///
/// This is mostly useful in generic contexts.
@@ -97,6 +132,18 @@ impl<T> GuaranteedValue for Guaranteed<T> {
}
}
/// Checks that a condition is true and returns an error if not.
///
/// # Examples
///
/// ```rust
/// # use rosenpass_util::result::ensure_or;
/// let result = ensure_or(5 > 3, "not greater");
/// assert!(result.is_ok());
///
/// let result = ensure_or(5 < 3, "not less");
/// assert!(result.is_err());
/// ```
pub fn ensure_or<E>(b: bool, err: E) -> Result<(), E> {
match b {
true => Ok(()),
@@ -104,6 +151,18 @@ pub fn ensure_or<E>(b: bool, err: E) -> Result<(), E> {
}
}
/// Evaluates to an error if the condition is true.
///
/// # Examples
///
/// ```rust
/// # use rosenpass_util::result::bail_if;
/// let result = bail_if(false, "not bailed");
/// assert!(result.is_ok());
///
/// let result = bail_if(true, "bailed");
/// assert!(result.is_err());
/// ```
pub fn bail_if<E>(b: bool, err: E) -> Result<(), E> {
ensure_or(!b, err)
}

View File

@@ -1,20 +1,63 @@
use std::time::{Duration, Instant};
use std::time::Instant;
/// A timebase.
///
/// This is a simple wrapper around `std::time::Instant` that provides a
/// convenient way to get the seconds elapsed since the creation of the
/// `Timebase` instance.
///
/// # Examples
///
/// ```
/// use rosenpass_util::time::Timebase;
///
/// let timebase = Timebase::default();
/// let now = timebase.now();
/// assert!(now > 0.0);
/// ```
#[derive(Clone, Debug)]
pub struct Timebase(Instant);
impl Default for Timebase {
// TODO: Implement new()?
fn default() -> Self {
Self(Instant::now())
}
}
impl Timebase {
/// Returns the seconds elapsed since the creation of the `Timebase`
pub fn now(&self) -> f64 {
self.0.elapsed().as_secs_f64()
}
}
pub fn dur(&self, t: f64) -> Duration {
Duration::from_secs_f64(t)
#[cfg(test)]
mod tests {
use super::*;
use std::thread::sleep;
use std::time::Duration;
#[test]
fn test_timebase() {
let timebase = Timebase::default();
let now = timebase.now();
assert!(now > 0.0);
}
#[test]
fn test_timebase_clone() {
let timebase = Timebase::default();
let timebase_clone = timebase.clone();
assert_eq!(timebase.0, timebase_clone.0);
}
#[test]
fn test_timebase_sleep() {
let timebase = Timebase::default();
sleep(Duration::from_secs(1));
let now = timebase.now();
assert!(now > 1.0);
}
}

View File

@@ -16,6 +16,7 @@ macro_rules! typenum2const {
/// Trait implemented by type-level integers to facilitate conversion to constant integer values
pub trait IntoConst<T> {
/// The constant value after conversion
const VALUE: T;
}

View File

@@ -7,56 +7,68 @@ use zeroize::Zeroize;
use crate::zeroize::ZeroizedExt;
#[derive(Clone, Copy, Debug)]
/// A convenience type for creating `zerocopy` references of an
/// expected target type from a byte buffer.
pub struct RefMaker<B: Sized, T> {
buf: B,
_phantom_t: PhantomData<T>,
}
impl<B, T> RefMaker<B, T> {
/// Creates a new RefMaker with the given buffer
pub fn new(buf: B) -> Self {
let _phantom_t = PhantomData;
Self { buf, _phantom_t }
}
/// Returns the size in bytes needed for target type T
pub const fn target_size() -> usize {
std::mem::size_of::<T>()
}
/// Consumes this RefMaker and returns the inner buffer
pub fn into_buf(self) -> B {
self.buf
}
/// Returns a reference to the inner buffer
pub fn buf(&self) -> &B {
&self.buf
}
/// Returns a mutable reference to the inner buffer
pub fn buf_mut(&mut self) -> &mut B {
&mut self.buf
}
}
impl<B: ByteSlice, T> RefMaker<B, T> {
/// Parses the buffer into a reference of type T
pub fn parse(self) -> anyhow::Result<Ref<B, T>> {
self.ensure_fit()?;
Ref::<B, T>::new(self.buf).context("Parser error!")
}
/// Splits the buffer into a RefMaker containing the first `target_size` bytes and the remaining tail
pub fn from_prefix_with_tail(self) -> anyhow::Result<(Self, B)> {
self.ensure_fit()?;
let (head, tail) = self.buf.split_at(Self::target_size());
Ok((Self::new(head), tail))
}
/// Splits the buffer into two RefMakers, with the first containing the first `target_size` bytes
pub fn split_prefix(self) -> anyhow::Result<(Self, Self)> {
self.ensure_fit()?;
let (head, tail) = self.buf.split_at(Self::target_size());
Ok((Self::new(head), Self::new(tail)))
}
/// Returns a RefMaker containing only the first `target_size` bytes
pub fn from_prefix(self) -> anyhow::Result<Self> {
Ok(Self::from_prefix_with_tail(self)?.0)
}
/// Splits the buffer into a RefMaker containing the last `target_size` bytes and the preceding head
pub fn from_suffix_with_head(self) -> anyhow::Result<(Self, B)> {
self.ensure_fit()?;
let point = self.bytes().len() - Self::target_size();
@@ -64,6 +76,7 @@ impl<B: ByteSlice, T> RefMaker<B, T> {
Ok((Self::new(tail), head))
}
/// Splits the buffer into two RefMakers, with the second containing the last `target_size` bytes
pub fn split_suffix(self) -> anyhow::Result<(Self, Self)> {
self.ensure_fit()?;
let point = self.bytes().len() - Self::target_size();
@@ -71,14 +84,17 @@ impl<B: ByteSlice, T> RefMaker<B, T> {
Ok((Self::new(head), Self::new(tail)))
}
/// Returns a RefMaker containing only the last `target_size` bytes
pub fn from_suffix(self) -> anyhow::Result<Self> {
Ok(Self::from_suffix_with_head(self)?.0)
}
/// Returns a reference to the underlying bytes
pub fn bytes(&self) -> &[u8] {
self.buf().deref()
}
/// Ensures the buffer is large enough to hold type T
pub fn ensure_fit(&self) -> anyhow::Result<()> {
let have = self.bytes().len();
let need = Self::target_size();
@@ -91,10 +107,12 @@ impl<B: ByteSlice, T> RefMaker<B, T> {
}
impl<B: ByteSliceMut, T> RefMaker<B, T> {
/// Creates a zeroed reference of type T from the buffer
pub fn make_zeroized(self) -> anyhow::Result<Ref<B, T>> {
self.zeroized().parse()
}
/// Returns a mutable reference to the underlying bytes
pub fn bytes_mut(&mut self) -> &mut [u8] {
self.buf_mut().deref_mut()
}
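To make the intended flow concrete, a hedged sketch that splits a fixed-size prefix off a byte buffer and parses it; `[u8; 8]` is used as the target type because it is alignment-free and already implements zerocopy's `FromBytes`, and the `rosenpass_util::zerocopy` import path is an assumption:

use rosenpass_util::zerocopy::RefMaker; // assumed module path

fn split_nonce(buf: &[u8]) -> anyhow::Result<()> {
    let (head, payload) = RefMaker::<&[u8], [u8; 8]>::new(buf).from_prefix_with_tail()?;
    let nonce = head.parse()?; // zerocopy::Ref<&[u8], [u8; 8]>
    println!("nonce = {:?}, {} payload bytes follow", *nonce, payload.len());
    Ok(())
}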

View File

@@ -1,10 +1,14 @@
use zerocopy::{ByteSlice, ByteSliceMut, Ref};
/// A trait for converting a `Ref<B, T>` into a `Ref<&[u8], T>`.
pub trait ZerocopyEmancipateExt<B, T> {
/// Converts this reference into a reference backed by a byte slice.
fn emancipate(&self) -> Ref<&[u8], T>;
}
/// A trait for converting a `Ref<B, T>` into a mutable `Ref<&mut [u8], T>`.
pub trait ZerocopyEmancipateMutExt<B, T> {
/// Converts this reference into a mutable reference backed by a byte slice.
fn emancipate_mut(&mut self) -> Ref<&mut [u8], T>;
}

Some files were not shown because too many files have changed in this diff