Compare commits

...

159 Commits

Author SHA1 Message Date
Prabhpreet Dua
c0c57e5ffc Merge branch 'main' into feat/improved-memfd-allocation 2024-06-12 17:36:28 +05:30
Prabhpreet Dua
32d30a6f63 feat: Improved memfd-secret allocation 2024-06-12 17:32:11 +05:30
Prabhpreet Dua
f535a31cd7 Feature flag for memfd_secret alloc (#343)
* feature flag for memfd_secret alloc

* Cargo fmt
2024-06-11 14:53:30 +05:30
Karolin Varner
ac2aaa5fbd Merge pull request #336 from rosenpass/dev/karo/rollback-proofs
chore: Rollback symbolic models to original state
2024-06-11 09:57:36 +02:00
dependabot[bot]
e472fa1fcd build(deps): bump clap from 4.5.6 to 4.5.7 (#340)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.6 to 4.5.7.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.5.6...v4.5.7)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-11 08:26:40 +05:30
Prabhpreet Dua
526c930119 Secret memory with memfd_secret (#321)
Implements:
- An additional allocator to use memfd_secret(2) and guard pages using mmap(2), implemented in quininer/memsec#16
- An allocator that abstracts away the underlying allocators and uses the allocator selected by the rosenpass_secret_memory::policy functions (or by any function that sets rosenpass_secret_memory::alloc::ALLOC_INIT)
- Updates to tests (integration, fuzz, bench): some tests use procspawn to spawn multiple processes with different allocator policies
2024-06-10 13:12:44 +05:30
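The allocator work described above boils down to a fairly small syscall sequence. As a minimal, hedged sketch, assuming Linux ≥ 5.14 with secretmem enabled and the libc crate available: allocate a region backed by memfd_secret(2) and map it with mmap(2). This only illustrates the technique; it is not the memsec or rosenpass_secret_memory implementation, and it omits the guard pages and the policy/ALLOC_INIT selection layer mentioned above.

```rust
use std::ptr;

/// Map `len` bytes of secret memory backed by memfd_secret(2).
/// Returns None if the kernel is too old or secretmem is disabled.
unsafe fn alloc_secret(len: usize) -> Option<*mut u8> {
    // memfd_secret has no libc wrapper on older glibc, so go through syscall(2).
    let fd = libc::syscall(libc::SYS_memfd_secret, 0) as libc::c_int;
    if fd < 0 {
        return None;
    }
    // Size the anonymous secret file before mapping it.
    if libc::ftruncate(fd, len as libc::off_t) != 0 {
        libc::close(fd);
        return None;
    }
    let p = libc::mmap(
        ptr::null_mut(),
        len,
        libc::PROT_READ | libc::PROT_WRITE,
        libc::MAP_SHARED,
        fd,
        0,
    );
    // The mapping stays valid after the file descriptor is closed; the pages are
    // removed from the kernel's direct map and excluded from swap.
    libc::close(fd);
    if p == libc::MAP_FAILED {
        None
    } else {
        Some(p as *mut u8)
    }
}
```

The fuzz target names added in the CI workflow further down (e.g. fuzz_box_secret_alloc_memfdsec_mallocfb) suggest the real allocator also keeps a malloc-based fallback for systems where memfd_secret is unavailable.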
Karolin Varner
5f8b00d045 chore: Rollback symbolic models to original state
The later edits were unfortunately incomplete. They lacked
modeling of multi-session, multi-user settings and they generally
rendered the models less trustworthy from my perspective.

These edits are still interesting as a starting point for analyzing
identity hiding and stealth, but they are not high-quality enough to be
present in main.
2024-06-07 20:05:23 +02:00
dependabot[bot]
b46fca99cb build(deps): bump clap from 4.5.4 to 4.5.6 (#335)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.4 to 4.5.6.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.5.4...v4.5.6)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-07 10:46:02 +05:30
Prabhpreet Dua
70c5ec2c29 chore: Remove libsodium references in nix flake, ci (#334) 2024-06-06 17:10:51 +05:30
Prabhpreet Dua
0e059af5da fix(rosenpass): Fix duplicate key issue (#329)
Change handle_init_conf to return an indication that a key exchange should be triggered when a new biscuit_no is encountered for a peer
2024-06-04 22:47:54 +05:30
Paul Spooren
99754f326e Warn only if neither peer nor outfile is defined
Right now, a warning message is logged if no WireGuard peer is defined.
This is misleading in cases where the outfile is used instead.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-06-03 17:58:50 +02:00
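A hedged sketch of the adjusted check, assuming the log crate (a workspace dependency): warn only when neither a WireGuard peer nor an output file is configured. The type and field names below are hypothetical placeholders, not the actual rosenpass configuration structures.

```rust
use std::path::PathBuf;

// Hypothetical stand-in for the per-peer configuration.
struct PeerConfig {
    wg_peer: Option<String>,  // WireGuard peer to feed the exchanged key to
    key_out: Option<PathBuf>, // file to write the exchanged key to instead
}

fn warn_if_key_goes_nowhere(peer: &PeerConfig) {
    // Previously the warning fired whenever no WireGuard peer was set, even
    // when an outfile was configured; now both must be missing.
    if peer.wg_peer.is_none() && peer.key_out.is_none() {
        log::warn!("peer has neither a WireGuard peer nor an outfile; exchanged keys will not be delivered anywhere");
    }
}
```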
dependabot[bot]
fd397b9ea0 build(deps): bump tokio from 1.37.0 to 1.38.0 (#324)
Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.37.0 to 1.38.0.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.37.0...tokio-1.38.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-31 08:30:41 +05:30
dependabot[bot]
e92fa552e3 build(deps): bump zeroize from 1.7.0 to 1.8.1 (#322)
Bumps [zeroize](https://github.com/RustCrypto/utils) from 1.7.0 to 1.8.1.
- [Commits](https://github.com/RustCrypto/utils/compare/zeroize-v1.7.0...zeroize-v1.8.1)

---
updated-dependencies:
- dependency-name: zeroize
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Prabhpreet Dua <615318+prabhpreet@users.noreply.github.com>
2024-05-28 14:23:45 +05:30
dependabot[bot]
c438d5a99d build(deps): bump serde from 1.0.202 to 1.0.203 (#323)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.202 to 1.0.203.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.202...v1.0.203)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-28 11:07:24 +05:30
dependabot[bot]
d4eef998f5 --- (#318)
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-21 07:12:19 +05:30
Prabhpreet Dua
c1abfbfd14 feat(rosenpass): Add wireguard-broker interface in AppServer (#303)
Dynamically dispatch the WireguardBrokerMio trait in AppServer. Also allows for mio event registration and poll processing; the logic comes from the dev/broker-architecture branch

Co-authored-by: Prabhpreet Dua <615318+prabhpreet@users.noreply.github.com>
Co-authored-by: Karolin Varner <karo@cupdev.net>
2024-05-20 18:12:42 +05:30
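The pattern described above, holding the broker behind a trait object and folding its I/O into the server's mio poll loop, looks roughly like the sketch below, written against mio 0.8 (a workspace dependency). The trait name and its two methods are simplified placeholders and do not reproduce the actual WireguardBrokerMio interface.

```rust
use std::io;
use mio::{Events, Poll, Registry, Token};

/// Placeholder broker interface; the real WireguardBrokerMio trait differs.
trait MioBroker {
    /// Register the broker's event sources with the poll registry.
    fn register(&mut self, registry: &Registry, token: Token) -> io::Result<()>;
    /// Handle readiness on the broker's token.
    fn process_poll(&mut self) -> io::Result<()>;
}

const BROKER_TOKEN: Token = Token(0xB0);

/// AppServer-style owner: the concrete broker is chosen at runtime, so it is
/// stored as a boxed trait object and dispatched dynamically.
struct Server {
    poll: Poll,
    broker: Option<Box<dyn MioBroker>>,
}

impl Server {
    fn attach_broker(&mut self, mut broker: Box<dyn MioBroker>) -> io::Result<()> {
        broker.register(self.poll.registry(), BROKER_TOKEN)?;
        self.broker = Some(broker);
        Ok(())
    }

    fn poll_once(&mut self, events: &mut Events) -> io::Result<()> {
        self.poll.poll(events, None)?;
        for event in events.iter() {
            if event.token() == BROKER_TOKEN {
                if let Some(broker) = self.broker.as_mut() {
                    broker.process_poll()?;
                }
            }
        }
        Ok(())
    }
}
```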
dependabot[bot]
ae7577c641 build(deps): bump thiserror from 1.0.60 to 1.0.61 (#316)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.60 to 1.0.61.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.60...1.0.61)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-18 12:39:10 +02:00
dependabot[bot]
f07f598e44 build(deps): bump anyhow from 1.0.83 to 1.0.85 (#317)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.83 to 1.0.85.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.83...1.0.85)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-18 12:38:46 +02:00
Alice Michaela Bowman
988f66cf2b Merge pull request #314 from prabhpreet/fix/dep-workflow-permissions
chore: Add write permissions in dependent-issues workflow
2024-05-17 11:48:20 +02:00
Prabhpreet Dua
06969c406d chore: Add write permissions in dependent-issues workflow 2024-05-17 14:56:29 +05:30
dependabot[bot]
b5215aecba build(deps): bump serde from 1.0.201 to 1.0.202 (#312)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.201 to 1.0.202.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.201...v1.0.202)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-16 14:18:26 +05:30
Alice Michaela Bowman
3e32bbad7c Merge pull request #310 from rosenpass/ci/dependent-issues
Dependent issues Workflow
2024-05-10 16:42:34 +02:00
Prabhpreet Dua
650110a04f Run prettier (#311) 2024-05-10 19:55:29 +05:30
Prabhpreet Dua
ee669823de Create dependent-issues.yml 2024-05-10 19:47:10 +05:30
dependabot[bot]
40940ca1df build(deps): bump serde from 1.0.200 to 1.0.201 (#307)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.200 to 1.0.201.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.200...v1.0.201)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-09 07:41:30 +05:30
dependabot[bot]
b77eccffc0 build(deps): bump paste from 1.0.14 to 1.0.15 (#304)
Bumps [paste](https://github.com/dtolnay/paste) from 1.0.14 to 1.0.15.
- [Release notes](https://github.com/dtolnay/paste/releases)
- [Commits](https://github.com/dtolnay/paste/compare/1.0.14...1.0.15)

---
updated-dependencies:
- dependency-name: paste
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-08 11:02:31 +05:30
dependabot[bot]
e17d8cd559 build(deps): bump thiserror from 1.0.59 to 1.0.60 (#305)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.59 to 1.0.60.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.59...1.0.60)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Prabhpreet Dua <615318+prabhpreet@users.noreply.github.com>
2024-05-08 10:24:19 +05:30
dependabot[bot]
c72e8bcda1 build(deps): bump zerocopy from 0.7.33 to 0.7.34 (#306)
Bumps [zerocopy](https://github.com/google/zerocopy) from 0.7.33 to 0.7.34.
- [Release notes](https://github.com/google/zerocopy/releases)
- [Changelog](https://github.com/google/zerocopy/blob/main/CHANGELOG.md)
- [Commits](https://github.com/google/zerocopy/commits)

---
updated-dependencies:
- dependency-name: zerocopy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-08 10:06:29 +05:30
Prabhpreet Dua
2bac991305 feat(wireguard-broker): merge from dev/broker-architecture, fixes, test
* wireguard-broker: merge from dev/broker-architecture
* use zerocopy instead of lenses
* Require use_broker feature flag to compile broker binaries
* Remove PhantomData from BrokerServer & BrokerClient
* Modify mio client rx to be non-recursive, add integration test

Co-authored-by: Karolin Varner <karo@cupdev.net>
Co-authored-by: Prabhpreet Dua <615318+prabhpreet@users.noreply.github.com>
2024-05-07 12:23:35 +05:30
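One bullet above is the switch from lens-style accessors to zerocopy for wire-format structs. As a rough illustration of that technique (using an invented DemoHeader layout, not the actual rosenpass message definitions), a #[repr(C)] struct can derive zerocopy's traits and then be viewed directly inside a receive buffer, without copying:

```rust
use zerocopy::{AsBytes, FromBytes, FromZeroes, Ref};

// Illustrative wire header; not the actual rosenpass envelope layout.
#[derive(FromZeroes, FromBytes, AsBytes)]
#[repr(C)]
struct DemoHeader {
    msg_type: u8,
    reserved: [u8; 3],
    session_id: [u8; 4],
}

/// Split `buf` into a typed view of the header plus the remaining payload,
/// without copying any bytes.
fn parse(buf: &[u8]) -> Option<(Ref<&[u8], DemoHeader>, &[u8])> {
    Ref::new_from_prefix(buf)
}

fn demo() {
    let buf = [1u8, 0, 0, 0, 0xde, 0xad, 0xbe, 0xef, 42, 43];
    let (header, payload) = parse(&buf).expect("buffer long enough");
    // `Ref` derefs to the typed struct; `as_bytes()` gives the raw view back.
    assert_eq!(header.msg_type, 1);
    assert_eq!(header.as_bytes().len(), 8);
    assert_eq!(payload, [42u8, 43].as_slice());
}
```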
dependabot[bot]
e6d114c557 build(deps): bump zerocopy from 0.7.32 to 0.7.33 (#301)
Bumps [zerocopy](https://github.com/google/zerocopy) from 0.7.32 to 0.7.33.
- [Release notes](https://github.com/google/zerocopy/releases)
- [Changelog](https://github.com/google/zerocopy/blob/main/CHANGELOG.md)
- [Commits](https://github.com/google/zerocopy/compare/v0.7.32...v0.7.33)

---
updated-dependencies:
- dependency-name: zerocopy
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-07 08:35:36 +05:30
dependabot[bot]
29efbba97a build(deps): bump anyhow from 1.0.82 to 1.0.83 (#302)
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.82 to 1.0.83.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.82...1.0.83)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-07 08:34:59 +05:30
Karolin Varner
3fb1220262 Merge pull request #300 from prabhpreet/codecov-yml
chore: Add codecov configuration file
2024-05-06 17:59:27 +02:00
Prabhpreet Dua
1ccf92c538 Merge branch 'main' into codecov-yml 2024-05-06 21:14:29 +05:30
Prabhpreet Dua
4bb3153761 feat(deps): Change base64 to base64ct crate (#295) 2024-05-06 21:14:10 +05:30
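base64ct provides the same encode/decode operations as the base64 crate, but implemented without data-dependent table lookups or branches, which is the property you want when the encoded data is key material. A minimal usage sketch of its buffer-based API (which works with the default-features = false setting that the workspace Cargo.toml diff further down uses):

```rust
use base64ct::{Base64, Encoding};

fn demo() {
    let secret = b"example key bytes";

    // Encode into a caller-provided buffer (no allocation needed).
    let mut enc_buf = [0u8; 128];
    let encoded: &str = Base64::encode(secret, &mut enc_buf).expect("buffer large enough");

    // Decode back into another caller-provided buffer.
    let mut dec_buf = [0u8; 128];
    let decoded: &[u8] = Base64::decode(encoded, &mut dec_buf).expect("valid Base64");

    assert_eq!(decoded, secret.as_slice());
}
```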
Prabhpreet Dua
a8ed0e8c66 chore: Update codecov configuration file 2024-05-06 20:59:08 +05:30
Prabhpreet Dua
ad6405f865 chore: Add codecov configuration file 2024-05-06 20:53:55 +05:30
Prabhpreet Dua
761d5730af ci: Changes from #160: Invoke the mandoc linter (#296)
* ci: Changes from #160: Invoke the mandoc linter

* Add check.sh from #160 too

* Fix mandoc
2024-05-04 22:47:11 +02:00
Prabhpreet Dua
b45b7bc7f5 Update liboqs 0.9.1 (#292)
* deps,fuzz: update to liboqs 0.9.1

The release updates Classic McEliece to the NIST PQC Round 4 version.
It also updates the fuzz tests that this breaks.

Signed-off-by:
Paul Spooren <mail@aparcar.org>
Prabhpreet Dua <615318+prabhpreet@users.noreply.github.com>

* Update secret key length for McEliece KEM update

* Update to specifying key lengths of Kyber and McEliece through constants

---------

Co-authored-by: Paul Spooren <mail@aparcar.org>
2024-05-02 18:47:26 +05:30
dependabot[bot]
77a985dc02 build(deps): bump serde from 1.0.199 to 1.0.200 (#299)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.199 to 1.0.200.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.199...v1.0.200)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-02 13:10:47 +05:30
Prabhpreet Dua
21e693a9da ci: Add codecov (llvm-cov) coverage (#297)
* ci: Add codecov (llvm-cov) coverage

* Run prettier on qc.yaml
2024-05-01 18:31:46 +05:30
Emil Engler
be91b3049c rp: Load WireGuard SK into secret memory (#293)
Fixes #287
2024-04-30 18:10:04 +02:00
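The underlying idea of the fix above is that key material read from disk should live in memory that gets wiped when it is dropped (and, with the memfd_secret allocators introduced later in this range, kept out of swap and core dumps). A generic, hedged sketch using the zeroize crate, which is already a workspace dependency; this is not the actual rp implementation:

```rust
use std::{fs, io, path::Path};
use zeroize::Zeroizing;

/// Read a WireGuard secret key file into a buffer that is zeroed on drop.
fn load_wg_secret(path: &Path) -> io::Result<Zeroizing<Vec<u8>>> {
    // `Zeroizing` overwrites the Vec's contents when it goes out of scope,
    // so the key does not linger in freed heap memory.
    Ok(Zeroizing::new(fs::read(path)?))
}
```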
dependabot[bot]
4dc24f745c build(deps): bump serial_test from 3.1.0 to 3.1.1 (#290)
Bumps [serial_test](https://github.com/palfrey/serial_test) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v3.1.0...v3.1.1)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 14:51:24 +05:30
dependabot[bot]
61a1cc3825 build(deps): bump serde from 1.0.198 to 1.0.199 (#291)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.198 to 1.0.199.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.198...v1.0.199)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-30 13:22:37 +05:30
Prabhpreet Dua
2e01d1df46 Revert "build(deps): bump zeroize from 1.7.0 to 1.8.0 (#285)" (#289)
Reverts #285 since 1.8.0 has been yanked

Refer to RustCrypto/utils#1067
2024-04-29 14:06:34 +05:30
Prabhpreet Dua
2c6411a2b1 Merge pull request #284 from rosenpass/dependabot/cargo/serial_test-3.1.0
build(deps): bump serial_test from 3.0.0 to 3.1.0
2024-04-29 12:47:04 +05:30
Karolin Varner
96b12ac261 Clippy Fixes 2024-04-25 20:32:46 +02:00
Emil Engler
3e734e0d57 rosenpass: Replace Into<> with From<> trait 2024-04-25 11:16:52 +02:00
Emil Engler
c9e296794b rosenpass: Remove useless conversion 2024-04-25 11:15:51 +02:00
Emil Engler
bc6bff499d rosenpass: Remove redundant Ok() 2024-04-25 11:14:59 +02:00
Emil Engler
de905056fc rp: Remove needless borrow 2024-04-25 11:13:32 +02:00
Emil Engler
4e8344660e rosenpass: Remove needless borrow 2024-04-25 11:11:56 +02:00
Emil Engler
a581f7dfa7 rosenpass: Replace if let with is_ok() call 2024-04-25 11:10:21 +02:00
Emil Engler
bd6a6e5dce ciphers: Remove needless borrow for nonce array 2024-04-25 11:08:54 +02:00
Emil Engler
e0496c12c6 rosenpass: Use copy instead of clone trait 2024-04-25 11:05:16 +02:00
Emil Engler
f4116f2c20 ciphers: Remove redundant mutability 2024-04-25 11:03:48 +02:00
Emil Engler
8099bc4bdd constant-time: Remove redundant cast 2024-04-25 11:01:41 +02:00
Emil Engler
39d174c605 util: Suppress clippy warnings for neutral element 2024-04-25 11:01:09 +02:00
dependabot[bot]
0257aa9e15 build(deps): bump zeroize from 1.7.0 to 1.8.0 (#285)
Bumps [zeroize](https://github.com/RustCrypto/utils) from 1.7.0 to 1.8.0.
- [Commits](https://github.com/RustCrypto/utils/compare/zeroize-v1.7.0...zeroize-v1.8.0)

---
updated-dependencies:
- dependency-name: zeroize
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-25 10:54:55 +02:00
Emil Engler
3299b2bdb4 Merge branch 'main' into dependabot/cargo/serial_test-3.1.0 2024-04-23 11:15:57 +02:00
dependabot[bot]
f43b018511 build(deps): bump thiserror from 1.0.58 to 1.0.59 (#283)
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.58 to 1.0.59.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.58...1.0.59)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 11:15:28 +02:00
dependabot[bot]
0f884b79fa build(deps): bump serial_test from 3.0.0 to 3.1.0
Bumps [serial_test](https://github.com/palfrey/serial_test) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/palfrey/serial_test/releases)
- [Commits](https://github.com/palfrey/serial_test/compare/v3.0.0...v3.1.0)

---
updated-dependencies:
- dependency-name: serial_test
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-22 23:31:58 +00:00
dependabot[bot]
ab83d3fae6 build(deps): bump tempfile from 3.9.0 to 3.10.1 (#282)
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.9.0 to 3.10.1.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.9.0...v3.10.1)

---
updated-dependencies:
- dependency-name: tempfile
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-20 17:46:35 +02:00
Gergő Móricz
cc7e8dc510 feat(rp-rust): implement rp tool in Rust (#235) 2024-04-19 20:44:55 +02:00
Gergő Móricz
c2d0d34c57 Add .devcontainer configuration (#267)
chore: Devcontainer config
2024-04-19 14:50:46 +02:00
Alice Michaela Bowman
5d46c93b2b Merge pull request #142 from prabhpreet/feat/cookie-mechanism
Add cookie mechanism
2024-04-17 13:48:14 +02:00
Prabhpreet Dua
e6d7a7232f Cargo lock update 2024-04-16 17:54:03 +05:30
Prabhpreet Dua
6ba1be6eae Merge branch 'main' into feat/cookie-mechanism 2024-04-16 17:41:41 +05:30
dependabot[bot]
c194c74e55 build(deps): bump clap from 4.4.10 to 4.5.4
Bumps [clap](https://github.com/clap-rs/clap) from 4.4.10 to 4.5.4.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/clap_complete-v4.4.10...v4.5.4)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-16 12:19:01 +02:00
dependabot[bot]
96de84e68f build(deps): bump allocator-api2-tests from 0.2.14 to 0.2.15
Bumps [allocator-api2-tests](https://github.com/zakarumych/allocator-api2) from 0.2.14 to 0.2.15.
- [Changelog](https://github.com/zakarumych/allocator-api2/blob/main/CHANGELOG.md)
- [Commits](https://github.com/zakarumych/allocator-api2/compare/v0.2.14...v0.2.15)

---
updated-dependencies:
- dependency-name: allocator-api2-tests
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-16 12:18:50 +02:00
Prabhpreet Dua
6215bc1514 Update whitepaper 2024-04-16 15:24:08 +05:30
Prabhpreet Dua
b0a93d6884 Whitepaper and cleanup 2024-04-16 15:07:01 +05:30
Prabhpreet Dua
bba0c874f2 Localhost ::1 2024-04-16 13:21:55 +05:30
Prabhpreet Dua
a32efb61d1 Skip cookie validation with InitConf, use 0.0.0.0 for loopback 2024-04-16 12:57:01 +05:30
Prabhpreet Dua
10bdb5f371 Display error if send to socket fails 2024-04-16 12:41:40 +05:30
Prabhpreet Dua
b07859f6ec Change under load params, change localhost to IP 2024-04-16 12:16:30 +05:30
Prabhpreet Dua
65df24a98b Consolidate tests with debugging 2024-04-16 11:41:43 +05:30
Prabhpreet Dua
9396784c0f Cargo fmt 2024-04-16 11:18:47 +05:30
Prabhpreet Dua
8420d953eb Add HostIdentification trait, add logging 2024-04-16 11:17:03 +05:30
Prabhpreet Dua
e7de4848fb Try threads instead of process 2024-04-16 09:22:08 +05:30
Prabhpreet Dua
92824bb5b0 Integration test- add delay between server and client 2024-04-16 06:55:24 +05:30
Prabhpreet Dua
8d20e77173 Serialize integration tests 2024-04-16 06:45:05 +05:30
Prabhpreet Dua
15aafe7563 Cargo fmt fix 2024-04-15 22:13:14 +05:30
Prabhpreet Dua
b56af8b696 Simplify integration test 2024-04-15 22:10:40 +05:30
Prabhpreet Dua
a3e91a95df Fix post merge integration test issue 2024-04-15 14:25:09 +05:30
Prabhpreet Dua
4ea51ab123 Merge branch 'main' into feat/cookie-mechanism 2024-04-14 18:53:51 +05:30
dependabot[bot]
4b849a4fe4 build(deps): bump anyhow from 1.0.81 to 1.0.82
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.81 to 1.0.82.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.81...1.0.82)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-11 11:47:09 +02:00
dependabot[bot]
16e67269ba build(deps): bump thiserror from 1.0.50 to 1.0.58
Bumps [thiserror](https://github.com/dtolnay/thiserror) from 1.0.50 to 1.0.58.
- [Release notes](https://github.com/dtolnay/thiserror/releases)
- [Commits](https://github.com/dtolnay/thiserror/compare/1.0.50...1.0.58)

---
updated-dependencies:
- dependency-name: thiserror
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-11 11:46:59 +02:00
wucke13
76d5093a20 chore: apply .ci/gen-workflow-files.nu script
There is still hand-written stuff in the workflow file that we need to
get rid of, but now at least all autogenerated dependency fields are
sorted.
2024-04-06 17:45:34 +02:00
wucke13
0e8945db78 fix: .ci/gen-workflow-files.nu script
- Fix std log import (remove the asterisk)
- Add sort to dependencies field to make script output deterministic
- Remove whitespace error at EOF
- Add nushell to the default devShell, so that the script can be run
  from the devShell
2024-04-06 17:45:34 +02:00
wucke13
ffd81b6a72 chore: update flake.lock
Six months have passed; time to get up to date again.
2024-04-06 17:45:34 +02:00
wucke13
d1d218ac0f chore: add dedicated nixpkgs input to flake
This ensures that `nix flake update` doesn't create surprises on
different systems.
2024-04-06 17:45:34 +02:00
dependabot[bot]
0edfb625e8 build(deps): bump log from 0.4.20 to 0.4.21
Bumps [log](https://github.com/rust-lang/log) from 0.4.20 to 0.4.21.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/compare/0.4.20...0.4.21)

---
updated-dependencies:
- dependency-name: log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-06 15:14:03 +02:00
dependabot[bot]
16c0080cdc build(deps): bump memoffset from 0.9.0 to 0.9.1
Bumps [memoffset](https://github.com/Gilnaa/memoffset) from 0.9.0 to 0.9.1.
- [Changelog](https://github.com/Gilnaa/memoffset/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Gilnaa/memoffset/compare/v0.9.0...v0.9.1)

---
updated-dependencies:
- dependency-name: memoffset
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-06 15:13:35 +02:00
dependabot[bot]
b05c4bbe24 build(deps): bump serde from 1.0.193 to 1.0.197
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.193 to 1.0.197.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.193...v1.0.197)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-06 15:13:21 +02:00
dependabot[bot]
639c65ef93 build(deps): bump env_logger from 0.10.1 to 0.10.2
Bumps [env_logger](https://github.com/rust-cli/env_logger) from 0.10.1 to 0.10.2.
- [Release notes](https://github.com/rust-cli/env_logger/releases)
- [Changelog](https://github.com/rust-cli/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-cli/env_logger/compare/v0.10.1...v0.10.2)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 00:39:00 +01:00
dependabot[bot]
332c549305 build(deps): bump anyhow from 1.0.75 to 1.0.81
Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.75 to 1.0.81.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.75...1.0.81)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 00:38:44 +01:00
dependabot[bot]
ef973e9d7f build(deps): bump base64 from 0.21.5 to 0.21.7
Bumps [base64](https://github.com/marshallpierce/rust-base64) from 0.21.5 to 0.21.7.
- [Changelog](https://github.com/marshallpierce/rust-base64/blob/master/RELEASE-NOTES.md)
- [Commits](https://github.com/marshallpierce/rust-base64/compare/v0.21.5...v0.21.7)

---
updated-dependencies:
- dependency-name: base64
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-21 00:38:34 +01:00
Paul Spooren
199ecb814b dependabot: add configuration
This checks daily for outdated cargo crates.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-03-20 14:24:34 +01:00
Paul Spooren
40d955a156 proper permission for secrets aka 0o600
When creating secret keys or using the out-file feature, the material
shouldn't be readable to everyone by default.

Fix: #260

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-03-20 14:24:23 +01:00
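A minimal sketch of the idea using only the standard library: create the output file with 0o600 (rw-------) at creation time instead of relying on the process umask. Illustrative only, not the exact rosenpass code.

```rust
use std::fs::OpenOptions;
use std::io::{self, Write};
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

fn write_secret(path: &Path, material: &[u8]) -> io::Result<()> {
    let mut file = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        // Owner read/write only; applied when the file is created.
        .mode(0o600)
        .open(path)?;
    file.write_all(material)
}
```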
Karolin Varner
cd23e9a2d0 fix: Failing tests 2024-03-12 22:34:31 -04:00
Karolin Varner
4d482aaab7 chore: Cargo fmt & fix 2024-03-12 22:11:17 -04:00
Karolin Varner
3175b7b783 Merge branch 'main' into feat/cookie-mechanism 2024-03-12 22:08:04 -04:00
Paul Spooren
baa35af558 bench: exclude rosenpass-fuzzing
This stops fuzzing from running, which takes forever and breaks the CI.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-03-12 19:28:27 +01:00
Paul Spooren
b2de384fcf constant-time: add secure memcmp_le function
The compare function should do a little-endian comparison, so copy the code
from quininer/memsec and don't reverse the loop; tada, le.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-03-11 13:08:41 +01:00
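The commit message above describes the trick: memsec's constant-time memcmp walks the buffer from the last index down to the first, so keeping the forward iteration order instead makes the byte at the highest index the most significant one, i.e. a little-endian comparison. A hedged sketch of that accumulator technique (adapted from the memsec-style code, not a verbatim copy of the rosenpass-constant-time function):

```rust
/// Compare two equal-length byte slices as little-endian integers, visiting
/// every byte so the work done does not depend on where they differ.
/// Returns -1, 0 or 1.
pub fn memcmp_le(a: &[u8], b: &[u8]) -> i32 {
    assert_eq!(a.len(), b.len());
    let mut res: i32 = 0;
    for i in 0..a.len() {
        let diff = i32::from(a[i]) - i32::from(b[i]);
        // If this byte pair is equal, keep the previous result; otherwise
        // replace it with this byte's difference. Because iteration is forward,
        // the highest differing index (the most significant little-endian byte)
        // ends up deciding the outcome.
        res = (res & (((diff - 1) & !diff) >> 8)) | diff;
    }
    // Collapse the accumulated difference into its sign: -1, 0 or 1.
    ((res - 1) >> 8) + (res >> 8) + 1
}
```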
Paul Spooren
c69fd889fb ci: enable cargo bench again
It only takes a few seconds to run, so enable it.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-03-11 13:08:41 +01:00
Dimitris Apostolou
13a853ff42 fix: Fix crate vulnerabilities 2024-03-10 18:11:43 +01:00
Paul Spooren
13df700ef5 flake: drop overlay due to upstream fix
The upstream issue #216904 has been fixed, so the extra overlay can be removed.

Signed-off-by: Paul Spooren <mail@aparcar.org>
2024-03-08 20:22:41 +01:00
Prabhpreet Dua
19a0a22b62 Cargo fmt 2024-02-18 14:13:33 +05:30
Prabhpreet Dua
b51466eaec Add intg test to pipeline 2024-02-18 14:10:49 +05:30
Prabhpreet Dua
9552d5a46c Merge branch 'main' into feat/cookie-mechanism 2024-02-18 13:25:01 +05:30
Prabhpreet Dua
a1d61bb48e Evaluate both active and retired cookies- cookie rotation 2024-02-18 13:19:22 +05:30
Prabhpreet Dua
ec0b5f7fb1 Cargo fmt 2024-02-04 20:18:58 +05:30
Prabhpreet Dua
0b4699e24a Poll based under load with intg test 2024-02-04 20:17:28 +05:30
Prabhpreet Dua
d18107b3a9 Merge branch 'poll-based-under-load-in-progress' into feat/cookie-mechanism 2024-02-04 11:53:05 +05:30
Prabhpreet Dua
715893e1ac cargo fmt 2024-02-04 11:49:08 +05:30
Prabhpreet Dua
92b2f6bc7c Match to main 2024-02-04 11:48:49 +05:30
Prabhpreet Dua
3498ab2d7b Checkpoint 2024-02-04 11:39:34 +05:30
Prabhpreet Dua
efd0ce51cb On-stack allocated host identification 2024-01-21 13:53:05 +05:30
Prabhpreet Dua
7739020931 Cargo fmt 2024-01-15 19:35:08 +05:30
Prabhpreet Dua
ecfecbb8f9 Host identification 2024-01-15 18:57:16 +05:30
Prabhpreet Dua
e8a81102f4 Whitepaper updates per review comments 2024-01-07 16:59:55 +05:30
Prabhpreet Dua
591e5226fd Merge branch 'main' into feat/cookie-mechanism 2024-01-06 17:25:04 +05:30
Prabhpreet Dua
b336a0d264 Separate cookie message from envelope encapsulation, remove mac, cookie field 2023-12-12 07:24:08 +05:30
Prabhpreet Dua
0b7bec75de Use common CookieStore for biscuit and cookie secret, add padding to CookieReply, trigger immediate retransmission on receiving cookie reply 2023-12-10 18:17:37 +05:30
Prabhpreet Dua
87bbd1eef7 Reuse lifecycle (biscuit mechanism) for cookie expiration 2023-12-10 17:10:12 +05:30
Prabhpreet Dua
2646dc8398 Further updates to whitepaper 2023-12-08 00:13:55 +05:30
Prabhpreet Dua
4295ec9d80 Whitepaper changes, and reflect in code 2023-12-07 23:59:40 +05:30
Prabhpreet Dua
7cb643b181 app_server move under load handling to function, cargo fmt 2023-12-07 22:53:17 +05:30
Prabhpreet Dua
109d624227 SID specific cookie storage 2023-12-07 20:19:57 +05:30
Prabhpreet Dua
b96d195f54 Avoid memory allocations ctd 2023-12-06 23:02:57 +05:30
Prabhpreet Dua
775b464496 Remove debug message 2023-12-06 22:32:35 +05:30
Prabhpreet Dua
e2cd25c184 Use retransmitted message instead of storing last sent mac 2023-12-06 21:59:52 +05:30
Prabhpreet Dua
fdcb488d4b Move IP+Port into AppServer from protocol.rs 2023-12-06 21:28:21 +05:30
Prabhpreet Dua
a8a596ca7e Remove debug messages 2023-12-06 05:40:34 +05:30
Prabhpreet Dua
9ced9996d2 Remove serial_test deps 2023-12-05 06:40:35 +05:30
Prabhpreet Dua
df683f96b2 Remove ignore from second test, init libsodium in that test too 2023-12-05 06:30:59 +05:30
Prabhpreet Dua
27a8bdbe7b Init libsodium in failing test 2023-12-05 06:26:41 +05:30
Prabhpreet Dua
bdabae9c33 Remove ignore for one test 2023-12-05 06:20:01 +05:30
Prabhpreet Dua
4d7c030476 Ignore existing tests 2023-12-05 06:15:19 +05:30
Prabhpreet Dua
95f22e98ac Try all tests running in serial for protocol 2023-12-05 06:08:59 +05:30
Prabhpreet Dua
b0dada7613 cargo fmt run 2023-12-05 06:00:11 +05:30
Prabhpreet Dua
e54ea1feaa Add parallel test flag, and remove .orig files 2023-12-05 05:58:13 +05:30
Prabhpreet Dua
0fd09c908b Merge branch 'main' into feat/cookie-mechanism 2023-12-03 21:06:14 +05:30
Prabhpreet Dua
36628a46d6 Serial test execution for cookie exchange 2023-12-03 20:54:26 +05:30
Prabhpreet Dua
2904c90d4b Cargo fmt 2023-12-02 19:38:10 +05:30
Prabhpreet Dua
f0dbe2bb54 Merge branch 'main' into feat/cookie-mechanism 2023-12-02 19:36:04 +05:30
Prabhpreet Dua
e2792272e8 Merge branch 'main' into feat/cookie-mechanism 2023-11-29 21:01:32 +05:30
Prabhpreet Dua
1c65e67be2 Fix cargo fmt lint 2023-11-29 05:10:18 +05:30
Prabhpreet Dua
2ae3d6c271 Merge branch 'main' into feat/cookie-mechanism, ignore incoming messages when initiator is under load, whitepaper updates 2023-11-29 05:06:22 +05:30
Prabhpreet Dua
96d4f0b545 Merge branch 'main' into feat/cookie-mechanism 2023-11-19 12:06:33 +05:30
Prabhpreet Dua
ad947a755c Add test with initiator under load, add section to WP 2023-11-19 11:30:12 +05:30
Prabhpreet Dua
35f9c3bf68 Cookie reply processing continued 2023-11-14 21:42:48 +05:30
Prabhpreet Dua
ff44002b7c Cookie reply handler test setup 2023-11-13 20:57:34 +05:30
Prabhpreet Dua
0ce8304c69 Merge branch 'main' into feat/cookie-mechanism 2023-11-13 14:55:10 +05:30
Prabhpreet Dua
5f91feb3a4 Checkpoint- cookie reply handler 2023-11-13 14:47:39 +05:30
Prabhpreet Dua
baebb8632f Merge branch 'main' into feat/cookie-mechanism 2023-11-09 19:33:35 +05:30
Prabhpreet Dua
cb97f90581 cookie mechanism trigger- send cookie reply message under load if cookie not valid 2023-11-09 19:28:02 +05:30
Prabhpreet Dua
3d13caa37b cookie mechanism trigger- under load condition and mio event length 2023-11-07 20:49:43 +05:30
Prabhpreet Dua
54ecfaddcf Whitepaper- add cookie mechanism description draft 2023-10-25 01:43:08 +05:30
83 changed files with 6850 additions and 1285 deletions

.ci/gen-workflow-files.nu

@@ -1,6 +1,6 @@
#!/usr/bin/env nu
use log *
use std log
# cd to git root
cd (git rev-parse --show-toplevel)
@@ -116,6 +116,7 @@ for system in ($targets | columns) {
} }
| filter {|it| $it.needed}
| each {|it| job-id $system $it.name}
| sort
)
mut new_job = {
@@ -197,4 +198,4 @@ $cachix_workflow | to yaml | save --force .github/workflows/nix.yaml
$release_workflow | to yaml | save --force .github/workflows/release.yaml
log info "prettify generated yaml"
prettier -w .github/workflows/
prettier -w .github/workflows/

.devcontainer/Dockerfile (new file)

@@ -0,0 +1 @@
FROM ghcr.io/xtruder/nix-devcontainer:v1

.devcontainer/devcontainer.json (new file)

@@ -0,0 +1,33 @@
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or the definition README at
// https://github.com/microsoft/vscode-dev-containers/tree/master/containers/docker-existing-dockerfile
{
"name": "devcontainer-project",
"dockerFile": "Dockerfile",
"context": "${localWorkspaceFolder}",
"build": {
"args": {
"USER_UID": "${localEnv:USER_UID}",
"USER_GID": "${localEnv:USER_GID}"
}
},
// run arguments passed to docker
"runArgs": ["--security-opt", "label=disable"],
// disable command overriding and updating remote user ID
"overrideCommand": false,
"userEnvProbe": "loginShell",
"updateRemoteUserUID": false,
// build development environment on creation, make sure you already have shell.nix
"onCreateCommand": "nix develop",
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [],
"customizations": {
"vscode": {
"extensions": ["rust-lang.rust-analyzer", "tamasfe.even-better-toml"]
}
}
}

.github/codecov.yml (vendored, new file)

@@ -0,0 +1,14 @@
codecov:
branch: main
coverage:
status:
project:
default:
# basic
target: auto #default
threshold: 5
base: auto
if_ci_failed: error #success, failure, error, ignore
informational: false
only_pulls: true
patch: off

.github/dependabot.yml (vendored, new file)

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "cargo"
directory: "/"
schedule:
interval: "daily"

.github/workflows/dependent-issues.yml (vendored, new file)

@@ -0,0 +1,63 @@
name: Dependent Issues
on:
issues:
types:
- opened
- edited
- closed
- reopened
pull_request_target:
types:
- opened
- edited
- closed
- reopened
# Makes sure we always add status check for PRs. Useful only if
# this action is required to pass before merging. Otherwise, it
# can be removed.
- synchronize
# Schedule a daily check. Useful if you reference cross-repository
# issues or pull requests. Otherwise, it can be removed.
schedule:
- cron: "0 0 * * *"
jobs:
check:
permissions:
issues: write
pull-requests: write
statuses: write
runs-on: ubuntu-latest
steps:
- uses: z0al/dependent-issues@v1
env:
# (Required) The token to use to make API calls to GitHub.
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# (Optional) The token to use to make API calls to GitHub for remote repos.
GITHUB_READ_TOKEN: ${{ secrets.GITHUB_READ_TOKEN }}
with:
# (Optional) The label to use to mark dependent issues
label: dependent
# (Optional) Enable checking for dependencies in issues.
# Enable by setting the value to "on". Default "off"
check_issues: off
# (Optional) Ignore dependabot PRs.
# Enable by setting the value to "on". Default "off"
ignore_dependabot: off
# (Optional) A comma-separated list of keywords. Default
# "depends on, blocked by"
keywords: depends on, blocked by
# (Optional) A custom comment body. It supports `{{ dependencies }}` token.
comment: >
This PR/issue depends on:
{{ dependencies }}
By **[Dependent Issues](https://github.com/z0al/dependent-issues)** (🤖). Happy coding!

.github/workflows/nix.yaml

@@ -95,6 +95,7 @@ jobs:
- macos-13
needs:
- x86_64-darwin---rosenpass
- x86_64-darwin---rp
- x86_64-darwin---rosenpass-oci-image
steps:
- uses: actions/checkout@v3
@@ -123,6 +124,22 @@ jobs:
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-darwin.rosenpass --print-build-logs
x86_64-darwin---rp:
name: Build x86_64-darwin.rp
runs-on:
- macos-13
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-darwin.rp --print-build-logs
x86_64-darwin---rosenpass-oci-image:
name: Build x86_64-darwin.rosenpass-oci-image
runs-on:
@@ -210,8 +227,9 @@ jobs:
runs-on:
- ubuntu-latest
needs:
- x86_64-linux---rosenpass-static-oci-image
- x86_64-linux---rosenpass-static
- x86_64-linux---rosenpass-static-oci-image
- x86_64-linux---rp-static
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
@@ -230,6 +248,7 @@ jobs:
needs:
- aarch64-linux---rosenpass-oci-image
- aarch64-linux---rosenpass
- aarch64-linux---rp
steps:
- run: |
DEBIAN_FRONTEND=noninteractive
@@ -283,6 +302,27 @@ jobs:
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.aarch64-linux.rosenpass --print-build-logs
aarch64-linux---rp:
name: Build aarch64-linux.rp
runs-on:
- ubuntu-latest
needs: []
steps:
- run: |
DEBIAN_FRONTEND=noninteractive
sudo apt-get update -q -y && sudo apt-get install -q -y qemu-system-aarch64 qemu-efi binfmt-support qemu-user-static
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
extra_nix_config: |
system = aarch64-linux
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.aarch64-linux.rp --print-build-logs
x86_64-linux---rosenpass-oci-image:
name: Build x86_64-linux.rosenpass-oci-image
runs-on:
@@ -338,6 +378,22 @@ jobs:
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.rosenpass-static --print-build-logs
x86_64-linux---rp-static:
name: Build x86_64-linux.rp-static
runs-on:
- ubuntu-latest
needs: []
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v22
with:
nix_path: nixpkgs=channel:nixos-unstable
- uses: cachix/cachix-action@v12
with:
name: rosenpass
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
- name: Build
run: nix build .#packages.x86_64-linux.rp-static --print-build-logs
x86_64-linux---rosenpass-static-oci-image:
name: Build x86_64-linux.rosenpass-static-oci-image
runs-on:

.github/workflows/qc.yaml

@@ -46,12 +46,22 @@ jobs:
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
# liboqs requires quite a lot of stack memory, thus we adjust
# the default stack size picked for new threads (which is used
# by `cargo test`) to be _big enough_. Setting it to 8 MiB
- run: RUST_MIN_STACK=8388608 cargo bench --no-run --workspace
- run: RUST_MIN_STACK=8388608 cargo bench --workspace --exclude rosenpass-fuzzing
mandoc:
name: mandoc
runs-on: ubuntu-latest
steps:
- name: Install mandoc
run: sudo apt-get install -y mandoc
- uses: actions/checkout@v3
- name: Check rosenpass.1
run: doc/check.sh doc/rosenpass.1
- name: Check rp.1
run: doc/check.sh doc/rp.1
cargo-audit:
runs-on: ubuntu-latest
@@ -75,8 +85,6 @@ jobs:
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- run: rustup component add clippy
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
- uses: actions-rs/clippy-check@v1
with:
token: ${{ secrets.GITHUB_TOKEN }}
@@ -96,8 +104,6 @@ jobs:
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- run: rustup component add clippy
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
# `--no-deps` used as a workaround for a rust compiler bug. See:
# - https://github.com/rosenpass/rosenpass/issues/62
# - https://github.com/rust-lang/rust/issues/108378
@@ -116,8 +122,6 @@ jobs:
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
# liboqs requires quite a lot of stack memory, thus we adjust
# the default stack size picked for new threads (which is used
# by `cargo test`) to be _big enough_. Setting it to 8 MiB
@@ -159,8 +163,6 @@ jobs:
~/.cargo/git/db/
target/
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install libsodium
run: sudo apt-get install -y libsodium-dev
- name: Install nightly toolchain
run: |
rustup toolchain install nightly
@@ -174,5 +176,27 @@ jobs:
cargo fuzz run fuzz_handle_msg -- -max_total_time=5
ulimit -s 8192000 && RUST_MIN_STACK=33554432000 && cargo fuzz run fuzz_kyber_encaps -- -max_total_time=5
cargo fuzz run fuzz_mceliece_encaps -- -max_total_time=5
cargo fuzz run fuzz_box_secret_alloc -- -max_total_time=5
cargo fuzz run fuzz_vec_secret_alloc -- -max_total_time=5
cargo fuzz run fuzz_box_secret_alloc_malloc -- -max_total_time=5
cargo fuzz run fuzz_box_secret_alloc_memfdsec -- -max_total_time=5
cargo fuzz run fuzz_box_secret_alloc_memfdsec_mallocfb -- -max_total_time=5
cargo fuzz run fuzz_vec_secret_alloc_malloc -- -max_total_time=5
cargo fuzz run fuzz_vec_secret_alloc_memfdsec -- -max_total_time=5
cargo fuzz run fuzz_vec_secret_alloc_memfdsec_mallocfb -- -max_total_time=5
codecov:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: rustup component add llvm-tools-preview
- run: |
cargo install cargo-llvm-cov || true
cargo llvm-cov --lcov --output-path coverage.lcov
# If using tarpaulin
#- run: cargo install cargo-tarpaulin
#- run: cargo tarpaulin --out Xml
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v4.0.1
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage.lcov
verbose: true

Cargo.lock (generated)
File diff suppressed because it is too large

Cargo.toml

@@ -11,11 +11,11 @@ members = [
"to",
"fuzz",
"secret-memory",
"rp",
"wireguard-broker",
]
default-members = [
"rosenpass"
]
default-members = ["rosenpass", "rp", "wireguard-broker"]
[workspace.metadata.release]
# ensure that adding `--package` as argument to `cargo release` still creates version tags in the form of `vx.y.z`
@@ -30,32 +30,53 @@ rosenpass-ciphers = { path = "ciphers" }
rosenpass-to = { path = "to" }
rosenpass-secret-memory = { path = "secret-memory" }
rosenpass-oqs = { path = "oqs" }
criterion = "0.4.0"
test_bin = "0.4.0"
libfuzzer-sys = "0.4"
stacker = "0.1.15"
rosenpass-wireguard-broker = { path = "wireguard-broker" }
doc-comment = "0.3.3"
base64 = "0.21.5"
zeroize = "1.7.0"
memoffset = "0.9.0"
thiserror = "1.0.50"
paste = "1.0.14"
env_logger = "0.10.1"
base64ct = {version = "1.6.0", default-features=false}
zeroize = "1.8.1"
memoffset = "0.9.1"
thiserror = "1.0.61"
paste = "1.0.15"
env_logger = "0.10.2"
toml = "0.7.8"
static_assertions = "1.1.0"
allocator-api2 = "0.2.14"
allocator-api2-tests = "0.2.14"
memsec = "0.6.3"
memsec = { git="https://github.com/rosenpass/memsec.git" ,rev="aceb9baee8aec6844125bd6612f92e9a281373df", features = [ "alloc_ext", ] }
rand = "0.8.5"
typenum = "1.17.0"
log = { version = "0.4.20" }
clap = { version = "4.4.10", features = ["derive"] }
serde = { version = "1.0.193", features = ["derive"] }
log = { version = "0.4.21" }
clap = { version = "4.5.7", features = ["derive"] }
serde = { version = "1.0.203", features = ["derive"] }
arbitrary = { version = "1.3.2", features = ["derive"] }
anyhow = { version = "1.0.75", features = ["backtrace", "std"] }
mio = { version = "0.8.9", features = ["net", "os-poll"] }
oqs-sys = { version = "0.8", default-features = false, features = ['classic_mceliece', 'kyber'] }
anyhow = { version = "1.0.86", features = ["backtrace", "std"] }
mio = { version = "0.8.11", features = ["net", "os-poll"] }
oqs-sys = { version = "0.9.1", default-features = false, features = [
'classic_mceliece',
'kyber',
] }
blake2 = "0.10.6"
chacha20poly1305 = { version = "0.10.1", default-features = false, features = [ "std", "heapless" ] }
zerocopy = { version = "0.7.32", features = ["derive"] }
chacha20poly1305 = { version = "0.10.1", default-features = false, features = [
"std",
"heapless",
] }
zerocopy = { version = "0.7.34", features = ["derive"] }
home = "0.5.9"
derive_builder = "0.20.0"
tokio = { version = "1.38", features = ["macros", "rt-multi-thread"] }
postcard= {version = "1.0.8", features = ["alloc"]}
#Dev dependencies
serial_test = "3.1.1"
tempfile = "3"
stacker = "0.1.15"
libfuzzer-sys = "0.4"
test_bin = "0.4.0"
criterion = "0.4.0"
allocator-api2-tests = "0.2.15"
procspawn = {version = "1.0.0", features= ["test-support"]}
#Broker dependencies (might need cleanup or changes)
wireguard-uapi = "3.0.0"
command-fds = "0.2.3"
rustix = { version = "0.38.27", features = ["net"] }

(symbolic protocol model file)

@@ -3,33 +3,12 @@
#define SESSION_START_EVENTS 0
#define RANDOMIZED_CALL_IDS 0
#include "config.mpv"
#include "prelude/basic.mpv"
#include "crypto/key.mpv"
#include "crypto/kem.mpv"
#include "rosenpass/oracles.mpv"
nounif v:seed_prec; attacker(prepare_seed(trusted_seed( v )))/6217[hypothesis].
nounif v:seed; attacker(prepare_seed( v ))/6216[hypothesis].
nounif v:seed; attacker(rng_kem_sk( v ))/6215[hypothesis].
nounif v:seed; attacker(rng_key( v ))/6214[hypothesis].
nounif v:key_prec; attacker(prepare_key(trusted_key( v )))/6213[hypothesis].
nounif v:kem_sk_prec; attacker(prepare_kem_sk(trusted_kem_sk( v )))/6212[hypothesis].
nounif v:key; attacker(prepare_key( v ))/6211[hypothesis].
nounif v:kem_sk; attacker(prepare_kem_sk( v ))/6210[hypothesis].
nounif Spk:kem_sk_tmpl;
attacker(Creveal_kem_pk(Spk))/6110[conclusion].
nounif sid:SessionId, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, Seski:seed_tmpl, Ssptr:seed_tmpl;
attacker(Cinitiator( *sid, *Ssskm, *Spsk, *Sspkt, *Seski, *Ssptr ))/6109[conclusion].
nounif sid:SessionId, biscuit_no:Atom, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, Septi:seed_tmpl, Sspti:seed_tmpl, ih:InitHello_t;
attacker(Cinit_hello( *sid, *biscuit_no, *Ssskm, *Spsk, *Sspkt, *Septi, *Sspti, *ih ))/6108[conclusion].
nounif rh:RespHello_t;
attacker(Cresp_hello( *rh ))/6107[conclusion].
nounif Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, ic:InitConf_t;
attacker(Cinit_conf( *Ssskm, *Spsk, *Sspkt, *ic ))/6106[conclusion].
let main = rosenpass_main.
@lemma "state coherence, initiator: Initiator accepting a RespHello message implies they also generated the associated InitHello message"

(symbolic protocol model file)

@@ -10,26 +10,6 @@
let main = rosenpass_main.
nounif v:seed_prec; attacker(prepare_seed(trusted_seed( v )))/6217[hypothesis].
nounif v:seed; attacker(prepare_seed( v ))/6216[hypothesis].
nounif v:seed; attacker(rng_kem_sk( v ))/6215[hypothesis].
nounif v:seed; attacker(rng_key( v ))/6214[hypothesis].
nounif v:key_prec; attacker(prepare_key(trusted_key( v )))/6213[hypothesis].
nounif v:kem_sk_prec; attacker(prepare_kem_sk(trusted_kem_sk( v )))/6212[hypothesis].
nounif v:key; attacker(prepare_key( v ))/6211[hypothesis].
nounif v:kem_sk; attacker(prepare_kem_sk( v ))/6210[hypothesis].
nounif Spk:kem_sk_tmpl;
attacker(Creveal_kem_pk(Spk))/6110[conclusion].
nounif sid:SessionId, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, Seski:seed_tmpl, Ssptr:seed_tmpl;
attacker(Cinitiator( *sid, *Ssskm, *Spsk, *Sspkt, *Seski, *Ssptr ))/6109[conclusion].
nounif sid:SessionId, biscuit_no:Atom, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, Septi:seed_tmpl, Sspti:seed_tmpl, ih:InitHello_t;
attacker(Cinit_hello( *sid, *biscuit_no, *Ssskm, *Spsk, *Sspkt, *Septi, *Sspti, *ih ))/6108[conclusion].
nounif rh:RespHello_t;
attacker(Cresp_hello( *rh ))/6107[conclusion].
nounif Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, ic:InitConf_t;
attacker(Cinit_conf( *Ssskm, *Spsk, *Sspkt, *ic ))/6106[conclusion].
@lemma "non-interruptability: Adv cannot prevent a genuine InitHello message from being accepted"
lemma ih:InitHello_t, psk:key, sski:kem_sk, sskr:kem_sk;
event(IHRjct(ih, psk, sskr, kem_pub(sski)))

(symbolic protocol model file)

@@ -88,18 +88,6 @@ set verboseCompleted=VERBOSE.
#define SES_EV(...)
#endif
#if COOKIE_EVENTS
#define COOKIE_EV(...) __VA_ARGS__
#else
#define COOKIE_EV(...)
#endif
#if KEM_EVENTS
#define KEM_EV(...) __VA_ARGS__
#else
#define KEM_EV(...)
#endif
(* TODO: Authentication timing properties *)
(* TODO: Proof that every adversary submitted package is equivalent to one generated by the proper algorithm using different coins. This probably requires introducing an oracle that extracts the coins used and explicitly adding the notion of coins used for Packet->Packet steps and an inductive RNG notion. *)

(symbolic protocol model file)

@@ -41,32 +41,23 @@ restriction s:seed, p1:Atom, p2:Atom, ad1:Atom, ad2:Atom;
event(ConsumeSeed(p1, s, ad1)) && event(ConsumeSeed(p2, s, ad2))
==> p1 = p2 && ad1 = ad2.
letfun create_mac2(k:key, msg:bits) = prf(k,msg).
#include "rosenpass/responder.macro"
fun Cinit_conf(kem_sk_tmpl, key_tmpl, kem_pk_tmpl, InitConf_t) : Atom [data].
CK_EV( event OskOinit_conf(key, key). )
MTX_EV( event ICRjct(InitConf_t, key, kem_sk, kem_pk). )
SES_EV( event ResponderSession(InitConf_t, key). )
KEM_EV(event Oinit_conf_KemUse(SessionId, SessionId, Atom).)
#ifdef KEM_EVENTS
restriction sidi:SessionId, sidr:SessionId, ad1:Atom, ad2:Atom;
event(Oinit_conf_KemUse(sidi, sidr, ad1)) && event(Oinit_conf_KemUse(sidi, sidr, ad2))
==> ad1 = ad2.
#endif
event ConsumeBiscuit(Atom, kem_sk, kem_pk, Atom).
fun Ccookie(key, bits) : Atom[data].
let Oinit_conf_inner(Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, ic:InitConf_t, call:Atom) =
let Oinit_conf() =
in(C, Cinit_conf(Ssskm, Spsk, Sspkt, ic));
#if RANDOMIZED_CALL_IDS
new call:Atom;
#else
call <- Cinit_conf(Ssskm, Spsk, Sspkt, ic);
#endif
SETUP_HANDSHAKE_STATE()
eski <- kem_sk0;
epki <- kem_pk0;
let try_ = (
let InitConf(sidi, sidr, biscuit, auth) = ic in
KEM_EV(event Oinit_conf_KemUse(sidi, sidr, call);)
INITCONF_CONSUME()
event ConsumeBiscuit(biscuit_no, sskm, spkt, call);
CK_EV( event OskOinit_conf(ck_rh, osk); )
@@ -81,21 +72,11 @@ let Oinit_conf_inner(Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, ic:Ini
0
#endif
).
let Oinit_conf() =
in(C, Cinit_conf(Ssskm, Spsk, Sspkt, ic));
#if RANDOMIZED_CALL_IDS
new call:Atom;
#else
call <- Cinit_conf(Ssskm, Spsk, Sspkt, ic);
#endif
Oinit_conf_inner(Ssskm, Spsk, Sspkt, ic, call).
restriction biscuit_no:Atom, sskm:kem_sk, spkr:kem_pk, ad1:Atom, ad2:Atom;
event(ConsumeBiscuit(biscuit_no, sskm, spkr, ad1)) && event(ConsumeBiscuit(biscuit_no, sskm, spkr, ad2))
==> ad1 = ad2.
// TODO: Restriction biscuit no invalidation
#include "rosenpass/initiator.macro"
@@ -104,56 +85,27 @@ CK_EV( event OskOresp_hello(key, key, key). )
MTX_EV( event RHRjct(RespHello_t, key, kem_sk, kem_pk). )
MTX_EV( event ICSent(RespHello_t, InitConf_t, key, kem_sk, kem_pk). )
SES_EV( event InitiatorSession(RespHello_t, key). )
KEM_EV(event Oresp_hello_KemUse(SessionId, SessionId, Atom).)
#ifdef KEM_EVENTS
restriction sidi:SessionId, sidr:SessionId, ad1:Atom, ad2:Atom;
event(Oresp_hello_KemUse(sidi, sidr, ad1)) && event(Oresp_hello_KemUse(sidi, sidr, ad2))
==> ad1 = ad2.
#endif
#ifdef COOKIE_EVENTS
COOKIE_EVENTS(Oresp_hello)
#endif
let Oresp_hello(HS_DECL_ARGS, C_in:channel, call:Atom) =
in(C_in, Cresp_hello(RespHello(sidr, =sidi, ecti, scti, biscuit, auth)));
in(C_in, mac2_key:key);
let Oresp_hello(HS_DECL_ARGS) =
in(C, Cresp_hello(RespHello(sidr, =sidi, ecti, scti, biscuit, auth)));
rh <- RespHello(sidr, sidi, ecti, scti, biscuit, auth);
#ifdef COOKIE_EVENTS
msg <- RH2b(rh);
COOKIE_PROCESS(Oresp_hello,
#endif
/* try */ let ic = (
ck_ini <- ck;
KEM_EV(event Oresp_hello_KemUse(sidi, sidr, call);)
RESPHELLO_CONSUME()
ck_ih <- ck;
INITCONF_PRODUCE()
CK_EV (event OskOresp_hello(ck_ini, ck_ih, osk); ) // TODO: Queries testing that there is no duplication
MTX_EV( event ICSent(rh, ic, psk, sski, spkr); )
SES_EV( event InitiatorSession(rh, osk); )
ic
/* success */ ) in (
icbits <- IC2b(ic);
mac <- create_mac(spkt, icbits);
mac2 <- create_mac2(mac2_key, mac_envelope2b(mac));
out(C_in, ic);
out(C_in, mac);
out(C_in, mac2)
/* fail */ ) else (
#if MESSAGE_TRANSMISSION_EVENTS
event RHRjct(rh, psk, sski, spkr)
#else
0
#endif
)
#ifdef COOKIE_EVENTS
)
/* try */ let ic = (
ck_ini <- ck;
RESPHELLO_CONSUME()
ck_ih <- ck;
INITCONF_PRODUCE()
CK_EV (event OskOresp_hello(ck_ini, ck_ih, osk); ) // TODO: Queries testing that there is no duplication
MTX_EV( event ICSent(rh, ic, psk, sski, spkr); )
SES_EV( event InitiatorSession(rh, osk); )
ic
/* success */ ) in (
out(C, ic)
/* fail */ ) else (
#if MESSAGE_TRANSMISSION_EVENTS
event RHRjct(rh, psk, sski, spkr)
#else
.
0
#endif
).
// TODO: Restriction: Biscuit no invalidation
@@ -164,33 +116,24 @@ MTX_EV( event IHRjct(InitHello_t, key, kem_sk, kem_pk). )
MTX_EV( event RHSent(InitHello_t, RespHello_t, key, kem_sk, kem_pk). )
event ConsumeSidr(SessionId, Atom).
event ConsumeBn(Atom, kem_sk, kem_pk, Atom).
KEM_EV(event Oinit_hello_KemUse(SessionId, SessionId, Atom).)
#ifdef KEM_EVENTS
restriction sidi:SessionId, sidr:SessionId, ad1:Atom, ad2:Atom;
event(Oinit_hello_KemUse(sidi, sidr, ad1)) && event(Oinit_hello_KemUse(sidi, sidr, ad2))
==> ad1 = ad2.
let Oinit_hello() =
in(C, Cinit_hello(sidr, biscuit_no, Ssskm, Spsk, Sspkt, Septi, Sspti, ih));
#if RANDOMIZED_CALL_IDS
new call:Atom;
#else
call <- Cinit_hello(sidr, biscuit_no, Ssskm, Spsk, Sspkt, Septi, Sspti, ih);
#endif
let Oinit_hello_inner(sidm:SessionId, biscuit_no:Atom, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt: kem_sk_tmpl, Septi: seed_tmpl, Sspti: seed_tmpl, ih: InitHello_t, mac2_key:key, C_out:channel, call:Atom) =
// TODO: This is ugly
let InitHello(sidi, epki, sctr, pidiC, auth) = ih in
SETUP_HANDSHAKE_STATE()
eski <- kem_sk0;
event ConsumeBn(biscuit_no, sskm, spkt, call);
event ConsumeSidr(sidr, call);
epti <- rng_key(setup_seed(Septi)); // RHR4
spti <- rng_key(setup_seed(Sspti)); // RHR5
event ConsumeBn(biscuit_no, sskm, spkt, call);
event ConsumeSidr(sidr, call);
event ConsumeSeed(Epti, setup_seed(Septi), call);
event ConsumeSeed(Spti, setup_seed(Sspti), call);
// out(C_out, spkt);
let rh = (
KEM_EV(event Oinit_hello_KemUse(sidi, sidr, call);)
INITHELLO_CONSUME()
ck_ini <- ck;
RESPHELLO_PRODUCE()
@@ -198,14 +141,7 @@ let Oinit_hello_inner(sidm:SessionId, biscuit_no:Atom, Ssskm:kem_sk_tmpl, Spsk:k
MTX_EV( event RHSent(ih, rh, psk, sskr, spki); )
rh
/* success */ ) in (
rhbits <- RH2b(rh);
mac <- create_mac(spkt, rhbits);
out(C_out, rh);
out(C_out, mac);
mac2 <- create_mac2(mac2_key, mac_envelope2b(mac));
out(C_out, mac2)
out(C, rh)
/* fail */ ) else (
#if MESSAGE_TRANSMISSION_EVENTS
event IHRjct(ih, psk, sskr, spki)
@@ -214,18 +150,6 @@ let Oinit_hello_inner(sidm:SessionId, biscuit_no:Atom, Ssskm:kem_sk_tmpl, Spsk:k
#endif
).
let Oinit_hello() =
in(C, Cinit_hello(sidr, biscuit_no, Ssskm, Spsk, Sspkt, Septi, Sspti, ih));
in(C, mac2_key:key);
#if RANDOMIZED_CALL_IDS
new call:Atom;
#else
call <- Cinit_hello(sidr, biscuit_no, Ssskm, Spsk, Sspkt, Septi, Sspti, ih);
#endif
Oinit_hello_inner(sidr, biscuit_no, Ssskm, Spsk, Sspkt, Septi, Sspti, ih, mac2_key, C, call).
restriction sid:SessionId, ad1:Atom, ad2:Atom;
event(ConsumeSidr(sid, ad1)) && event(ConsumeSidr(sid, ad2))
==> ad1 = ad2.
@@ -242,55 +166,27 @@ fun Cinitiator(SessionId, kem_sk_tmpl, key_tmpl, kem_pk_tmpl, seed_tmpl, seed_tm
CK_EV( event OskOinitiator_ck(key). )
CK_EV( event OskOinitiator(key, key, kem_sk, kem_pk, key). )
MTX_EV( event IHSent(InitHello_t, key, kem_sk, kem_pk). )
KEM_EV(event Oinitiator_inner_KemUse(SessionId, SessionId, Atom).)
#ifdef KEM_EVENTS
restriction sidi:SessionId, sidr:SessionId, ad1:Atom, ad2:Atom;
event(Oinitiator_inner_KemUse(sidi, sidr, ad1)) && event(Oinitiator_inner_KemUse(sidi, sidr, ad2))
==> ad1 = ad2.
#endif
event ConsumeSidi(SessionId, Atom).
let Oinitiator_inner(sidi: SessionId, Ssskm: kem_sk_tmpl, Spsk: key_tmpl, Sspkt: kem_sk_tmpl, Seski: seed_tmpl, Ssptr: seed_tmpl, last_cookie:key, C_out:channel, call:Atom) =
SETUP_HANDSHAKE_STATE()
sidr <- sid0;
KEM_EV(event Oinitiator_inner_KemUse(sidi, sidr, call);)
RNG_KEM_PAIR(eski, epki, Seski) // IHI3
sptr <- rng_key(setup_seed(Ssptr)); // IHI5
event ConsumeSidi(sidi, call);
event ConsumeSeed(Sptr, setup_seed(Ssptr), call);
event ConsumeSeed(Eski, setup_seed(Seski), call);
INITHELLO_PRODUCE()
CK_EV( event OskOinitiator_ck(ck); )
CK_EV( event OskOinitiator(ck, psk, sski, spkr, sptr); )
MTX_EV( event IHSent(ih, psk, sski, spkr); )
out(C_out, ih);
ihbits <- IH2b(ih);
mac <- create_mac(spkt, ihbits);
out(C_out, mac);
mac2 <- create_mac2(last_cookie, mac_envelope2b(mac));
out(C_out, mac2);
Oresp_hello(HS_PASS_ARGS, C_out, call).
let Oinitiator() =
in(C, Cinitiator(sidi, Ssskm, Spsk, Sspkt, Seski, Ssptr));
#if RANDOMIZED_CALL_IDS
new call:Atom;
#else
call <- Cinitiator(sidi, Ssskm, Spsk, Sspkt, Seski, Ssptr);
#endif
in(C, last_cookie:key);
Oinitiator_inner(sidi, Ssskm, Spsk, Sspkt, Seski, Ssptr, last_cookie, C, call).
#if RANDOMIZED_CALL_IDS
new call:Atom;
#else
call <- Cinitiator(sidi, Ssskm, Spsk, Sspkt, Seski, Ssptr);
#endif
SETUP_HANDSHAKE_STATE()
RNG_KEM_PAIR(eski, epki, Seski) // IHI3
sidr <- sid0;
sptr <- rng_key(setup_seed(Ssptr)); // IHI5
event ConsumeSidi(sidi, call);
event ConsumeSeed(Sptr, setup_seed(Ssptr), call);
event ConsumeSeed(Eski, setup_seed(Seski), call);
INITHELLO_PRODUCE()
CK_EV( event OskOinitiator_ck(ck); )
CK_EV( event OskOinitiator(ck, psk, sski, spkr, sptr); )
MTX_EV( event IHSent(ih, psk, sski, spkr); )
out(C, ih);
Oresp_hello(HS_PASS_ARGS).
restriction sid:SessionId, ad1:Atom, ad2:Atom;
event(ConsumeSidi(sid, ad1)) && event(ConsumeSidi(sid, ad2))
@@ -311,3 +207,21 @@ let rosenpass_main() = 0
| REP(RESPONDER_BOUND, Oinit_hello)
| REP(RESPONDER_BOUND, Oinit_conf).
nounif v:seed_prec; attacker(prepare_seed(trusted_seed( v )))/6217[hypothesis].
nounif v:seed; attacker(prepare_seed( v ))/6216[hypothesis].
nounif v:seed; attacker(rng_kem_sk( v ))/6215[hypothesis].
nounif v:seed; attacker(rng_key( v ))/6214[hypothesis].
nounif v:key_prec; attacker(prepare_key(trusted_key( v )))/6213[hypothesis].
nounif v:kem_sk_prec; attacker(prepare_kem_sk(trusted_kem_sk( v )))/6212[hypothesis].
nounif v:key; attacker(prepare_key( v ))/6211[hypothesis].
nounif v:kem_sk; attacker(prepare_kem_sk( v ))/6210[hypothesis].
nounif Spk:kem_sk_tmpl;
attacker(Creveal_kem_pk(Spk))/6110[conclusion].
nounif sid:SessionId, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, Seski:seed_tmpl, Ssptr:seed_tmpl;
attacker(Cinitiator( *sid, *Ssskm, *Spsk, *Sspkt, *Seski, *Ssptr ))/6109[conclusion].
nounif sid:SessionId, biscuit_no:Atom, Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, Septi:seed_tmpl, Sspti:seed_tmpl, ih:InitHello_t;
attacker(Cinit_hello( *sid, *biscuit_no, *Ssskm, *Spsk, *Sspkt, *Septi, *Sspti, *ih ))/6108[conclusion].
nounif rh:RespHello_t;
attacker(Cresp_hello( *rh ))/6107[conclusion].
nounif Ssskm:kem_sk_tmpl, Spsk:key_tmpl, Sspkt:kem_sk_tmpl, ic:InitConf_t;
attacker(Cinit_conf( *Ssskm, *Spsk, *Sspkt, *ic ))/6106[conclusion].


@@ -2,26 +2,6 @@
#include "crypto/kem.mpv"
#include "rosenpass/handshake_state.mpv"
fun Envelope(
key,
bits
): bits [data].
type mac_envelope_t.
fun mac_envelope(
key,
bits
) : mac_envelope_t.
fun mac_envelope2b(mac_envelope_t) : bits [typeConverter].
letfun create_mac(pk:kem_pk, payload:bits) = mac_envelope(lprf2(MAC, kem_pk2b(pk), payload), payload).
fun mac_envelope_pk_test(mac_envelope_t, kem_pk) : bool
reduc forall pk:kem_pk, b:bits;
mac_envelope_pk_test(mac_envelope(prf(prf(prf(prf(key0,PROTOCOL),MAC),kem_pk2b(pk)),
b), b), pk) = true.
type InitHello_t.
fun InitHello(
SessionId, // sidi
@@ -31,8 +11,6 @@ fun InitHello(
bits // auth
) : InitHello_t [data].
fun IH2b(InitHello_t) : bitstring [typeConverter].
#define INITHELLO_PRODUCE() \
ck <- lprf1(CK_INIT, kem_pk2b(spkr)); /* IHI1 */ \
/* not handled here */ /* IHI2 */ \
@@ -63,9 +41,7 @@ fun RespHello(
bits // auth
) : RespHello_t [data].
fun RH2b(RespHello_t) : bitstring [typeConverter].
#define RESPHELLO_PRODUCE() \
#define RESPHELLO_PRODUCE() \
/* not handled here */ /* RHR1 */ \
MIX2(sid2b(sidr), sid2b(sidi)) /* RHR3 */ \
ENCAPS_AND_MIX(ecti, epki, epti) /* RHR4 */ \
@@ -91,14 +67,13 @@ fun InitConf(
bits // auth
) : InitConf_t [data].
fun IC2b(InitConf_t) : bitstring [typeConverter].
#define INITCONF_PRODUCE() \
MIX2(sid2b(sidi), sid2b(sidr)) /* ICI3 */ \
ENCRYPT_AND_MIX(auth, empty) /* ICI4 */ \
ic <- InitConf(sidi, sidr, biscuit, auth);
#define INITCONF_CONSUME() \
let InitConf(sidi, sidr, biscuit, auth) = ic in \
LOAD_BISCUIT(biscuit_no, biscuit) /* ICR1 */ \
ENCRYPT_AND_MIX(rh_auth, empty) /* ICIR */ \
ck_rh <- ck; /* ---- */ /* TODO: Move into oracles.mpv */ \


@@ -1,4 +1,4 @@
# Rosenpass internal libsodium bindings
# Rosenpass internal cryptographic traits
Rosenpass internal library providing traits for cryptographic primitives.


@@ -33,8 +33,8 @@ pub fn hash<'a>(key: &'a [u8], data: &'a [u8]) -> impl To<[u8], anyhow::Result<(
// out the right way to use the imports while allowing for zeroization.
// An API based on slices might actually be simpler.
let mut tmp = Zeroizing::new([0u8; OUT_LEN]);
let mut tmp = GenericArray::from_mut_slice(tmp.as_mut());
h.finalize_into(&mut tmp);
let tmp = GenericArray::from_mut_slice(tmp.as_mut());
h.finalize_into(tmp);
copy_slice(tmp.as_ref()).to(out);
Ok(())


@@ -21,7 +21,7 @@ pub fn encrypt(
let nonce = GenericArray::from_slice(nonce);
let (ct, mac) = ciphertext.split_at_mut(ciphertext.len() - TAG_LEN);
copy_slice(plaintext).to(ct);
let mac_value = AeadImpl::new_from_slice(key)?.encrypt_in_place_detached(&nonce, ad, ct)?;
let mac_value = AeadImpl::new_from_slice(key)?.encrypt_in_place_detached(nonce, ad, ct)?;
copy_slice(&mac_value[..]).to(mac);
Ok(())
}
@@ -38,6 +38,6 @@ pub fn decrypt(
let (ct, mac) = ciphertext.split_at(ciphertext.len() - TAG_LEN);
let tag = GenericArray::from_slice(mac);
copy_slice(ct).to(plaintext);
AeadImpl::new_from_slice(key)?.decrypt_in_place_detached(&nonce, ad, plaintext, tag)?;
AeadImpl::new_from_slice(key)?.decrypt_in_place_detached(nonce, ad, plaintext, tag)?;
Ok(())
}


@@ -23,7 +23,7 @@ pub fn encrypt(
let (ct, mac) = ct_mac.split_at_mut(ct_mac.len() - TAG_LEN);
copy_slice(nonce).to(n);
copy_slice(plaintext).to(ct);
let mac_value = AeadImpl::new_from_slice(key)?.encrypt_in_place_detached(&nonce, ad, ct)?;
let mac_value = AeadImpl::new_from_slice(key)?.encrypt_in_place_detached(nonce, ad, ct)?;
copy_slice(&mac_value[..]).to(mac);
Ok(())
}
@@ -40,6 +40,6 @@ pub fn decrypt(
let nonce = GenericArray::from_slice(n);
let tag = GenericArray::from_slice(mac);
copy_slice(ct).to(plaintext);
AeadImpl::new_from_slice(key)?.decrypt_in_place_detached(&nonce, ad, plaintext, tag)?;
AeadImpl::new_from_slice(key)?.decrypt_in_place_detached(nonce, ad, plaintext, tag)?;
Ok(())
}


@@ -1,3 +1,18 @@
use core::ptr;
/// Little endian memcmp version of quininer/memsec
/// https://github.com/quininer/memsec/blob/bbc647967ff6d20d6dccf1c85f5d9037fcadd3b0/src/lib.rs#L30
#[inline(never)]
pub unsafe fn memcmp_le(b1: *const u8, b2: *const u8, len: usize) -> i32 {
let mut res = 0;
for i in 0..len {
let diff =
i32::from(ptr::read_volatile(b1.add(i))) - i32::from(ptr::read_volatile(b2.add(i)));
res = (res & (((diff - 1) & !diff) >> 8)) | diff;
}
((res - 1) >> 8) + (res >> 8) + 1
}
/// compares two slices of memory content and returns an integer indicating the relationship between
/// the slices
///
@@ -20,5 +35,5 @@
#[inline]
pub fn compare(a: &[u8], b: &[u8]) -> i32 {
assert!(a.len() == b.len());
unsafe { memsec::memcmp(a.as_ptr(), b.as_ptr(), a.len()) }
unsafe { memcmp_le(a.as_ptr(), b.as_ptr(), a.len()) }
}
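A brief usage sketch of `compare`, assuming it is exported as `rosenpass_constant_time::compare` (the crate path is an assumption). Like the memsec routine it replaces, the comparison treats the slices as little-endian numbers, so the last byte is the most significant, and the result is -1, 0, or 1.

```rust
// Usage sketch; the crate path `rosenpass_constant_time` is assumed.
use rosenpass_constant_time::compare;

fn main() {
    assert_eq!(compare(&[1, 0], &[1, 0]), 0); // equal slices
    assert_eq!(compare(&[1, 0], &[2, 0]), -1); // 1 < 2
    // Little-endian ordering: [2, 0] encodes 2, [1, 1] encodes 257.
    assert_eq!(compare(&[2, 0], &[1, 1]), -1);
    assert_eq!(compare(&[0, 2], &[1, 1]), 1); // 512 > 257
}
```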


@@ -16,8 +16,7 @@
/// see <https://github.com/rosenpass/rosenpass/issues/232>
#[inline]
pub fn memcmp(a: &[u8], b: &[u8]) -> bool {
a.len() == b.len()
&& unsafe { memsec::memeq(a.as_ptr() as *const u8, b.as_ptr() as *const u8, a.len()) }
a.len() == b.len() && unsafe { memsec::memeq(a.as_ptr(), b.as_ptr(), a.len()) }
}
#[cfg(all(test, feature = "constant_time_tests"))]

doc/check.sh Executable file

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# We have to filter this STYLE error out, because it is very platform specific
OUTPUT=$(mandoc -Tlint "$1" | grep --invert-match "STYLE: referenced manual not found")
if [ -z "$OUTPUT" ]
then
exit 0
else
echo "$1 is malformatted, check mandoc -Tlint $1"
echo "$OUTPUT"
exit 1
fi

flake.lock generated

@@ -2,17 +2,15 @@
"nodes": {
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"nixpkgs": ["nixpkgs"],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1699770036,
"narHash": "sha256-bZmI7ytPAYLpyFNgj5xirDkKuAniOkj1xHdv5aIJ5GM=",
"lastModified": 1712298178,
"narHash": "sha256-590fpCPXYAkaAeBz/V91GX4/KGzPObdYtqsTWzT6AhI=",
"owner": "nix-community",
"repo": "fenix",
"rev": "81ab0b4f7ae9ebb57daa0edf119c4891806e4d3a",
"rev": "569b5b5781395da08e7064e825953c548c26af76",
"type": "github"
},
"original": {
@@ -26,11 +24,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1694529238,
"narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
"lastModified": 1710146030,
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
@@ -41,9 +39,7 @@
},
"naersk": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
"nixpkgs": ["nixpkgs"]
},
"locked": {
"lastModified": 1698420672,
@@ -61,16 +57,18 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1698846319,
"narHash": "sha256-4jyW/dqFBVpWFnhl0nvP6EN4lP7/ZqPxYRjl6var0Oc=",
"lastModified": 1712168706,
"narHash": "sha256-XP24tOobf6GGElMd0ux90FEBalUtw6NkBSVh/RlA6ik=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "34bdaaf1f0b7fb6d9091472edc968ff10a8c2857",
"rev": "1487bdea619e4a7a53a4590c475deabb5a9d1bfb",
"type": "github"
},
"original": {
"id": "nixpkgs",
"type": "indirect"
"owner": "NixOS",
"ref": "nixos-23.11",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
@@ -84,11 +82,11 @@
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1699715108,
"narHash": "sha256-yPozsobJU55gj+szgo4Lpcg1lHvGQYAT6Y4MrC80mWE=",
"lastModified": 1712156296,
"narHash": "sha256-St7ZQrkrr5lmQX9wC1ZJAFxL8W7alswnyZk9d1se3Us=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "5fcf5289e726785d20d3aa4d13d90a43ed248e83",
"rev": "8e581ac348e223488622f4d3003cb2bd412bf27e",
"type": "github"
},
"original": {

flake.nix

@@ -1,5 +1,6 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
flake-utils.url = "github:numtide/flake-utils";
# for quicker rust builds
@@ -35,24 +36,6 @@
# normal nixpkgs
pkgs = import nixpkgs {
inherit system;
# TODO remove overlay once a fix for
# https://github.com/NixOS/nixpkgs/issues/216904 got merged
overlays = [
(
final: prev: {
iproute2 = prev.iproute2.overrideAttrs (old:
let
isStatic = prev.stdenv.hostPlatform.isStatic;
in
{
makeFlags = old.makeFlags ++ prev.lib.optional isStatic [
"TC_CONFIG_NO_XT=y"
];
});
}
)
];
};
# parsed Cargo.toml
@@ -89,17 +72,9 @@
result = pkgs.lib.sources.cleanSourceWith { inherit src filter; };
};
# builds a bin path for all dependencies for the `rp` shellscript
rpBinPath = p: with p; lib.makeBinPath [
coreutils
findutils
gawk
wireguard-tools
];
# a function to generate a nix derivation for rosenpass against any
# given set of nixpkgs
rpDerivation = p:
rosenpassDerivation = p:
let
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
@@ -145,12 +120,10 @@
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
makeWrapper # for the rp shellscript
pkg-config # let libsodium-sys-stable find libsodium
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash libsodium ];
buildInputs = with p; [ bash ];
override = x: {
preBuild =
@@ -177,11 +150,111 @@
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
};
preInstall = ''
install -D ${./rp} $out/bin/rp
wrapProgram $out/bin/rp --prefix PATH : "${ rpBinPath p }"
'';
# We want to build for a specific target...
CARGO_BUILD_TARGET = target;
# ... which might require a non-default linker:
"CARGO_TARGET_${shout target}_LINKER" =
let
inherit (p.stdenv) cc;
in
"${cc}/bin/${cc.targetPrefix}cc";
meta = with pkgs.lib;
{
inherit (cargoToml.package) description homepage;
license = with licenses; [ mit asl20 ];
maintainers = [ maintainers.wucke13 ];
platforms = platforms.all;
};
} // (lib.mkIf isStatic {
# otherwise pkg-config tries to link non-existent dynamic libs
# documented here: https://docs.rs/pkg-config/latest/pkg_config/
PKG_CONFIG_ALL_STATIC = true;
# tell rust to build everything statically linked
CARGO_BUILD_RUSTFLAGS = "-C target-feature=+crt-static";
});
# a function to generate a nix derivation for the rp helper against any
# given set of nixpkgs
rpDerivation = p:
let
# whether we want to build a statically linked binary
isStatic = p.targetPlatform.isStatic;
# the rust target of `p`
target = p.rust.toRustTargetSpec p.targetPlatform;
# convert a string to shout case
shout = string: builtins.replaceStrings [ "-" ] [ "_" ] (pkgs.lib.toUpper string);
# suitable Rust toolchain
toolchain = with inputs.fenix.packages.${system}; combine [
stable.cargo
stable.rustc
targets.${target}.stable.rust-std
];
# naersk with a custom toolchain
naersk = pkgs.callPackage inputs.naersk {
cargo = toolchain;
rustc = toolchain;
};
# used to trick the build.rs into believing that CMake was run **again**
fakecmake = pkgs.writeScriptBin "cmake" ''
#! ${pkgs.stdenv.shell} -e
true
'';
in
naersk.buildPackage
{
# metadata and source
name = cargoToml.package.name;
version = cargoToml.package.version;
inherit src;
cargoBuildOptions = x: x ++ [ "-p" "rp" ];
cargoTestOptions = x: x ++ [ "-p" "rp" ];
doCheck = true;
nativeBuildInputs = with pkgs; [
p.stdenv.cc
cmake # for oqs build in the oqs-sys crate
mandoc # for the built-in manual
removeReferencesTo
rustPlatform.bindgenHook # for C-bindings in the crypto libs
];
buildInputs = with p; [ bash ];
override = x: {
preBuild =
# nix defaults to building for aarch64 _without_ the armv8-a crypto
# extensions, but liboqs depends on these
(lib.optionalString (system == "aarch64-linux") ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -march=armv8-a+crypto"
''
);
# fortify is only compatible with dynamic linking
hardeningDisable = lib.optional isStatic "fortify";
};
overrideMain = x: {
# CMake detects that it was served a _foreign_ target dir, and CMake
# would be executed again upon the second build step of naersk.
# By adding our specially optimized CMake version, we reduce the cost
# of recompilation by 99 % while avoiding any CMake errors.
nativeBuildInputs = [ (lib.hiPrio fakecmake) ] ++ x.nativeBuildInputs;
# make sure that libc is linked, under musl this is not the case per
# default
preBuild = (lib.optionalString isStatic ''
NIX_CFLAGS_COMPILE="$NIX_CFLAGS_COMPILE -lc"
'');
};
# We want to build for a specific target...
@@ -223,7 +296,8 @@
rec {
packages = rec {
default = rosenpass;
rosenpass = rpDerivation pkgs;
rosenpass = rosenpassDerivation pkgs;
rp = rpDerivation pkgs;
rosenpass-oci-image = rosenpassOCI "rosenpass";
# derivation for the release
@@ -234,6 +308,10 @@
if pkgs.hostPlatform.isLinux then
packages.rosenpass-static
else packages.rosenpass;
rp =
if pkgs.hostPlatform.isLinux then
packages.rp-static
else packages.rp;
oci-image =
if pkgs.hostPlatform.isLinux then
packages.rosenpass-static-oci-image
@@ -242,14 +320,15 @@
pkgs.runCommandNoCC "lace-result" { }
''
mkdir {bin,$out}
cp ${./.}/rp bin/
tar -cvf $out/rosenpass-${system}-${version}.tar bin/rp \
-C ${package} bin/rosenpass
tar -cvf $out/rosenpass-${system}-${version}.tar \
-C ${package} bin/rosenpass \
-C ${rp} bin/rp
cp ${oci-image} \
$out/rosenpass-oci-image-${system}-${version}.tar.gz
'';
} // (if pkgs.stdenv.isLinux then rec {
rosenpass-static = rpDerivation pkgs.pkgsStatic;
rosenpass-static = rosenpassDerivation pkgs.pkgsStatic;
rp-static = rpDerivation pkgs.pkgsStatic;
rosenpass-static-oci-image = rosenpassOCI "rosenpass-static";
} else { });
}
@@ -336,6 +415,7 @@
cargo-release
clippy
nodePackages.prettier
nushell # for the .ci/gen-workflow-files.nu script
rustfmt
packages.proverif-patched
];


@@ -48,13 +48,37 @@ test = false
doc = false
[[bin]]
name = "fuzz_box_secret_alloc"
path = "fuzz_targets/box_secret_alloc.rs"
name = "fuzz_box_secret_alloc_malloc"
path = "fuzz_targets/box_secret_alloc_malloc.rs"
test = false
doc = false
[[bin]]
name = "fuzz_vec_secret_alloc"
path = "fuzz_targets/vec_secret_alloc.rs"
name = "fuzz_vec_secret_alloc_malloc"
path = "fuzz_targets/vec_secret_alloc_malloc.rs"
test = false
doc = false
[[bin]]
name = "fuzz_box_secret_alloc_memfdsec"
path = "fuzz_targets/box_secret_alloc_memfdsec.rs"
test = false
doc = false
[[bin]]
name = "fuzz_vec_secret_alloc_memfdsec"
path = "fuzz_targets/vec_secret_alloc_memfdsec.rs"
test = false
doc = false
[[bin]]
name = "fuzz_box_secret_alloc_memfdsec_mallocfb"
path = "fuzz_targets/box_secret_alloc_memfdsec_mallocfb.rs"
test = false
doc = false
[[bin]]
name = "fuzz_vec_secret_alloc_memfdsec_mallocfb"
path = "fuzz_targets/vec_secret_alloc_memfdsec_mallocfb.rs"
test = false
doc = false


@@ -0,0 +1,12 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use rosenpass_secret_memory::alloc::secret_box;
use rosenpass_secret_memory::policy::*;
use std::sync::Once;
static ONCE: Once = Once::new();
fuzz_target!(|data: &[u8]| {
ONCE.call_once(secret_policy_use_only_malloc_secrets);
let _ = secret_box(data);
});


@@ -0,0 +1,13 @@
#![no_main]
use libfuzzer_sys::fuzz_target;
use rosenpass_secret_memory::alloc::secret_box;
use rosenpass_secret_memory::policy::*;
use std::sync::Once;
static ONCE: Once = Once::new();
fuzz_target!(|data: &[u8]| {
ONCE.call_once(secret_policy_use_only_memfd_secrets);
let _ = secret_box(data);
});


@@ -2,7 +2,12 @@
use libfuzzer_sys::fuzz_target;
use rosenpass_secret_memory::alloc::secret_box;
use rosenpass_secret_memory::policy::*;
use std::sync::Once;
static ONCE: Once = Once::new();
fuzz_target!(|data: &[u8]| {
ONCE.call_once(secret_policy_try_use_memfd_secrets);
let _ = secret_box(data);
});


@@ -4,11 +4,17 @@ extern crate rosenpass;
use libfuzzer_sys::fuzz_target;
use rosenpass::protocol::CryptoServer;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use rosenpass_secret_memory::policy::*;
use rosenpass_secret_memory::Secret;
use std::sync::Once;
static ONCE: Once = Once::new();
fuzz_target!(|rx_buf: &[u8]| {
let sk = Secret::from_slice(&[0; 13568]);
let pk = Secret::from_slice(&[0; 524160]);
ONCE.call_once(secret_policy_use_only_malloc_secrets);
let sk = Secret::from_slice(&[0; StaticKem::SK_LEN]);
let pk = Secret::from_slice(&[0; StaticKem::PK_LEN]);
let mut cs = CryptoServer::new(sk, pk);
let mut tx_buf = [0; 10240];


@@ -9,12 +9,12 @@ use rosenpass_ciphers::kem::EphemeralKem;
#[derive(arbitrary::Arbitrary, Debug)]
pub struct Input {
pub pk: [u8; 800],
pub pk: [u8; EphemeralKem::PK_LEN],
}
fuzz_target!(|input: Input| {
let mut ciphertext = [0u8; 768];
let mut shared_secret = [0u8; 32];
let mut ciphertext = [0u8; EphemeralKem::CT_LEN];
let mut shared_secret = [0u8; EphemeralKem::SHK_LEN];
EphemeralKem::encaps(&mut shared_secret, &mut ciphertext, &input.pk).unwrap();
});


@@ -7,8 +7,8 @@ use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
fuzz_target!(|input: [u8; StaticKem::PK_LEN]| {
let mut ciphertext = [0u8; 188];
let mut shared_secret = [0u8; 32];
let mut ciphertext = [0u8; StaticKem::CT_LEN];
let mut shared_secret = [0u8; StaticKem::SHK_LEN];
// We expect errors while fuzzing therefore we do not check the result.
let _ = StaticKem::encaps(&mut shared_secret, &mut ciphertext, &input);


@@ -0,0 +1,15 @@
#![no_main]
use std::sync::Once;
use libfuzzer_sys::fuzz_target;
use rosenpass_secret_memory::alloc::secret_vec;
use rosenpass_secret_memory::policy::*;
static ONCE: Once = Once::new();
fuzz_target!(|data: &[u8]| {
ONCE.call_once(secret_policy_use_only_malloc_secrets);
let mut vec = secret_vec();
vec.extend_from_slice(data);
});


@@ -0,0 +1,15 @@
#![no_main]
use std::sync::Once;
use libfuzzer_sys::fuzz_target;
use rosenpass_secret_memory::alloc::secret_vec;
use rosenpass_secret_memory::policy::*;
static ONCE: Once = Once::new();
fuzz_target!(|data: &[u8]| {
ONCE.call_once(secret_policy_use_only_memfd_secrets);
let mut vec = secret_vec();
vec.extend_from_slice(data);
});


@@ -1,9 +1,15 @@
#![no_main]
use std::sync::Once;
use libfuzzer_sys::fuzz_target;
use rosenpass_secret_memory::alloc::secret_vec;
use rosenpass_secret_memory::policy::*;
static ONCE: Once = Once::new();
fuzz_target!(|data: &[u8]| {
ONCE.call_once(secret_policy_try_use_memfd_secrets);
let mut vec = secret_vec();
vec.extend_from_slice(data);
});


@@ -6,6 +6,7 @@ author:
- Benjamin Lipp = Max Planck Institute for Security and Privacy (MPI-SP)
- Wanja Zaeske
- Lisa Schmidt = {Scientific Illustrator \\url{mullana.de}}
- Prabhpreet Dua
abstract: |
Rosenpass is used to create post-quantum-secure VPNs. Rosenpass computes a shared key; WireGuard (WG) [@wg] uses the shared key to establish a secure connection. Rosenpass can also be used without WireGuard, deriving post-quantum-secure symmetric keys for another application. The Rosenpass protocol builds on “Post-quantum WireGuard” (PQWG) [@pqwg] and improves it by using a cookie mechanism to provide security against state disruption attacks.
@@ -218,6 +219,7 @@ The server needs to store the following variables:
* `spkm`
* `biscuit_key` Randomly chosen key used to encrypt biscuits
* `biscuit_ctr` Retransmission protection for biscuits
* `cookie_secret` Randomized secret used to derive cookies sent to peers when under load; this secret changes every 120 seconds
Not mandated per se, but required in practice:
@@ -243,6 +245,7 @@ The initiator stores the following local state for each ongoing handshake:
* `ck` The chaining key
* `eski` The initiator's ephemeral secret key
* `epki` The initiator's ephemeral public key
* `cookie_value` Cookie value sent by the initiator's peer when under load, used to compute the cookie field of outgoing handshake messages to that peer. This value expires 120 seconds after the peer sends it in a CookieReply message
The responder stores no state. While the responder has access to all of the above variables except for `eski`, the responder discards them after generating the RespHello message. Instead, the responder state is contained inside a cookie called a biscuit. This value is returned to the responder inside the InitConf packet. The biscuit consists of:
@@ -428,12 +431,92 @@ The responder code handling InitConf needs to deal with the biscuits and package
ICR5 and ICR6 perform biscuit replay protection using the biscuit number. This is not handled in `load_biscuit()` itself because there is the case that `biscuit_no = biscuit_used` which needs to be dealt with for retransmission handling.
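As a minimal sketch of this check (with illustrative names and `u64` counters standing in for the protocol's biscuit numbers, not the implementation's API):

```rust
/// Outcome of the ICR5/ICR6 biscuit-number check (illustrative sketch).
enum InitConfAction {
    /// Fresh biscuit: continue with ICR6/ICR7 and install the new session.
    Accept,
    /// `biscuit_no == biscuit_used`: InitConf retransmission; skip ICR6/ICR7
    /// and answer with another EmptyData packet (see "Dealing with Packet Loss").
    AckRetransmission,
    /// Stale biscuit number: replay, drop the packet.
    Reject,
}

fn check_biscuit(biscuit_no: u64, biscuit_used: u64) -> InitConfAction {
    use std::cmp::Ordering;
    match biscuit_no.cmp(&biscuit_used) {
        Ordering::Greater => InitConfAction::Accept,
        Ordering::Equal => InitConfAction::AckRetransmission,
        Ordering::Less => InitConfAction::Reject,
    }
}
```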
### Denial of Service Mitigation and Cookies
Rosenpass derives its cookie-based DoS mitigation technique, applied by a responder when receiving InitHello messages, from WireGuard [@wg].
When the responder is under load, it may choose not to process further InitHello handshake messages and instead respond with a cookie reply message (see Figure \ref{img:MessageTypes}).
The sender then uses this cookie to retransmit the message so that the receiver accepts it the next time.
For an initiator under load, Rosenpass ignores all incoming messages.
#### Cookie Reply Message
The cookie reply message is sent by the responder on receiving an InitHello message when under load. It consists of the initiator's `sidi`, a random 24-byte bitstring `nonce`, and a `cookie_encrypted` field obtained by encrypting `cookie_value` as follows:
```pseudorust
cookie_value = lhash("cookie-value", cookie_secret, initiator_host_info)[0..16]
cookie_encrypted = XAEAD(lhash("cookie-key", spkm), nonce, cookie_value, mac_peer)
```
where `cookie_secret` is a secret variable that changes to a random value every two minutes. `initiator_host_info` identifies the initiator host and is implementation-specific. The parameters used to identify the host must be chosen carefully to ensure a unique mapping, especially when IPv4 and IPv6 addresses are used to identify the host (for instance, taking care of IPv6 link-local addresses). `cookie_value` is the output of the above hash operation truncated to 16 bytes. `mac_peer` is the `mac` field of the peer's handshake message to which this message is the reply.
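As an illustration, a minimal sketch of one possible `initiator_host_info` encoding is shown below. It mirrors the `SocketBoundEndpoint` encoding added in this changeset (with the socket index omitted); the function name and buffer layout are illustrative, not mandated by the protocol.

```rust
use std::net::{SocketAddr, SocketAddrV6};

/// Encode a peer address as `initiator_host_info`. IPv4 addresses are mapped
/// to IPv6 so that the same host always yields the same encoding; the scope
/// id keeps link-local IPv6 peers on different interfaces distinct.
fn initiator_host_info(addr: SocketAddr) -> [u8; 22] {
    let v6 = match addr {
        SocketAddr::V4(a) => SocketAddrV6::new(a.ip().to_ipv6_mapped(), a.port(), 0, 0),
        SocketAddr::V6(a) => a,
    };
    let mut buf = [0u8; 22]; // 16-byte address + 2-byte port + 4-byte scope id
    buf[..16].copy_from_slice(&v6.ip().octets());
    buf[16..18].copy_from_slice(&v6.port().to_be_bytes());
    buf[18..].copy_from_slice(&v6.scope_id().to_be_bytes());
    buf
}
```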
#### Envelope `mac` Field
Similar to `mac.1` in WireGuard handshake messages, the `mac` field of a Rosenpass envelope, from a handshake packet sender's point of view, consists of the following:
```pseudorust
mac = lhash("mac", spkt, MAC_WIRE_DATA)[0..16]
```
where `MAC_WIRE_DATA` represents all bytes of the message prior to the `mac` field in the envelope.
If a client receives an invalid `mac` value for any message, it will discard the message.
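A minimal sketch of this acceptance check, assuming the expected value has already been computed as `lhash("mac", spkt, MAC_WIRE_DATA)[0..16]` and using the constant-time `memcmp` helper from this changeset (the crate path is an assumption):

```rust
use rosenpass_constant_time::memcmp;

/// Decide whether to process an incoming envelope, given the `mac` bytes it
/// carries and the expected value computed by the receiver (not shown here).
fn accept_envelope(received_mac: &[u8; 16], expected_mac: &[u8; 16]) -> bool {
    // Constant-time comparison so the check leaks no timing information.
    memcmp(received_mac, expected_mac)
}
```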
#### Envelope cookie field
The initiator, on receiving a CookieReply message, decrypts `cookie_encrypted` and stores the resulting `cookie_value` for the session in `peer.cookie_value` for a limited time (120 seconds). This value is then used to set the `cookie` field of subsequent messages and retransmissions to the responder as follows:
```pseudorust
if peer.cookie_value.is_none() || seconds_since_update(peer.cookie_value) >= 120 {
    cookie.zeroize(); // zeroed-out 16-byte bitstring
} else {
    cookie = lhash("cookie", peer.cookie_value.unwrap(), COOKIE_WIRE_DATA)
}
```
Here, `seconds_since_update(peer.cookie_value)` is the time in seconds elapsed since the last cookie was received, and `COOKIE_WIRE_DATA` denotes all bytes of the retransmitted message prior to the `cookie` field.
The initiator may use an invalid `cookie` value while the responder is not under load; the responder must ignore the value in that case.
When the responder is under load, however, it may reject InitHello messages carrying an invalid `cookie` value and issue a cookie reply message.
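A minimal sketch of the initiator-side cookie bookkeeping described above, assuming a 120-second lifetime; the struct and method names are illustrative:

```rust
use std::time::{Duration, Instant};

const COOKIE_LIFETIME: Duration = Duration::from_secs(120);

/// Cookie value received in a CookieReply, plus the time it was received.
struct StoredCookie {
    value: [u8; 16],
    received_at: Instant,
}

struct PeerCookieState {
    cookie: Option<StoredCookie>,
}

impl PeerCookieState {
    /// Cookie value to feed into `lhash("cookie", ...)` for outgoing
    /// messages, or `None` if no valid cookie is stored, in which case the
    /// `cookie` field is zeroed out.
    fn active_cookie(&self) -> Option<&[u8; 16]> {
        match &self.cookie {
            Some(c) if c.received_at.elapsed() < COOKIE_LIFETIME => Some(&c.value),
            _ => None,
        }
    }
}
```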
### Conditions to trigger DoS Mechanism
This whitepaper does not mandate any specific mechanism to detect responder contention (also referred to as the under-load condition) that triggers use of the cookie mechanism.
For the reference implementation, Rosenpass draws inspiration from the Linux kernel implementation of WireGuard, which suggests that the receiver keep track of the number of messages it is processing at a given time.
On receiving an incoming message, if the length of the queue of messages waiting to be processed exceeds the threshold `MAX_QUEUED_INCOMING_HANDSHAKES_THRESHOLD`, the client is considered under load and records this state together with the timestamp at which it was last under load. When receiving subsequent messages while still in the under-load state, the client checks whether more than `LAST_UNDER_LOAD_WINDOW` seconds have elapsed since it was last under load; if so, it returns to normal operation and processes the message normally. A sketch of this bookkeeping is given after the constants below.
Currently, the following constants are derived from the Linux kernel implementation of Wireguard:
```pseudorust
MAX_QUEUED_INCOMING_HANDSHAKES_THRESHOLD = 4096
LAST_UNDER_LOAD_WINDOW = 1 //seconds
```
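The following is a sketch of that bookkeeping using the constants above; the type and method names are illustrative and do not reflect the reference implementation's API.

```rust
use std::time::{Duration, Instant};

const MAX_QUEUED_INCOMING_HANDSHAKES_THRESHOLD: usize = 4096;
const LAST_UNDER_LOAD_WINDOW: Duration = Duration::from_secs(1);

struct LoadMonitor {
    under_load: bool,
    last_under_load: Option<Instant>,
}

impl LoadMonitor {
    /// Called for every incoming message with the current length of the queue
    /// of unprocessed messages; returns whether the receiver should treat
    /// itself as under load while handling this message.
    fn on_incoming(&mut self, queued_messages: usize) -> bool {
        if queued_messages > MAX_QUEUED_INCOMING_HANDSHAKES_THRESHOLD {
            self.under_load = true;
            self.last_under_load = Some(Instant::now());
        } else if self.under_load {
            // Drop out of the under-load state once the last overload lies
            // more than LAST_UNDER_LOAD_WINDOW in the past.
            let expired = self
                .last_under_load
                .map(|t| t.elapsed() > LAST_UNDER_LOAD_WINDOW)
                .unwrap_or(true);
            if expired {
                self.under_load = false;
            }
        }
        self.under_load
    }
}
```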
## Dealing with Packet Loss
The initiator deals with packet loss by storing the messages it sends to the responder and retransmitting them in randomized, exponentially increasing intervals until they get a response. Receiving RespHello terminates retransmission of InitHello. A Data or EmptyData message serves as acknowledgement of receiving InitConf and terminates its retransmission.
The responder does not need to do anything special to handle RespHello retransmission: if the RespHello package is lost, the initiator retransmits InitHello and the responder can generate another RespHello package from that. InitConf retransmission needs to be handled specifically in the responder code because accepting an InitConf retransmission would reset the live session including the nonce counter, which would cause nonce reuse. Implementations must detect the case that `biscuit_no = biscuit_used` in ICR5, skip execution of ICR6 and ICR7, and just transmit another EmptyData package to confirm that the initiator can stop transmitting InitConf.
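The exact retransmission schedule is implementation-defined; as a sketch, one way to realize randomized, exponentially increasing intervals is shown below. The base interval, cap, jitter, and the use of the `rand` crate are illustrative assumptions, not values mandated by this document.

```rust
use rand::Rng;
use std::time::Duration;

/// Delay before the `attempt`-th retransmission: exponential growth with a
/// cap, plus uniform jitter so peers do not retransmit in lock-step.
fn retransmission_delay(attempt: u32, rng: &mut impl Rng) -> Duration {
    let base = Duration::from_millis(500);
    let cap = Duration::from_secs(10);
    let backoff = base.saturating_mul(2u32.saturating_pow(attempt)).min(cap);
    let jitter = rng.gen_range(0.0..0.5); // up to +50 %
    backoff.mul_f64(1.0 + jitter)
}
```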
### Interaction with cookie reply system
The cookie reply system does not interfere with the retransmission logic discussed above.
When the initiator is under load, it does not process any incoming messages.
When a responder is under load and receives an InitHello handshake message, the InitHello message is discarded and a cookie reply message is sent. On receipt of the cookie reply message, the initiator stores the decrypted `cookie_value` and uses it to set the `cookie` field of subsequently sent messages. As per the retransmission mechanism above, the initiator then sends a retransmitted InitHello message with a valid `cookie` value appended. On receiving the retransmitted handshake message, the responder validates the `cookie` value and resumes the handshake process.
When the responder is under load and receives an InitConf message, the message is processed directly without checking the validity of the cookie field.
# Changelog
- Added section "Denial of Service Mitigation and Cookies" and modified "Dealing with Packet Loss" for the DoS cookie mechanism
\printbibliography
\setupimage{landscape,fullpage,label=img:HandlingCode}


@@ -25,11 +25,11 @@ Follow [quick start instructions](https://rosenpass.eu/#start) to get a VPN up a
## Software architecture
The [rosenpass tool](./src/) is written in Rust and uses liboqs[^liboqs] and libsodium[^libsodium]. The tool establishes a symmetric key and provides it to WireGuard. Since it supplies WireGuard with a key through the PSK feature, using Rosenpass+WireGuard is cryptographically no less secure than using WireGuard on its own ("hybrid security"). Rosenpass refreshes the symmetric key every two minutes.
The [rosenpass tool](./src/) is written in Rust and uses liboqs[^liboqs]. The tool establishes a symmetric key and provides it to WireGuard. Since it supplies WireGuard with a key through the PSK feature, using Rosenpass+WireGuard is cryptographically no less secure than using WireGuard on its own ("hybrid security"). Rosenpass refreshes the symmetric key every two minutes.
As with any application, a small risk of critical security issues (such as buffer overflows or remote code execution) exists; the Rosenpass application is written in the Rust programming language, which is much less prone to such issues. Rosenpass can also write keys to files instead of supplying them to WireGuard. With a bit of scripting, the stand-alone mode of the implementation can be used to run the application in a container, VM, or on another host. This mode can also be used to integrate tools other than WireGuard with Rosenpass.
The [`rp`](./rp) tool written in bash makes it easy to create a VPN using WireGuard and Rosenpass.
The [`rp`](./rp) tool written in Rust makes it easy to create a VPN using WireGuard and Rosenpass.
`rp` is easy to get started with but has a few drawbacks; it runs as root, demanding access to both WireGuard
and Rosenpass private keys, takes control of the interface and works with exactly one interface. If you do not feel confident about running Rosenpass as root, you should use the stand-alone mode to create a more secure setup using containers, jails, or virtual machines.
@@ -59,7 +59,6 @@ The code uses a variety of optimizations to speed up analysis such as using secr
A wrapper script provides instant feedback, in color, about which queries execute as expected: a red cross if a query fails and a green check if it succeeds.
[^liboqs]: https://openquantumsafe.org/liboqs/
[^libsodium]: https://doc.libsodium.org/
[^wg]: https://www.wireguard.com/
[^pqwg]: https://eprint.iacr.org/2020/379
[^pqwg-statedis]: Unless supplied with a pre-shared-key, but this defeats the purpose of a key exchange protocol


@@ -9,6 +9,10 @@ homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[[bin]]
name = "rosenpass"
path = "src/main.rs"
[[bench]]
name = "handshake"
harness = false
@@ -34,6 +38,8 @@ mio = { workspace = true }
rand = { workspace = true }
zerocopy = { workspace = true }
home = { workspace = true }
derive_builder = {workspace = true}
rosenpass-wireguard-broker = {workspace = true}
[build-dependencies]
anyhow = { workspace = true }
@@ -42,3 +48,9 @@ anyhow = { workspace = true }
criterion = { workspace = true }
test_bin = { workspace = true }
stacker = { workspace = true }
serial_test = {workspace = true}
procspawn = {workspace = true}
[features]
enable_broker_api = ["rosenpass-wireguard-broker/enable_broker_api"]
enable_memfd_alloc = []


@@ -5,6 +5,7 @@ use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use rosenpass_secret_memory::secret_policy_try_use_memfd_secrets;
fn handle(
tx: &mut CryptoServer,
@@ -56,6 +57,7 @@ fn make_server_pair() -> Result<(CryptoServer, CryptoServer)> {
}
fn criterion_benchmark(c: &mut Criterion) {
secret_policy_try_use_memfd_secrets();
let (mut a, mut b) = make_server_pair().unwrap();
c.bench_function("cca_secret_alloc", |bench| {
bench.iter(|| {


@@ -1,14 +1,21 @@
use anyhow::bail;
use anyhow::Result;
use log::{debug, error, info, warn};
use derive_builder::Builder;
use log::{error, info, warn};
use mio::Interest;
use mio::Token;
use rosenpass_util::file::fopen_w;
use rosenpass_secret_memory::Public;
use rosenpass_secret_memory::Secret;
use rosenpass_util::file::StoreValueB64;
use rosenpass_wireguard_broker::WireguardBrokerMio;
use rosenpass_wireguard_broker::{WireguardBrokerCfg, WG_KEY_LEN};
use zerocopy::AsBytes;
use std::cell::Cell;
use std::io::Write;
use std::collections::HashMap;
use std::fmt::Debug;
use std::io::ErrorKind;
use std::net::Ipv4Addr;
use std::net::Ipv6Addr;
@@ -17,22 +24,29 @@ use std::net::SocketAddrV4;
use std::net::SocketAddrV6;
use std::net::ToSocketAddrs;
use std::path::PathBuf;
use std::process::Command;
use std::process::Stdio;
use std::slice;
use std::thread;
use std::time::Duration;
use std::time::Instant;
use crate::protocol::HostIdentification;
use crate::{
config::Verbosity,
protocol::{CryptoServer, MsgBuf, PeerPtr, SPk, SSk, SymKey, Timing},
};
use rosenpass_util::attempt;
use rosenpass_util::b64::{b64_writer, fmt_b64};
use rosenpass_util::b64::B64Display;
const MAX_B64_KEY_SIZE: usize = 32 * 5 / 3;
const MAX_B64_PEER_ID_SIZE: usize = 32 * 5 / 3;
const IPV4_ANY_ADDR: Ipv4Addr = Ipv4Addr::new(0, 0, 0, 0);
const IPV6_ANY_ADDR: Ipv6Addr = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0, 0);
const UNDER_LOAD_RATIO: f64 = 0.5;
const DURATION_UPDATE_UNDER_LOAD_STATUS: Duration = Duration::from_millis(500);
const BROKER_ID_BYTES: usize = 8;
fn ipv4_any_binding() -> SocketAddr {
// addr, port
SocketAddr::V4(SocketAddrV4::new(IPV4_ANY_ADDR, 0))
@@ -43,10 +57,50 @@ fn ipv6_any_binding() -> SocketAddr {
SocketAddr::V6(SocketAddrV6::new(IPV6_ANY_ADDR, 0, 0, 0))
}
#[derive(Debug, Default)]
pub struct MioTokenDispenser {
counter: usize,
}
impl MioTokenDispenser {
fn dispense(&mut self) -> Token {
let r = self.counter;
self.counter += 1;
Token(r)
}
}
#[derive(Debug, Default)]
pub struct BrokerStore {
store: HashMap<
Public<BROKER_ID_BYTES>,
Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
>,
}
#[derive(Debug, Clone)]
pub struct BrokerStorePtr(pub Public<BROKER_ID_BYTES>);
#[derive(Debug)]
pub struct BrokerPeer {
ptr: BrokerStorePtr,
peer_cfg: Box<dyn WireguardBrokerCfg>,
}
impl BrokerPeer {
pub fn new(ptr: BrokerStorePtr, peer_cfg: Box<dyn WireguardBrokerCfg>) -> Self {
Self { ptr, peer_cfg }
}
pub fn ptr(&self) -> &BrokerStorePtr {
&self.ptr
}
}
#[derive(Default, Debug)]
pub struct AppPeer {
pub outfile: Option<PathBuf>,
pub outwg: Option<WireguardOut>, // TODO make this a generic command
pub broker_peer: Option<BrokerPeer>,
pub initial_endpoint: Option<Endpoint>,
pub current_endpoint: Option<Endpoint>,
}
@@ -67,6 +121,23 @@ pub struct WireguardOut {
pub extra_params: Vec<String>,
}
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum DoSOperation {
UnderLoad,
Normal,
}
/// Integration test helpers for AppServer
#[derive(Debug, Builder)]
#[builder(pattern = "owned")]
pub struct AppServerTest {
/// Enable DoS operation permanently
#[builder(default = "false")]
pub enable_dos_permanently: bool,
/// Terminate application signal
#[builder(default = "None")]
pub termination_handler: Option<std::sync::mpsc::Receiver<()>>,
}
/// Holds the state of the application, namely the external IO
///
/// Responsible for file IO, network IO
@@ -77,9 +148,17 @@ pub struct AppServer {
pub sockets: Vec<mio::net::UdpSocket>,
pub events: mio::Events,
pub mio_poll: mio::Poll,
pub mio_token_dispenser: MioTokenDispenser,
pub brokers: BrokerStore,
pub peers: Vec<AppPeer>,
pub verbosity: Verbosity,
pub all_sockets_drained: bool,
pub under_load: DoSOperation,
pub blocking_polls_count: usize,
pub non_blocking_polls_count: usize,
pub unpolled_count: usize,
pub last_update_time: Instant,
pub test_helpers: Option<AppServerTest>,
}
/// A socket pointer is an index assigned to a socket;
@@ -128,6 +207,17 @@ impl AppPeerPtr {
pub fn get_app_mut<'a>(&self, srv: &'a mut AppServer) -> &'a mut AppPeer {
&mut srv.peers[self.0]
}
pub fn set_psk(&self, server: &mut AppServer, psk: &Secret<WG_KEY_LEN>) -> anyhow::Result<()> {
if let Some(broker) = server.peers[self.0].broker_peer.as_ref() {
let config = broker.peer_cfg.create_config(psk);
let broker = server.brokers.store.get_mut(&broker.ptr().0).unwrap();
broker.set_psk(config)?;
} else if server.peers[self.0].outfile.is_none() {
log::warn!("No broker peer found for peer {}", self.0);
}
Ok(())
}
}
#[derive(Debug)]
@@ -162,13 +252,7 @@ pub enum Endpoint {
/// at the same time. It also would reply on the same port RespHello was
/// sent to when listening on multiple ports on the same interface. This
/// may be required for some arcane firewall setups.
SocketBoundAddress {
/// The socket the address can be reached under; this is generally
/// determined when we actually receive an RespHello message
socket: SocketPtr,
/// Just the address
addr: SocketAddr,
},
SocketBoundAddress(SocketBoundEndpoint),
// A host name or IP address; storing the hostname here instead of an
// ip address makes sure that we look up the host name whenever we try
// to make a connection; this may be beneficial in some setups where a host-name
@@ -176,6 +260,85 @@ pub enum Endpoint {
Discovery(HostPathDiscoveryEndpoint),
}
impl std::fmt::Display for Endpoint {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Endpoint::SocketBoundAddress(host) => write!(f, "{}", host),
Endpoint::Discovery(host) => write!(f, "{}", host),
}
}
}
#[derive(Debug)]
pub struct SocketBoundEndpoint {
/// The socket the address can be reached under; this is generally
/// determined when we actually receive an RespHello message
socket: SocketPtr,
/// Just the address
addr: SocketAddr,
/// identifier
bytes: (usize, [u8; SocketBoundEndpoint::BUFFER_SIZE]),
}
impl std::fmt::Display for SocketBoundEndpoint {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.addr)
}
}
impl SocketBoundEndpoint {
const SOCKET_SIZE: usize = usize::BITS as usize / 8;
const IPV6_SIZE: usize = 16;
const PORT_SIZE: usize = 2;
const SCOPE_ID_SIZE: usize = 4;
const BUFFER_SIZE: usize = SocketBoundEndpoint::SOCKET_SIZE
+ SocketBoundEndpoint::IPV6_SIZE
+ SocketBoundEndpoint::PORT_SIZE
+ SocketBoundEndpoint::SCOPE_ID_SIZE;
pub fn new(socket: SocketPtr, addr: SocketAddr) -> Self {
let bytes = Self::to_bytes(&socket, &addr);
Self {
socket,
addr,
bytes,
}
}
fn to_bytes(
socket: &SocketPtr,
addr: &SocketAddr,
) -> (usize, [u8; SocketBoundEndpoint::BUFFER_SIZE]) {
let mut buf = [0u8; SocketBoundEndpoint::BUFFER_SIZE];
let addr = match addr {
SocketAddr::V4(addr) => {
// Map IPv4 addresses to IPv4-mapped IPv6 addresses
let ip = addr.ip().to_ipv6_mapped();
SocketAddrV6::new(ip, addr.port(), 0, 0)
}
SocketAddr::V6(addr) => *addr,
};
let mut len: usize = 0;
buf[len..len + SocketBoundEndpoint::SOCKET_SIZE].copy_from_slice(&socket.0.to_be_bytes());
len += SocketBoundEndpoint::SOCKET_SIZE;
buf[len..len + SocketBoundEndpoint::IPV6_SIZE].copy_from_slice(&addr.ip().octets());
len += SocketBoundEndpoint::IPV6_SIZE;
buf[len..len + SocketBoundEndpoint::PORT_SIZE].copy_from_slice(&addr.port().to_be_bytes());
len += SocketBoundEndpoint::PORT_SIZE;
buf[len..len + SocketBoundEndpoint::SCOPE_ID_SIZE]
.copy_from_slice(&addr.scope_id().to_be_bytes());
len += SocketBoundEndpoint::SCOPE_ID_SIZE;
(len, buf)
}
}
impl HostIdentification for SocketBoundEndpoint {
fn encode(&self) -> &[u8] {
&self.bytes.1[0..self.bytes.0]
}
}
impl Endpoint {
/// Start discovery from some addresses
pub fn discovery_from_addresses(addresses: Vec<SocketAddr>) -> Self {
@@ -216,7 +379,7 @@ impl Endpoint {
pub fn send(&self, srv: &AppServer, buf: &[u8]) -> anyhow::Result<()> {
use Endpoint::*;
match self {
SocketBoundAddress { socket, addr } => socket.send_to(srv, buf, *addr),
SocketBoundAddress(host) => host.socket.send_to(srv, buf, host.addr),
Discovery(host) => host.send_scouting(srv, buf),
}
}
@@ -224,7 +387,7 @@ impl Endpoint {
fn addresses(&self) -> &[SocketAddr] {
use Endpoint::*;
match self {
SocketBoundAddress { addr, .. } => slice::from_ref(addr),
SocketBoundAddress(host) => slice::from_ref(&host.addr),
Discovery(host) => host.addresses(),
}
}
@@ -262,6 +425,12 @@ pub struct HostPathDiscoveryEndpoint {
addresses: Vec<SocketAddr>,
}
impl std::fmt::Display for HostPathDiscoveryEndpoint {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:?}", self.addresses)
}
}
impl HostPathDiscoveryEndpoint {
pub fn from_addresses(addresses: Vec<SocketAddr>) -> Self {
let scouting_state = Cell::new((0, 0));
@@ -327,7 +496,7 @@ impl HostPathDiscoveryEndpoint {
.to_string()
.starts_with("Address family not supported by protocol");
if !ignore {
warn!("Socket #{} refusing to send to {}: ", sock_no, addr);
warn!("Socket #{} refusing to send to {}: {}", sock_no, addr, err);
}
}
}
@@ -342,10 +511,12 @@ impl AppServer {
pk: SPk,
addrs: Vec<SocketAddr>,
verbosity: Verbosity,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<Self> {
// setup mio
let mio_poll = mio::Poll::new()?;
let events = mio::Events::with_capacity(8);
let events = mio::Events::with_capacity(20);
let mut mio_token_dispenser = MioTokenDispenser::default();
// bind each SocketAddr to a socket
let maybe_sockets: Result<Vec<_>, _> =
@@ -419,10 +590,12 @@ impl AppServer {
}
// register all sockets to mio
for (i, socket) in sockets.iter_mut().enumerate() {
mio_poll
.registry()
.register(socket, Token(i), Interest::READABLE)?;
for socket in sockets.iter_mut() {
mio_poll.registry().register(
socket,
mio_token_dispenser.dispense(),
Interest::READABLE,
)?;
}
// TODO use mio::net::UnixStream together with std::os::unix::net::UnixStream for Linux
@@ -434,7 +607,15 @@ impl AppServer {
sockets,
events,
mio_poll,
mio_token_dispenser,
brokers: BrokerStore::default(),
all_sockets_drained: false,
under_load: DoSOperation::Normal,
blocking_polls_count: 0,
non_blocking_polls_count: 0,
unpolled_count: 0,
last_update_time: Instant::now(),
test_helpers,
})
}
@@ -442,12 +623,50 @@ impl AppServer {
matches!(self.verbosity, Verbosity::Verbose)
}
pub fn register_broker(
&mut self,
broker: Box<dyn WireguardBrokerMio<Error = anyhow::Error, MioError = anyhow::Error>>,
) -> Result<BrokerStorePtr> {
let ptr = Public::from_slice((self.brokers.store.len() as u64).as_bytes());
if self.brokers.store.insert(ptr, broker).is_some() {
bail!("Broker already registered");
}
//Register broker
self.brokers
.store
.get_mut(&ptr)
.ok_or(anyhow::format_err!("Broker wasn't added to registry"))?
.register(
self.mio_poll.registry(),
self.mio_token_dispenser.dispense(),
)?;
Ok(BrokerStorePtr(ptr))
}
pub fn unregister_broker(&mut self, ptr: BrokerStorePtr) -> Result<()> {
//Unregister broker
self.brokers
.store
.get_mut(&ptr.0)
.ok_or_else(|| anyhow::anyhow!("Broker not found"))?
.unregister(self.mio_poll.registry())?;
//Remove broker from store
self.brokers
.store
.remove(&ptr.0)
.ok_or_else(|| anyhow::anyhow!("Broker not found"))?;
Ok(())
}
pub fn add_peer(
&mut self,
psk: Option<SymKey>,
pk: SPk,
outfile: Option<PathBuf>,
outwg: Option<WireguardOut>,
broker_peer: Option<BrokerPeer>,
hostname: Option<String>,
) -> anyhow::Result<AppPeerPtr> {
let PeerPtr(pn) = self.crypt.add_peer(psk, pk)?;
@@ -458,7 +677,7 @@ impl AppServer {
let current_endpoint = None;
self.peers.push(AppPeer {
outfile,
outwg,
broker_peer,
initial_endpoint,
current_endpoint,
});
@@ -525,6 +744,17 @@ impl AppServer {
use crate::protocol::HandleMsgResult;
use AppPollResult::*;
use KeyOutputReason::*;
if let Some(AppServerTest {
termination_handler: Some(terminate),
..
}) = &self.test_helpers
{
if terminate.try_recv().is_ok() {
return Ok(());
}
}
match self.poll(&mut *rx)? {
#[allow(clippy::redundant_closure_call)]
SendInitiation(peer) => tx_maybe_with!(peer, || self
@@ -549,11 +779,17 @@ impl AppServer {
}
ReceivedMessage(len, endpoint) => {
match self.crypt.handle_msg(&rx[..len], &mut *tx) {
let msg_result = match self.under_load {
DoSOperation::UnderLoad => {
self.handle_msg_under_load(&endpoint, &rx[..len], &mut *tx)
}
DoSOperation::Normal => self.crypt.handle_msg(&rx[..len], &mut *tx),
};
match msg_result {
Err(ref e) => {
self.verbose().then(|| {
info!(
"error processing incoming message from {:?}: {:?} {}",
"error processing incoming message from {}: {:?} {}",
endpoint,
e,
e.backtrace()
@@ -584,23 +820,40 @@ impl AppServer {
}
}
fn handle_msg_under_load(
&mut self,
endpoint: &Endpoint,
rx: &[u8],
tx: &mut [u8],
) -> Result<crate::protocol::HandleMsgResult> {
match endpoint {
Endpoint::SocketBoundAddress(socket) => {
self.crypt.handle_msg_under_load(rx, &mut *tx, socket)
}
Endpoint::Discovery(_) => {
anyhow::bail!("Host-path discovery is not supported under load")
}
}
}
pub fn output_key(
&self,
&mut self,
peer: AppPeerPtr,
why: KeyOutputReason,
key: &SymKey,
) -> anyhow::Result<()> {
let peerid = peer.lower().get(&self.crypt).pidt()?;
let ap = peer.get_app(self);
if self.verbose() {
let msg = match why {
KeyOutputReason::Exchanged => "Exchanged key with peer",
KeyOutputReason::Stale => "Erasing outdated key from peer",
};
info!("{} {}", msg, fmt_b64(&*peerid));
info!("{} {}", msg, peerid.fmt_b64::<MAX_B64_PEER_ID_SIZE>());
}
let ap = peer.get_app(self);
if let Some(of) = ap.outfile.as_ref() {
// This might leave some fragments of the secret on the stack;
// in practice this is likely not a problem because the stack likely
@@ -609,7 +862,7 @@ impl AppServer {
// data will linger in the linux page cache anyways with the current
// implementation, going to great length to erase the secret here is
// not worth it right now.
b64_writer(fopen_w(of)?).write_all(key.secret())?;
key.store_b64::<MAX_B64_KEY_SIZE, _>(of)?;
let why = match why {
KeyOutputReason::Exchanged => "exchanged",
KeyOutputReason::Stale => "stale",
@@ -619,37 +872,11 @@ impl AppServer {
// it is meant to allow external detection of a successful key-exchange
println!(
"output-key peer {} key-file {of:?} {why}",
fmt_b64(&*peerid)
peerid.fmt_b64::<MAX_B64_PEER_ID_SIZE>()
);
}
if let Some(owg) = ap.outwg.as_ref() {
let mut child = Command::new("wg")
.arg("set")
.arg(&owg.dev)
.arg("peer")
.arg(&owg.pk)
.arg("preshared-key")
.arg("/dev/stdin")
.stdin(Stdio::piped())
.args(&owg.extra_params)
.spawn()?;
b64_writer(child.stdin.take().unwrap()).write_all(key.secret())?;
thread::spawn(move || {
let status = child.wait();
if let Ok(status) = status {
if status.success() {
debug!("successfully passed psk to wg")
} else {
error!("could not pass psk to wg {:?}", status)
}
} else {
error!("wait failed: {:?}", status)
}
});
}
peer.set_psk(self, key)?;
Ok(())
}
@@ -706,9 +933,56 @@ impl AppServer {
// only poll if we drained all sockets before
if self.all_sockets_drained {
self.mio_poll.poll(&mut self.events, Some(timeout))?;
// Non-blocking poll to check for pending events
self.mio_poll
.poll(&mut self.events, Some(Duration::from_secs(0)))?;
if self.events.iter().peekable().peek().is_none() {
// if there are no events, then add to blocking poll count
self.blocking_polls_count += 1;
//Execute blocking poll
self.mio_poll.poll(&mut self.events, Some(timeout))?;
} else {
self.non_blocking_polls_count += 1;
}
} else {
self.unpolled_count += 1;
}
if let Some(AppServerTest {
enable_dos_permanently: true,
..
}) = self.test_helpers
{
self.under_load = DoSOperation::UnderLoad;
} else {
// Re-evaluate the under-load status and reset the poll counters once more than DURATION_UPDATE_UNDER_LOAD_STATUS has elapsed
if self.last_update_time.elapsed() > DURATION_UPDATE_UNDER_LOAD_STATUS {
self.last_update_time = Instant::now();
let total_polls = self.blocking_polls_count + self.non_blocking_polls_count;
let load_ratio = if total_polls > 0 {
self.non_blocking_polls_count as f64 / total_polls as f64
} else if self.unpolled_count > 0 {
//There are no polls, so we are under load
1.0
} else {
0.0
};
if load_ratio > UNDER_LOAD_RATIO {
self.under_load = DoSOperation::UnderLoad;
} else {
self.under_load = DoSOperation::Normal;
}
self.blocking_polls_count = 0;
self.non_blocking_polls_count = 0;
self.unpolled_count = 0;
}
}
// drain all sockets
let mut would_block_count = 0;
for (sock_no, socket) in self.sockets.iter_mut().enumerate() {
match socket.recv_from(buf) {
@@ -717,10 +991,10 @@ impl AppServer {
self.all_sockets_drained = false;
return Ok(Some((
n,
Endpoint::SocketBoundAddress {
socket: SocketPtr(sock_no),
Endpoint::SocketBoundAddress(SocketBoundEndpoint::new(
SocketPtr(sock_no),
addr,
},
)),
)));
}
Err(e) if e.kind() == ErrorKind::WouldBlock => {
@@ -734,6 +1008,11 @@ impl AppServer {
// if each socket returned WouldBlock, then we drained them all at least once indeed
self.all_sockets_drained = would_block_count == self.sockets.len();
// Process brokers poll
for (_, broker) in self.brokers.store.iter_mut() {
broker.process_poll()?;
}
Ok(None)
}
}


@@ -3,11 +3,17 @@ use clap::{Parser, Subcommand};
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use rosenpass_secret_memory::file::StoreSecret;
use rosenpass_secret_memory::{
secret_policy_try_use_memfd_secrets, secret_policy_use_only_malloc_secrets,
};
use rosenpass_util::file::{LoadValue, LoadValueB64};
use rosenpass_wireguard_broker::brokers::native_unix::{
NativeUnixBroker, NativeUnixBrokerConfigBaseBuilder, NativeUnixBrokerConfigBaseBuilderError,
};
use std::path::PathBuf;
use crate::app_server;
use crate::app_server::AppServer;
use crate::app_server::AppServerTest;
use crate::app_server::{AppServer, BrokerPeer};
use crate::protocol::{SPk, SSk, SymKey};
use super::config;
@@ -150,7 +156,14 @@ impl CliCommand {
///
/// ## TODO
/// - This method consumes the [`CliCommand`] value. It might be wise to use a reference...
pub fn run(self) -> anyhow::Result<()> {
pub fn run(self, test_helpers: Option<AppServerTest>) -> anyhow::Result<()> {
//Specify secret policy
#[cfg(feature = "enable_memfd_alloc")]
secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "enable_memfd_alloc"))]
secret_policy_use_only_malloc_secrets();
use CliCommand::*;
match self {
Man => {
@@ -257,7 +270,7 @@ impl CliCommand {
let config = config::Rosenpass::load(config_file)?;
config.validate()?;
Self::event_loop(config)?;
Self::event_loop(config, test_helpers)?;
}
Exchange {
@@ -274,7 +287,7 @@ impl CliCommand {
config.config_file_path = p;
}
config.validate()?;
Self::event_loop(config)?;
Self::event_loop(config, test_helpers)?;
}
Validate { config_files } => {
@@ -296,7 +309,11 @@ impl CliCommand {
Ok(())
}
fn event_loop(config: config::Rosenpass) -> anyhow::Result<()> {
fn event_loop(
config: config::Rosenpass,
test_helpers: Option<AppServerTest>,
) -> anyhow::Result<()> {
const MAX_PSK_SIZE: usize = 1000;
// load own keys
let sk = SSk::load(&config.secret_key)?;
let pk = SPk::load(&config.public_key)?;
@@ -307,19 +324,40 @@ impl CliCommand {
pk,
config.listen,
config.verbosity,
test_helpers,
)?);
let broker_store_ptr = srv.register_broker(Box::new(NativeUnixBroker::new()))?;
fn cfg_err_map(e: NativeUnixBrokerConfigBaseBuilderError) -> anyhow::Error {
anyhow::Error::msg(format!("NativeUnixBrokerConfigBaseBuilderError: {:?}", e))
}
for cfg_peer in config.peers {
let broker_peer = if let Some(wg) = &cfg_peer.wg {
let peer_cfg = NativeUnixBrokerConfigBaseBuilder::default()
.peer_id_b64(&wg.peer)?
.interface(wg.device.clone())
.extra_params_ser(&wg.extra_params)?
.build()
.map_err(cfg_err_map)?;
let broker_peer = BrokerPeer::new(broker_store_ptr.clone(), Box::new(peer_cfg));
Some(broker_peer)
} else {
None
};
srv.add_peer(
// psk, pk, outfile, outwg, tx_addr
cfg_peer.pre_shared_key.map(SymKey::load_b64).transpose()?,
cfg_peer
.pre_shared_key
.map(SymKey::load_b64::<MAX_PSK_SIZE, _>)
.transpose()?,
SPk::load(&cfg_peer.public_key)?,
cfg_peer.key_out,
cfg_peer.wg.map(|cfg| app_server::WireguardOut {
dev: cfg.device,
pk: cfg.peer,
extra_params: cfg.extra_params,
}),
broker_peer,
cfg_peer.endpoint.clone(),
)?;
}
@@ -334,5 +372,5 @@ fn generate_and_save_keypair(secret_key: PathBuf, public_key: PathBuf) -> anyhow
let mut spk = crate::protocol::SPk::random();
StaticKem::keygen(ssk.secret_mut(), spk.secret_mut())?;
ssk.store_secret(secret_key)?;
spk.store_secret(public_key)
spk.store(public_key)
}


@@ -16,7 +16,7 @@ use std::{
};
use anyhow::{bail, ensure};
use rosenpass_util::file::fopen_w;
use rosenpass_util::file::{fopen_w, Visibility};
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize)]
@@ -135,7 +135,7 @@ impl Rosenpass {
}
// add path to "self"
config.config_file_path = p.as_ref().to_owned();
p.as_ref().clone_into(&mut config.config_file_path);
// return
Ok(config)
@@ -151,7 +151,7 @@ impl Rosenpass {
/// Commit the configuration to where it came from, overwriting the original file
pub fn commit(&self) -> anyhow::Result<()> {
let mut f = fopen_w(&self.config_file_path)?;
let mut f = fopen_w(&self.config_file_path, Visibility::Public)?;
f.write_all(toml::to_string_pretty(&self)?.as_bytes())?;
self.store(&self.config_file_path)
@@ -448,9 +448,8 @@ impl Default for Verbosity {
#[cfg(test)]
mod test {
use std::net::IpAddr;
use super::*;
use std::net::IpAddr;
fn split_str(s: &str) -> Vec<String> {
s.split(' ').map(|s| s.to_string()).collect()


@@ -31,6 +31,8 @@ pub fn protocol() -> Result<HashDomain> {
hash_domain_ns!(protocol, mac, "mac");
hash_domain_ns!(protocol, cookie, "cookie");
hash_domain_ns!(protocol, cookie_value, "cookie-value");
hash_domain_ns!(protocol, cookie_key, "cookie-key");
hash_domain_ns!(protocol, peerid, "peer id");
hash_domain_ns!(protocol, biscuit_ad, "biscuit additional data");
hash_domain_ns!(protocol, ckinit, "chaining key init");


@@ -26,7 +26,7 @@ pub fn main() {
// error!("error dummy");
}
match args.command.run() {
match args.command.run(None) {
Ok(_) => {}
Err(e) => {
error!("{e}");


@@ -7,13 +7,19 @@
//! to the concept of lenses in functional programming; more on that here:
//! [https://sinusoid.es/misc/lager/lenses.pdf](https://sinusoid.es/misc/lager/lenses.pdf)
//! To achieve this we utilize the zerocopy library.
//!
use std::mem::size_of;
use zerocopy::{AsBytes, FromBytes, FromZeroes};
use super::RosenpassError;
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::{EphemeralKem, StaticKem};
use rosenpass_ciphers::{aead, xaead, KEY_LEN};
use std::mem::size_of;
use zerocopy::{AsBytes, FromBytes, FromZeroes};
pub const MSG_SIZE_LEN: usize = 1;
pub const RESERVED_LEN: usize = 3;
pub const MAC_SIZE: usize = 16;
pub const COOKIE_SIZE: usize = 16;
pub const SID_LEN: usize = 4;
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
@@ -104,10 +110,24 @@ pub struct DataMsg {
pub dummy: [u8; 4],
}
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
pub struct CookieReplyInner {
/// [MsgType] of this message
pub msg_type: u8,
/// Reserved for future use
pub reserved: [u8; 3],
/// Session ID of the sender (initiator)
pub sid: [u8; 4],
/// Encrypted cookie with authenticated initiator `mac`
pub cookie_encrypted: [u8; xaead::NONCE_LEN + COOKIE_SIZE + xaead::TAG_LEN],
}
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
pub struct CookieReply {
pub dummy: [u8; 4],
pub inner: CookieReplyInner,
pub padding: [u8; size_of::<Envelope<InitHello>>() - size_of::<CookieReplyInner>()],
}
// Traits /////////////////////////////////////////////////////////////////////
@@ -156,6 +176,12 @@ impl TryFrom<u8> for MsgType {
}
}
impl From<MsgType> for u8 {
fn from(val: MsgType) -> Self {
val as u8
}
}
/// length in bytes of an unencrypted Biscuit (plain text)
pub const BISCUIT_PT_LEN: usize = size_of::<Biscuit>();
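The lens-style message views above can be exercised without copying: zerocopy's `FromBytes` derive provides `ref_from`, which reinterprets a correctly sized byte slice as a typed reference. A minimal sketch, assuming it lives inside this crate next to the message definitions and that zerocopy 0.7's `FromBytes::ref_from` API is in use:

use zerocopy::FromBytes;

// Reinterpret a received buffer as a CookieReplyInner without copying.
// ref_from returns None if the slice length does not match the struct size;
// alignment is never an issue because the struct is #[repr(packed)].
fn view_cookie_reply(buf: &[u8]) -> Option<&CookieReplyInner> {
    CookieReplyInner::ref_from(buf)
}

// With #[repr(packed)], the padding field pins the overall size:
// size_of::<CookieReply>() == size_of::<Envelope<InitHello>>() + 4 (the dummy bytes).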

File diff suppressed because it is too large.


@@ -1,4 +1,17 @@
use std::{fs, net::UdpSocket, path::PathBuf, process::Stdio, time::Duration};
use std::{
fs,
net::UdpSocket,
path::PathBuf,
sync::{Arc, Mutex},
time::Duration,
};
use clap::Parser;
use rosenpass::{app_server::AppServerTestBuilder, cli::CliArgs};
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_wireguard_broker::{WireguardBrokerMio, WG_KEY_LEN, WG_PEER_LEN};
use serial_test::serial;
use std::io::Write;
const BIN: &str = "rosenpass";
@@ -28,26 +41,29 @@ fn generate_keys() {
fs::remove_dir_all(&tmpdir).unwrap();
}
fn find_udp_socket() -> u16 {
for port in 1025..=u16::MAX {
if UdpSocket::bind(("127.0.0.1", port)).is_ok() {
return port;
}
}
panic!("no free UDP port found");
fn find_udp_socket() -> Option<u16> {
(1025..=u16::MAX).find(|&port| UdpSocket::bind(("::1", port)).is_ok())
}
// check that we can exchange keys
#[test]
fn check_exchange() {
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("exchange");
fs::create_dir_all(&tmpdir).unwrap();
fn setup_logging() {
let mut log_builder = env_logger::Builder::from_default_env(); // sets log level filter from environment (or defaults)
log_builder.filter_level(log::LevelFilter::Debug);
log_builder.format_timestamp_nanos();
log_builder.format(|buf, record| {
let ts_format = buf.timestamp_nanos().to_string();
writeln!(
buf,
"\x1b[1m{:?}\x1b[0m {}: {}",
std::thread::current().id(),
&ts_format[14..],
record.args()
)
});
let secret_key_paths = [tmpdir.join("secret-key-0"), tmpdir.join("secret-key-1")];
let public_key_paths = [tmpdir.join("public-key-0"), tmpdir.join("public-key-1")];
let shared_key_paths = [tmpdir.join("shared-key-0"), tmpdir.join("shared-key-1")];
let _ = log_builder.try_init();
}
// generate key pairs
fn generate_key_pairs(secret_key_paths: &[PathBuf], public_key_paths: &[PathBuf]) {
for (secret_key_path, pub_key_path) in secret_key_paths.iter().zip(public_key_paths.iter()) {
let output = test_bin::get_test_bin(BIN)
.args(["gen-keys", "--secret-key"])
@@ -61,11 +77,86 @@ fn check_exchange() {
assert!(secret_key_path.is_file());
assert!(pub_key_path.is_file());
}
}
fn run_server_client_exchange(
(server_cmd, server_test_builder): (&std::process::Command, AppServerTestBuilder),
(client_cmd, client_test_builder): (&std::process::Command, AppServerTestBuilder),
) {
let (server_terminate, server_terminate_rx) = std::sync::mpsc::channel();
let (client_terminate, client_terminate_rx) = std::sync::mpsc::channel();
let cli = CliArgs::try_parse_from(
[server_cmd.get_program()]
.into_iter()
.chain(server_cmd.get_args()),
)
.unwrap();
std::thread::spawn(move || {
cli.command
.run(Some(
server_test_builder
.termination_handler(Some(server_terminate_rx))
.build()
.unwrap(),
))
.unwrap();
});
let cli = CliArgs::try_parse_from(
[client_cmd.get_program()]
.into_iter()
.chain(client_cmd.get_args()),
)
.unwrap();
std::thread::spawn(move || {
cli.command
.run(Some(
client_test_builder
.termination_handler(Some(client_terminate_rx))
.build()
.unwrap(),
))
.unwrap();
});
// give them some time to do the key exchange under load
std::thread::sleep(Duration::from_secs(10));
// time's up, kill the children
server_terminate.send(()).unwrap();
client_terminate.send(()).unwrap();
}
// check that we can exchange keys
#[test]
#[serial]
fn check_exchange_under_normal() {
setup_logging();
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("exchange");
fs::create_dir_all(&tmpdir).unwrap();
let secret_key_paths = [tmpdir.join("secret-key-0"), tmpdir.join("secret-key-1")];
let public_key_paths = [tmpdir.join("public-key-0"), tmpdir.join("public-key-1")];
let shared_key_paths = [tmpdir.join("shared-key-0"), tmpdir.join("shared-key-1")];
// generate key pairs
generate_key_pairs(&secret_key_paths, &public_key_paths);
// start first process, the server
let port = find_udp_socket();
let listen_addr = format!("localhost:{port}");
let mut server = test_bin::get_test_bin(BIN)
let port = loop {
if let Some(port) = find_udp_socket() {
break port;
}
};
let listen_addr = format!("::1:{port}");
let mut server_cmd = std::process::Command::new(BIN);
server_cmd
.args(["exchange", "secret-key"])
.arg(&secret_key_paths[0])
.arg("public-key")
@@ -73,16 +164,12 @@ fn check_exchange() {
.args(["listen", &listen_addr, "verbose", "peer", "public-key"])
.arg(&public_key_paths[1])
.arg("outfile")
.arg(&shared_key_paths[0])
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()
.expect("Failed to start {BIN}");
.arg(&shared_key_paths[0]);
std::thread::sleep(Duration::from_millis(500));
let server_test_builder = AppServerTestBuilder::default();
// start second process, the client
let mut client = test_bin::get_test_bin(BIN)
let mut client_cmd = std::process::Command::new(BIN);
client_cmd
.args(["exchange", "secret-key"])
.arg(&secret_key_paths[1])
.arg("public-key")
@@ -91,18 +178,14 @@ fn check_exchange() {
.arg(&public_key_paths[0])
.args(["endpoint", &listen_addr])
.arg("outfile")
.arg(&shared_key_paths[1])
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()
.expect("Failed to start {BIN}");
.arg(&shared_key_paths[1]);
// give them some time to do the key exchange
std::thread::sleep(Duration::from_secs(2));
let client_test_builder = AppServerTestBuilder::default();
// time's up, kill the children
server.kill().unwrap();
client.kill().unwrap();
run_server_client_exchange(
(&server_cmd, server_test_builder),
(&client_cmd, client_test_builder),
);
// read the shared keys they created
let shared_keys: Vec<_> = shared_key_paths
@@ -117,3 +200,132 @@ fn check_exchange() {
// cleanup
fs::remove_dir_all(&tmpdir).unwrap();
}
// check that we can trigger a DoS condition and that keys can still be exchanged under DoS
// This test creates a responder (server) with the feature flag "integration_test_always_under_load" so that it is always under load during the test.
#[test]
#[serial]
fn check_exchange_under_dos() {
setup_logging();
//Generate binary with responder with feature integration_test
let tmpdir = PathBuf::from(env!("CARGO_TARGET_TMPDIR")).join("exchange-dos");
fs::create_dir_all(&tmpdir).unwrap();
let secret_key_paths = [tmpdir.join("secret-key-0"), tmpdir.join("secret-key-1")];
let public_key_paths = [tmpdir.join("public-key-0"), tmpdir.join("public-key-1")];
let shared_key_paths = [tmpdir.join("shared-key-0"), tmpdir.join("shared-key-1")];
// generate key pairs
generate_key_pairs(&secret_key_paths, &public_key_paths);
// start first process, the server
let port = loop {
if let Some(port) = find_udp_socket() {
break port;
}
};
let listen_addr = format!("::1:{port}");
let mut server_cmd = std::process::Command::new(BIN);
server_cmd
.args(["exchange", "secret-key"])
.arg(&secret_key_paths[0])
.arg("public-key")
.arg(&public_key_paths[0])
.args(["listen", &listen_addr, "verbose", "peer", "public-key"])
.arg(&public_key_paths[1])
.arg("outfile")
.arg(&shared_key_paths[0]);
let server_test_builder = AppServerTestBuilder::default().enable_dos_permanently(true);
let mut client_cmd = std::process::Command::new(BIN);
client_cmd
.args(["exchange", "secret-key"])
.arg(&secret_key_paths[1])
.arg("public-key")
.arg(&public_key_paths[1])
.args(["verbose", "peer", "public-key"])
.arg(&public_key_paths[0])
.args(["endpoint", &listen_addr])
.arg("outfile")
.arg(&shared_key_paths[1]);
let client_test_builder = AppServerTestBuilder::default();
run_server_client_exchange(
(&server_cmd, server_test_builder),
(&client_cmd, client_test_builder),
);
// read the shared keys they created
let shared_keys: Vec<_> = shared_key_paths
.iter()
.map(|p| fs::read_to_string(p).unwrap())
.collect();
// check that they created two equal keys
assert_eq!(shared_keys.len(), 2);
assert_eq!(shared_keys[0], shared_keys[1]);
// cleanup
fs::remove_dir_all(&tmpdir).unwrap();
}
#[allow(dead_code)]
#[derive(Debug, Default)]
struct MockBrokerInner {
psk: Option<Secret<WG_KEY_LEN>>,
peer_id: Option<Public<WG_PEER_LEN>>,
interface: Option<String>,
}
#[derive(Debug, Default)]
struct MockBroker {
inner: Arc<Mutex<MockBrokerInner>>,
}
impl WireguardBrokerMio for MockBroker {
type MioError = anyhow::Error;
fn register(
&mut self,
_registry: &mio::Registry,
_token: mio::Token,
) -> Result<(), Self::MioError> {
Ok(())
}
fn process_poll(&mut self) -> Result<(), Self::MioError> {
Ok(())
}
fn unregister(&mut self, _registry: &mio::Registry) -> Result<(), Self::MioError> {
Ok(())
}
}
impl rosenpass_wireguard_broker::WireGuardBroker for MockBroker {
type Error = anyhow::Error;
fn set_psk(
&mut self,
config: rosenpass_wireguard_broker::SerializedBrokerConfig<'_>,
) -> Result<(), Self::Error> {
loop {
let mut lock = self.inner.try_lock();
if let Ok(ref mut mutex) = lock {
**mutex = MockBrokerInner {
psk: Some(config.psk.clone()),
peer_id: Some(config.peer_id.clone()),
interface: Some(std::str::from_utf8(config.interface).unwrap().to_string()),
};
break Ok(());
}
}
}
}

rp (shell script, 391 lines, deleted)

@@ -1,391 +0,0 @@
#!/usr/bin/env bash
set -e
# String formatting subsystem
formatting_init() {
endl=$'\n'
}
enquote() {
while (( $# > 1 )); do
printf "%q " "${1}"; shift
done
if (( $# == 1 )); then
printf "%q" "${1}"; shift
fi
}
multiline() {
# shellcheck disable=SC1004
echo "${1} " | awk '
function pm(a, b, l) {
return length(a) > l \
&& length(b) > l \
&& substr(a, 1, l+1) == substr(b, 1, l+1) \
? pm(a, b, l+1) : l;
}
!started && $0 !~ /^[ \t]*$/ {
started=1
match($0, /^[ \t]*/)
prefix=substr($0, 1, RLENGTH)
}
started {
print(substr($0, 1 + pm($0, prefix)));
}
'
}
dbg() {
echo >&2 "$@"
}
detect_git_dir() {
# https://stackoverflow.com/questions/3618078/pipe-only-stderr-through-a-filter
(
git -C "${scriptdir}" rev-parse --show-toplevel 3>&1 1>&2 2>&3 3>&- \
| sed '
/not a git repository/d;
s/^/WARNING: /'
) 3>&1 1>&2 2>&3 3>&-
}
# Cleanup subsystem (sigterm)
cleanup_init() {
cleanup_actions=()
trap cleanup_apply exit
}
cleanup_apply() {
local f
for f in "${cleanup_actions[@]}"; do
eval "${f}"
done
}
cleanup() {
cleanup_actions+=("$(multiline "${1}")")
}
# Transactional execution subsystem
frag_init() {
explain=0
frag_transaction=()
frag "
#! /bin/bash
set -e"
}
frag_apply() {
local f
for f in "${frag_transaction[@]}"; do
if (( explain == 1 )); then
dbg "${f}"
fi
eval "${f}"
done
}
frag() {
frag_transaction+=("$(multiline "${1}")")
}
frag_append() {
local len; len="${#frag_transaction[@]}"
frag_transaction=("${frag_transaction[@]:0:len-1}" "${frag_transaction[len-1]}${1}")
}
frag_append_esc() {
frag_append " \\${endl}${1}"
}
# Usage documentation subsystem
usage_init() {
usagestack=("${script}")
}
usage_snap() {
echo "${#usagestack}"
}
usage_restore() {
local n; n="${1}"
dbg REST "${1}"
usagestack=("${usagestack[@]:0:n-2}")
}
usage() {
dbg "Usage: ${usagestack[*]}"
}
fatal() {
dbg "FATAL: $*"
usage
exit 1
}
genkey() {
usagestack+=("PRIVATE_KEYS_DIR")
local skdir
skdir="${1%/}"; shift || fatal "Required positional argument: PRIVATE_KEYS_DIR"
while (( $# > 0 )); do
local arg; arg="$1"; shift
case "${arg}" in
-h | -help | --help | help) usage; return 0 ;;
*) fatal "Unknown option ${arg}";;
esac
done
if test -e "${skdir}"; then
fatal "PRIVATE_KEYS_DIR \"${skdir}\" already exists"
fi
frag "
umask 077
mkdir -p $(enquote "${skdir}")
wg genkey > $(enquote "${skdir}"/wgsk)
$(enquote "${binary}") gen-keys \\
-s $(enquote "${skdir}"/pqsk) \\
-p $(enquote "${skdir}"/pqpk)"
}
pubkey() {
usagestack+=("PRIVATE_KEYS_DIR" "PUBLIC_KEYS_DIR")
local skdir pkdir
skdir="${1%/}"; shift || fatal "Required positional argument: PRIVATE_KEYS_DIR"
pkdir="${1%/}"; shift || fatal "Required positional argument: PUBLIC_KEYS_DIR"
while (( $# > 0 )); do
local arg; arg="$1"; shift
case "${arg}" in
-h | -help | --help | help) usage; exit 0;;
*) fatal "Unknown option ${arg}";;
esac
done
if test -e "${pkdir}"; then
fatal "PUBLIC_KEYS_DIR \"${pkdir}\" already exists"
fi
frag "
mkdir -p $(enquote "${pkdir}")
wg pubkey < $(enquote "${skdir}"/wgsk) > $(enquote "${pkdir}/wgpk")
cp $(enquote "${skdir}"/pqpk) $(enquote "${pkdir}/pqpk")"
}
exchange() {
usagestack+=("PRIVATE_KEYS_DIR" "[dev <device>]" "[listen <ip>:<port>]" "[peer PUBLIC_KEYS_DIR [endpoint <ip>:<port>] [persistent-keepalive <interval>] [allowed-ips <ip1>/<cidr1>[,<ip2>/<cidr2>]...]]...")
local skdir dev lport
dev="${project_name}0"
skdir="${1%/}"; shift || fatal "Required positional argument: PRIVATE_KEYS_DIR"
while (( $# > 0 )); do
local arg; arg="$1"; shift
case "${arg}" in
dev) dev="${1}"; shift || fatal "dev option requires parameter";;
peer) set -- "peer" "$@"; break;; # Parsed down below
listen)
local listen; listen="${1}";
lip="${listen%:*}";
lport="${listen/*:/}";
if [[ "$lip" = "$lport" ]]; then
lip="[::]"
fi
shift;;
-h | -help | --help | help) usage; return 0;;
*) fatal "Unknown option ${arg}";;
esac
done
if (( $# == 0 )); then
fatal "Needs at least one peer specified"
fi
# os dependent setup
case "$OSTYPE" in
linux-*) # could be linux-gnu or linux-musl
frag "
# Create the WireGuard interface
ip link add dev $(enquote "${dev}") type wireguard || true"
cleanup "
ip link del dev $(enquote "${dev}") || true"
frag "
ip link set dev $(enquote "${dev}") up"
;;
freebsd*)
frag "
# load the WireGuard kernel module
kldload -n if_wg || fatal 'Cannot load if_wg kernel module'"
frag "
# Create the WireGuard interface
ifconfig wg create name $(enquote "${dev}") || true"
cleanup "
ifconfig $(enquote "${dev}") destroy || true"
frag "
ifconfig $(enquote "${dev}") up"
;;
*)
fatal "Your system $OSTYPE is not yet supported. We are happy to receive patches to address this :)"
;;
esac
frag "
# Deploy the classic wireguard private key
wg set $(enquote "${dev}") private-key $(enquote "${skdir}/wgsk")"
if test -n "${lport}"; then
frag_append "listen-port $(enquote "$(( lport + 1 ))")"
fi
frag "
# Launch the post quantum wireguard exchange daemon
$(enquote "${binary}") exchange"
if (( verbose == 1 )); then
frag_append "verbose"
fi
frag_append_esc " secret-key $(enquote "${skdir}/pqsk")"
frag_append_esc " public-key $(enquote "${skdir}/pqpk")"
if test -n "${lport}"; then
frag_append_esc " listen $(enquote "${lip}:${lport}")"
fi
usagestack+=("peer" "PUBLIC_KEYS_DIR endpoint IP:PORT")
while (( $# > 0 )); do
shift; # Skip "peer" argument
local peerdir ip port keepalive allowedips
peerdir="${1%/}"; shift || fatal "Required peer argument: PUBLIC_KEYS_DIR"
while (( $# > 0 )); do
local arg; arg="$1"; shift
case "${arg}" in
peer) set -- "peer" "$@"; break;; # Next peer
endpoint) ip="${1%:*}"; port="${1##*:}"; shift;;
persistent-keepalive) keepalive="${1}"; shift;;
allowed-ips) allowedips="${1}"; shift;;
-h | -help | --help | help) usage; return 0;;
*) fatal "Unknown option ${arg}";;
esac
done
# Public key
frag_append_esc " peer public-key $(enquote "${peerdir}/pqpk")"
# PSK
local pskfile; pskfile="${peerdir}/psk"
if test -f "${pskfile}"; then
frag_append_esc " preshared-key $(enquote "${pskfile}")"
fi
if test -n "${ip}"; then
frag_append_esc " endpoint $(enquote "${ip}:${port}")"
fi
frag_append_esc " wireguard $(enquote "${dev}") $(enquote "$(cat "${peerdir}/wgpk")")"
if test -n "${ip}"; then
frag_append_esc " endpoint $(enquote "${ip}:$(( port + 1 ))")"
fi
if test -n "${keepalive}"; then
frag_append_esc " persistent-keepalive $(enquote "${keepalive}")"
fi
if test -n "${allowedips}"; then
frag_append_esc " allowed-ips $(enquote "${allowedips}")"
fi
done
}
find_rosenpass_binary() {
local binary; binary=""
if [[ -n "${gitdir}" ]]; then
# If rp is run from the git repo, use the newest build artifact
binary=$(
find "${gitdir}/result/bin/${project_name}" \
"${gitdir}"/target/{release,debug}/"${project_name}" \
-printf "%T@ %p\n" 2>/dev/null \
| sort -nr \
| awk 'NR==1 { print($2) }'
)
elif [[ -n "${nixdir}" ]]; then
# If rp is run from nix, use the nix-installed rosenpass version
binary="${nixdir}/bin/${project_name}"
fi
if [[ -z "${binary}" ]]; then
binary="${project_name}"
fi
echo "${binary}"
}
main() {
formatting_init
cleanup_init
usage_init
frag_init
project_name="rosenpass"
verbose=0
scriptdir="$(dirname "${script}")"
gitdir="$(detect_git_dir)" || true
if [[ -d /nix ]]; then
nixdir="$(readlink -f result/bin/rp | grep -Pio '^/nix/store/[^/]+(?=/bin/[^/]+)')" || true
fi
binary="$(find_rosenpass_binary)"
# Parse command
usagestack+=("[explain]" "[verbose]" "genkey|pubkey|exchange" "[ARGS]...")
local cmd
while (( $# > 0 )); do
local arg; arg="$1"; shift
case "${arg}" in
genkey|pubkey|exchange) cmd="${arg}"; break;;
explain) explain=1;;
verbose) verbose=1;;
-h | -help | --help | help) usage; return 0 ;;
*) fatal "Unknown command ${arg}";;
esac
done
test -n "${cmd}" || fatal "No command supplied"
usagestack=("${script}")
# Execute command
usagestack+=("${cmd}")
"${cmd}" "$@"
usagestack=("${script}")
# Apply transaction
frag_apply
}
script="$0"
main "$@"

rp/Cargo.toml (new file, 42 lines)

@@ -0,0 +1,42 @@
[package]
name = "rp"
version = "0.2.1"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Build post-quantum-secure VPNs with WireGuard!"
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
anyhow = { workspace = true }
base64ct = { workspace = true }
x25519-dalek = { version = "2", features = ["static_secrets"] }
zeroize = { workspace = true }
rosenpass = { workspace = true }
rosenpass-ciphers = { workspace = true }
rosenpass-cipher-traits = { workspace = true }
rosenpass-secret-memory = { workspace = true }
rosenpass-util = { workspace = true }
rosenpass-wireguard-broker = {workspace = true}
tokio = {workspace = true}
[target.'cfg(any(target_os = "linux", target_os = "freebsd"))'.dependencies]
ctrlc-async = "3.2"
futures = "0.3"
futures-util = "0.3"
genetlink = "0.2"
rtnetlink = "0.14"
netlink-packet-core = "0.7"
netlink-packet-generic = "0.3"
netlink-packet-wireguard = "0.2"
[dev-dependencies]
tempfile = {workspace = true}
stacker = {workspace = true}
[features]
enable_memfd_alloc = []

rp/src/cli.rs (new file, 462 lines)

@@ -0,0 +1,462 @@
use std::path::PathBuf;
use std::{iter::Peekable, net::SocketAddr};
use crate::exchange::{ExchangeOptions, ExchangePeer};
pub enum Command {
GenKey {
private_keys_dir: PathBuf,
},
PubKey {
private_keys_dir: PathBuf,
public_keys_dir: PathBuf,
},
Exchange(ExchangeOptions),
Help,
}
enum CommandType {
GenKey,
PubKey,
Exchange,
}
#[derive(Default)]
pub struct Cli {
pub verbose: bool,
pub command: Option<Command>,
}
fn fatal<T>(note: &str, command: Option<CommandType>) -> Result<T, String> {
match command {
Some(command) => match command {
CommandType::GenKey => Err(format!("{}\nUsage: rp genkey PRIVATE_KEYS_DIR", note)),
CommandType::PubKey => Err(format!("{}\nUsage: rp pubkey PRIVATE_KEYS_DIR PUBLIC_KEYS_DIR", note)),
CommandType::Exchange => Err(format!("{}\nUsage: rp exchange PRIVATE_KEYS_DIR [dev <device>] [listen <ip>:<port>] [peer PUBLIC_KEYS_DIR [endpoint <ip>:<port>] [persistent-keepalive <interval>] [allowed-ips <ip1>/<cidr1>[,<ip2>/<cidr2>]...]]...", note)),
},
None => Err(format!("{}\nUsage: rp [verbose] genkey|pubkey|exchange [ARGS]...", note)),
}
}
impl ExchangePeer {
pub fn parse(args: &mut &mut Peekable<impl Iterator<Item = String>>) -> Result<Self, String> {
let mut peer = ExchangePeer::default();
if let Some(public_keys_dir) = args.next() {
peer.public_keys_dir = PathBuf::from(public_keys_dir);
} else {
return fatal(
"Required positional argument: PUBLIC_KEYS_DIR",
Some(CommandType::Exchange),
);
}
while let Some(x) = args.peek() {
let x = x.as_str();
// break if next peer is being defined
if x == "peer" {
break;
}
let x = args.next().unwrap();
let x = x.as_str();
match x {
"endpoint" => {
if let Some(addr) = args.next() {
if let Ok(addr) = addr.parse::<SocketAddr>() {
peer.endpoint = Some(addr);
} else {
return fatal(
"invalid parameter for endpoint option",
Some(CommandType::Exchange),
);
}
} else {
return fatal(
"endpoint option requires parameter",
Some(CommandType::Exchange),
);
}
}
"persistent-keepalive" => {
if let Some(ka) = args.next() {
if let Ok(ka) = ka.parse::<u32>() {
peer.persistent_keepalive = Some(ka);
} else {
return fatal(
"invalid parameter for persistent-keepalive option",
Some(CommandType::Exchange),
);
}
} else {
return fatal(
"persistent-keepalive option requires parameter",
Some(CommandType::Exchange),
);
}
}
"allowed-ips" => {
if let Some(ips) = args.next() {
peer.allowed_ips = Some(ips);
} else {
return fatal(
"allowed-ips option requires parameter",
Some(CommandType::Exchange),
);
}
}
_ => {
return fatal(
&format!("Unknown option {}", x),
Some(CommandType::Exchange),
)
}
}
}
Ok(peer)
}
}
impl ExchangeOptions {
pub fn parse(mut args: &mut Peekable<impl Iterator<Item = String>>) -> Result<Self, String> {
let mut options = ExchangeOptions::default();
if let Some(private_keys_dir) = args.next() {
options.private_keys_dir = PathBuf::from(private_keys_dir);
} else {
return fatal(
"Required positional argument: PRIVATE_KEYS_DIR",
Some(CommandType::Exchange),
);
}
while let Some(x) = args.next() {
let x = x.as_str();
match x {
"dev" => {
if let Some(device) = args.next() {
options.dev = Some(device);
} else {
return fatal("dev option requires parameter", Some(CommandType::Exchange));
}
}
"listen" => {
if let Some(addr) = args.next() {
if let Ok(addr) = addr.parse::<SocketAddr>() {
options.listen = Some(addr);
} else {
return fatal(
"invalid parameter for listen option",
Some(CommandType::Exchange),
);
}
} else {
return fatal(
"listen option requires parameter",
Some(CommandType::Exchange),
);
}
}
"peer" => {
let peer = ExchangePeer::parse(&mut args)?;
options.peers.push(peer);
}
_ => {
return fatal(
&format!("Unknown option {}", x),
Some(CommandType::Exchange),
)
}
}
}
Ok(options)
}
}
impl Cli {
pub fn parse(mut args: Peekable<impl Iterator<Item = String>>) -> Result<Self, String> {
let mut cli = Cli::default();
let _ = args.next(); // skip executable name
while let Some(x) = args.next() {
let x = x.as_str();
match x {
"verbose" => {
cli.verbose = true;
}
"explain" => {
eprintln!("WARN: the explain argument is no longer supported");
}
"genkey" => {
if cli.command.is_some() {
return fatal("Too many commands supplied", None);
}
if let Some(private_keys_dir) = args.next() {
let private_keys_dir = PathBuf::from(private_keys_dir);
cli.command = Some(Command::GenKey { private_keys_dir });
} else {
return fatal(
"Required positional argument: PRIVATE_KEYS_DIR",
Some(CommandType::GenKey),
);
}
}
"pubkey" => {
if cli.command.is_some() {
return fatal("Too many commands supplied", None);
}
if let Some(private_keys_dir) = args.next() {
let private_keys_dir = PathBuf::from(private_keys_dir);
if let Some(public_keys_dir) = args.next() {
let public_keys_dir = PathBuf::from(public_keys_dir);
cli.command = Some(Command::PubKey {
private_keys_dir,
public_keys_dir,
});
} else {
return fatal(
"Required positional argument: PUBLIC_KEYS_DIR",
Some(CommandType::PubKey),
);
}
} else {
return fatal(
"Required positional argument: PRIVATE_KEYS_DIR",
Some(CommandType::PubKey),
);
}
}
"exchange" => {
if cli.command.is_some() {
return fatal("Too many commands supplied", None);
}
let options = ExchangeOptions::parse(&mut args)?;
cli.command = Some(Command::Exchange(options));
}
"help" => {
cli.command = Some(Command::Help);
}
_ => return fatal(&format!("Unknown command {}", x), None),
};
}
if cli.command.is_none() {
return fatal("No command supplied", None);
}
Ok(cli)
}
}
#[cfg(test)]
mod tests {
use crate::cli::{Cli, Command};
#[inline]
fn parse(arr: &[&str]) -> Result<Cli, String> {
Cli::parse(arr.iter().map(|x| x.to_string()).peekable())
}
#[inline]
fn parse_err(arr: &[&str]) -> bool {
parse(arr).is_err()
}
#[test]
fn bare_errors() {
assert!(parse_err(&["rp"]));
assert!(parse_err(&["rp", "verbose"]));
assert!(parse_err(&["rp", "thiscommanddoesntexist"]));
assert!(parse_err(&[
"rp",
"thiscommanddoesntexist",
"genkey",
"./fakedir/"
]));
}
#[test]
fn genkey_errors() {
assert!(parse_err(&["rp", "genkey"]));
}
#[test]
fn genkey_works() {
let cli = parse(&["rp", "genkey", "./fakedir"]);
assert!(cli.is_ok());
let cli = cli.unwrap();
assert!(!cli.verbose);
assert!(matches!(cli.command, Some(Command::GenKey { .. })));
match cli.command {
Some(Command::GenKey { private_keys_dir }) => {
assert_eq!(private_keys_dir.to_str().unwrap(), "./fakedir");
}
_ => unreachable!(),
};
}
#[test]
fn pubkey_errors() {
assert!(parse_err(&["rp", "pubkey"]));
assert!(parse_err(&["rp", "pubkey", "./fakedir"]));
}
#[test]
fn pubkey_works() {
let cli = parse(&["rp", "pubkey", "./fakedir", "./fakedir2"]);
assert!(cli.is_ok());
let cli = cli.unwrap();
assert!(!cli.verbose);
assert!(matches!(cli.command, Some(Command::PubKey { .. })));
match cli.command {
Some(Command::PubKey {
private_keys_dir,
public_keys_dir,
}) => {
assert_eq!(private_keys_dir.to_str().unwrap(), "./fakedir");
assert_eq!(public_keys_dir.to_str().unwrap(), "./fakedir2");
}
_ => unreachable!(),
}
}
#[test]
fn exchange_errors() {
assert!(parse_err(&["rp", "exchange"]));
assert!(parse_err(&[
"rp",
"exchange",
"./fakedir",
"notarealoption"
]));
assert!(parse_err(&["rp", "exchange", "./fakedir", "listen"]));
assert!(parse_err(&[
"rp",
"exchange",
"./fakedir",
"listen",
"notarealip"
]));
}
#[test]
fn exchange_works() {
let cli = parse(&["rp", "exchange", "./fakedir"]);
assert!(cli.is_ok());
let cli = cli.unwrap();
assert!(!cli.verbose);
assert!(matches!(cli.command, Some(Command::Exchange(_))));
match cli.command {
Some(Command::Exchange(options)) => {
assert_eq!(options.private_keys_dir.to_str().unwrap(), "./fakedir");
assert!(options.dev.is_none());
assert!(options.listen.is_none());
assert_eq!(options.peers.len(), 0);
}
_ => unreachable!(),
}
let cli = parse(&[
"rp",
"exchange",
"./fakedir",
"dev",
"devname",
"listen",
"127.0.0.1:1234",
]);
assert!(cli.is_ok());
let cli = cli.unwrap();
assert!(!cli.verbose);
assert!(matches!(cli.command, Some(Command::Exchange(_))));
match cli.command {
Some(Command::Exchange(options)) => {
assert_eq!(options.private_keys_dir.to_str().unwrap(), "./fakedir");
assert_eq!(options.dev, Some("devname".to_string()));
assert_eq!(options.listen, Some("127.0.0.1:1234".parse().unwrap()));
assert_eq!(options.peers.len(), 0);
}
_ => unreachable!(),
}
let cli = parse(&[
"rp",
"exchange",
"./fakedir",
"dev",
"devname",
"listen",
"127.0.0.1:1234",
"peer",
"./fakedir2",
"endpoint",
"127.0.0.1:2345",
"persistent-keepalive",
"15",
"allowed-ips",
"123.234.11.0/24,1.1.1.0/24",
"peer",
"./fakedir3",
"endpoint",
"127.0.0.1:5432",
"persistent-keepalive",
"30",
]);
assert!(cli.is_ok());
let cli = cli.unwrap();
assert!(!cli.verbose);
assert!(matches!(cli.command, Some(Command::Exchange(_))));
match cli.command {
Some(Command::Exchange(options)) => {
assert_eq!(options.private_keys_dir.to_str().unwrap(), "./fakedir");
assert_eq!(options.dev, Some("devname".to_string()));
assert_eq!(options.listen, Some("127.0.0.1:1234".parse().unwrap()));
assert_eq!(options.peers.len(), 2);
let peer = &options.peers[0];
assert_eq!(peer.public_keys_dir.to_str().unwrap(), "./fakedir2");
assert_eq!(peer.endpoint, Some("127.0.0.1:2345".parse().unwrap()));
assert_eq!(peer.persistent_keepalive, Some(15));
assert_eq!(
peer.allowed_ips,
Some("123.234.11.0/24,1.1.1.0/24".to_string())
);
let peer = &options.peers[1];
assert_eq!(peer.public_keys_dir.to_str().unwrap(), "./fakedir3");
assert_eq!(peer.endpoint, Some("127.0.0.1:5432".parse().unwrap()));
assert_eq!(peer.persistent_keepalive, Some(30));
assert!(peer.allowed_ips.is_none());
}
_ => unreachable!(),
}
}
}

rp/src/exchange.rs (new file, 279 lines)

@@ -0,0 +1,279 @@
use std::{net::SocketAddr, path::PathBuf};
use anyhow::Result;
use crate::key::WG_B64_LEN;
#[derive(Default)]
pub struct ExchangePeer {
pub public_keys_dir: PathBuf,
pub endpoint: Option<SocketAddr>,
pub persistent_keepalive: Option<u32>,
pub allowed_ips: Option<String>,
}
#[derive(Default)]
pub struct ExchangeOptions {
pub verbose: bool,
pub private_keys_dir: PathBuf,
pub dev: Option<String>,
pub listen: Option<SocketAddr>,
pub peers: Vec<ExchangePeer>,
}
#[cfg(not(any(target_os = "linux", target_os = "freebsd")))]
pub async fn exchange(_: ExchangeOptions) -> Result<()> {
use anyhow::anyhow;
Err(anyhow!(
"Your system {} is not yet supported. We are happy to receive patches to address this :)",
std::env::consts::OS
))
}
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
mod netlink {
use anyhow::Result;
use futures_util::{StreamExt as _, TryStreamExt as _};
use genetlink::GenetlinkHandle;
use netlink_packet_core::{NLM_F_ACK, NLM_F_REQUEST};
use netlink_packet_wireguard::nlas::WgDeviceAttrs;
use rtnetlink::Handle;
pub async fn link_create_and_up(rtnetlink: &Handle, link_name: String) -> Result<u32> {
// add the link
rtnetlink
.link()
.add()
.wireguard(link_name.clone())
.execute()
.await?;
// retrieve the link to be able to up it
let link = rtnetlink
.link()
.get()
.match_name(link_name.clone())
.execute()
.into_stream()
.into_future()
.await
.0
.unwrap()?;
// up the link
rtnetlink
.link()
.set(link.header.index)
.up()
.execute()
.await?;
Ok(link.header.index)
}
pub async fn link_cleanup(rtnetlink: &Handle, index: u32) -> Result<()> {
rtnetlink.link().del(index).execute().await?;
Ok(())
}
pub async fn link_cleanup_standalone(index: u32) -> Result<()> {
let (connection, rtnetlink, _) = rtnetlink::new_connection()?;
tokio::spawn(connection);
// We don't care if this fails, as the device may already have been auto-cleaned up.
let _ = rtnetlink.link().del(index).execute().await;
Ok(())
}
/// This replicates the functionality of the `wg set` command line tool.
///
/// It sets the specified WireGuard attributes of the indexed device by
/// communicating with WireGuard's generic netlink interface, like the
/// `wg` tool does.
pub async fn wg_set(
genetlink: &mut GenetlinkHandle,
index: u32,
mut attr: Vec<WgDeviceAttrs>,
) -> Result<()> {
use futures_util::StreamExt as _;
use netlink_packet_core::{NetlinkMessage, NetlinkPayload};
use netlink_packet_generic::GenlMessage;
use netlink_packet_wireguard::{Wireguard, WireguardCmd};
// Scope our `set` command to only the device of the specified index
attr.insert(0, WgDeviceAttrs::IfIndex(index));
// Construct the WireGuard-specific netlink packet
let wgc = Wireguard {
cmd: WireguardCmd::SetDevice,
nlas: attr,
};
// Construct final message
let genl = GenlMessage::from_payload(wgc);
let mut nlmsg = NetlinkMessage::from(genl);
nlmsg.header.flags = NLM_F_REQUEST | NLM_F_ACK;
// Send and wait for the ACK or error
let (res, _) = genetlink.request(nlmsg).await?.into_future().await;
if let Some(res) = res {
let res = res?;
if let NetlinkPayload::Error(err) = res.payload {
return Err(err.to_io().into());
}
}
Ok(())
}
}
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
pub async fn exchange(options: ExchangeOptions) -> Result<()> {
use std::fs;
use anyhow::anyhow;
use netlink_packet_wireguard::{constants::WG_KEY_LEN, nlas::WgDeviceAttrs};
use rosenpass::{
app_server::{AppServer, BrokerPeer},
config::Verbosity,
protocol::{SPk, SSk, SymKey},
};
use rosenpass_secret_memory::Secret;
use rosenpass_util::file::{LoadValue as _, LoadValueB64};
use rosenpass_wireguard_broker::brokers::native_unix::{
NativeUnixBroker, NativeUnixBrokerConfigBaseBuilder, NativeUnixBrokerConfigBaseBuilderError,
};
let (connection, rtnetlink, _) = rtnetlink::new_connection()?;
tokio::spawn(connection);
let link_name = options.dev.unwrap_or("rosenpass0".to_string());
let link_index = netlink::link_create_and_up(&rtnetlink, link_name.clone()).await?;
ctrlc_async::set_async_handler(async move {
netlink::link_cleanup_standalone(link_index)
.await
.expect("Failed to clean up");
})?;
// Deploy the classic wireguard private key
let (connection, mut genetlink, _) = genetlink::new_connection()?;
tokio::spawn(connection);
let wgsk_path = options.private_keys_dir.join("wgsk");
let wgsk = Secret::<WG_KEY_LEN>::load_b64::<WG_B64_LEN, _>(wgsk_path)?;
let mut attr: Vec<WgDeviceAttrs> = Vec::with_capacity(2);
attr.push(WgDeviceAttrs::PrivateKey(*wgsk.secret()));
if let Some(listen) = options.listen {
if listen.port() == u16::MAX {
return Err(anyhow!("You may not use {} as the listen port.", u16::MAX));
}
attr.push(WgDeviceAttrs::ListenPort(listen.port() + 1));
}
netlink::wg_set(&mut genetlink, link_index, attr).await?;
let pqsk = options.private_keys_dir.join("pqsk");
let pqpk = options.private_keys_dir.join("pqpk");
let sk = SSk::load(&pqsk)?;
let pk = SPk::load(&pqpk)?;
let mut srv = Box::new(AppServer::new(
sk,
pk,
if let Some(listen) = options.listen {
vec![listen]
} else {
Vec::with_capacity(0)
},
if options.verbose {
Verbosity::Verbose
} else {
Verbosity::Quiet
},
None,
)?);
let broker_store_ptr = srv.register_broker(Box::new(NativeUnixBroker::new()))?;
fn cfg_err_map(e: NativeUnixBrokerConfigBaseBuilderError) -> anyhow::Error {
anyhow::Error::msg(format!("NativeUnixBrokerConfigBaseBuilderError: {:?}", e))
}
for peer in options.peers {
let wgpk = peer.public_keys_dir.join("wgpk");
let pqpk = peer.public_keys_dir.join("pqpk");
let psk = peer.public_keys_dir.join("psk");
let mut extra_params: Vec<String> = Vec::with_capacity(6);
if let Some(endpoint) = peer.endpoint {
extra_params.push("endpoint".to_string());
// Peer endpoints always use (port + 1) in wg set params
let endpoint = SocketAddr::new(endpoint.ip(), endpoint.port() + 1);
extra_params.push(endpoint.to_string());
}
if let Some(persistent_keepalive) = peer.persistent_keepalive {
extra_params.push("persistent-keepalive".to_string());
extra_params.push(persistent_keepalive.to_string());
}
if let Some(allowed_ips) = &peer.allowed_ips {
extra_params.push("allowed-ips".to_string());
extra_params.push(allowed_ips.clone());
}
let peer_cfg = NativeUnixBrokerConfigBaseBuilder::default()
.peer_id_b64(&fs::read_to_string(wgpk)?)?
.interface(link_name.clone())
.extra_params_ser(&extra_params)?
.build()
.map_err(cfg_err_map)?;
let broker_peer = Some(BrokerPeer::new(
broker_store_ptr.clone(),
Box::new(peer_cfg),
));
srv.add_peer(
if psk.exists() {
Some(SymKey::load_b64::<WG_B64_LEN, _>(psk))
} else {
None
}
.transpose()?,
SPk::load(&pqpk)?,
None,
broker_peer,
peer.endpoint.map(|x| x.to_string()),
)?;
}
let out = srv.event_loop();
netlink::link_cleanup(&rtnetlink, link_index).await?;
match out {
Ok(_) => Ok(()),
Err(e) => {
// Check if the returned error is actually EINTR, in which case the run actually succeeded.
let is_ok = if let Some(e) = e.root_cause().downcast_ref::<std::io::Error>() {
matches!(e.kind(), std::io::ErrorKind::Interrupted)
} else {
false
};
if is_ok {
Ok(())
} else {
Err(e)
}
}
}
}
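A worked example of the port convention used above: Rosenpass itself binds the `listen` address as given, while the WireGuard device's listen-port and the per-peer endpoints handed to `wg set` are shifted by one. A small sketch (the helper name is made up for illustration):

use std::net::SocketAddr;

// Derive the WireGuard-side address from a Rosenpass address, as done above
// for both WgDeviceAttrs::ListenPort and the per-peer "endpoint" extra_params entry.
fn wireguard_side(rosenpass_addr: SocketAddr) -> SocketAddr {
    SocketAddr::new(rosenpass_addr.ip(), rosenpass_addr.port() + 1)
}

// E.g. listen = 192.0.2.1:9999 -> WireGuard listen-port 10000. This is also
// why a listen port of u16::MAX is rejected above: 65535 + 1 would overflow.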

rp/src/key.rs (new file, 151 lines)

@@ -0,0 +1,151 @@
use std::{
fs::{self, DirBuilder},
os::unix::fs::{DirBuilderExt, PermissionsExt},
path::Path,
};
use anyhow::{anyhow, Result};
use rosenpass_util::file::{LoadValueB64, StoreValueB64};
use zeroize::Zeroize;
use rosenpass::protocol::{SPk, SSk};
use rosenpass_cipher_traits::Kem;
use rosenpass_ciphers::kem::StaticKem;
use rosenpass_secret_memory::{file::StoreSecret as _, Public, Secret};
pub const WG_B64_LEN: usize = 32 * 5 / 3;
#[cfg(not(target_family = "unix"))]
pub fn genkey(_: &Path) -> Result<()> {
Err(anyhow!(
"Your system {} is not yet supported. We are happy to receive patches to address this :)",
std::env::consts::OS
))
}
#[cfg(target_family = "unix")]
pub fn genkey(private_keys_dir: &Path) -> Result<()> {
if private_keys_dir.exists() {
if fs::metadata(private_keys_dir)?.permissions().mode() != 0o700 {
return Err(anyhow!(
"Directory {:?} has incorrect permissions: please use 0700 for proper security.",
private_keys_dir
));
}
} else {
DirBuilder::new()
.recursive(true)
.mode(0o700)
.create(private_keys_dir)?;
}
let wgsk_path = private_keys_dir.join("wgsk");
let pqsk_path = private_keys_dir.join("pqsk");
let pqpk_path = private_keys_dir.join("pqpk");
if !wgsk_path.exists() {
let wgsk: Secret<32> = Secret::random();
wgsk.store_b64::<WG_B64_LEN, _>(wgsk_path)?;
} else {
eprintln!(
"WireGuard secret key already exists at {:#?}: not regenerating",
wgsk_path
);
}
if !pqsk_path.exists() && !pqpk_path.exists() {
let mut pqsk = SSk::random();
let mut pqpk = SPk::random();
StaticKem::keygen(pqsk.secret_mut(), pqpk.secret_mut())?;
pqpk.store_secret(pqpk_path)?;
pqsk.store_secret(pqsk_path)?;
} else {
eprintln!(
"Rosenpass keys already exist in {:#?}: not regenerating",
private_keys_dir
);
}
Ok(())
}
pub fn pubkey(private_keys_dir: &Path, public_keys_dir: &Path) -> Result<()> {
if public_keys_dir.exists() {
return Err(anyhow!("Directory {:?} already exists", public_keys_dir));
}
fs::create_dir_all(public_keys_dir)?;
let private_wgsk = private_keys_dir.join("wgsk");
let public_wgpk = public_keys_dir.join("wgpk");
let private_pqpk = private_keys_dir.join("pqpk");
let public_pqpk = public_keys_dir.join("pqpk");
let wgsk = Secret::load_b64::<WG_B64_LEN, _>(private_wgsk)?;
let mut wgpk: Public<32> = {
let mut secret = x25519_dalek::StaticSecret::from(*wgsk.secret());
let public = x25519_dalek::PublicKey::from(&secret);
secret.zeroize();
Public::from_slice(public.as_bytes())
};
wgpk.store_b64::<WG_B64_LEN, _>(public_wgpk)?;
wgpk.zeroize();
fs::copy(private_pqpk, public_pqpk)?;
Ok(())
}
#[cfg(test)]
mod tests {
use std::fs;
use rosenpass::protocol::{SPk, SSk};
use rosenpass_secret_memory::secret_policy_try_use_memfd_secrets;
use rosenpass_secret_memory::Secret;
use rosenpass_util::file::LoadValue;
use rosenpass_util::file::LoadValueB64;
use tempfile::tempdir;
use crate::key::{genkey, pubkey, WG_B64_LEN};
#[test]
fn test_key_loopback() {
secret_policy_try_use_memfd_secrets();
let private_keys_dir = tempdir().unwrap();
fs::remove_dir(private_keys_dir.path()).unwrap();
// Guaranteed to have 16MB of stack size
stacker::grow(8 * 1024 * 1024, || {
assert!(genkey(private_keys_dir.path()).is_ok());
});
assert!(private_keys_dir.path().exists());
assert!(private_keys_dir.path().is_dir());
assert!(SPk::load(private_keys_dir.path().join("pqpk")).is_ok());
assert!(SSk::load(private_keys_dir.path().join("pqsk")).is_ok());
assert!(
Secret::<32>::load_b64::<WG_B64_LEN, _>(private_keys_dir.path().join("wgsk")).is_ok()
);
let public_keys_dir = tempdir().unwrap();
fs::remove_dir(public_keys_dir.path()).unwrap();
// Guaranteed to have 16MB of stack size
stacker::grow(8 * 1024 * 1024, || {
assert!(pubkey(private_keys_dir.path(), public_keys_dir.path()).is_ok());
});
assert!(public_keys_dir.path().exists());
assert!(public_keys_dir.path().is_dir());
assert!(SPk::load(public_keys_dir.path().join("pqpk")).is_ok());
assert!(
Secret::<32>::load_b64::<WG_B64_LEN, _>(public_keys_dir.path().join("wgpk")).is_ok()
);
let pk_1 = fs::read(private_keys_dir.path().join("pqpk")).unwrap();
let pk_2 = fs::read(public_keys_dir.path().join("pqpk")).unwrap();
assert_eq!(pk_1, pk_2);
}
}

rp/src/main.rs (new file, 52 lines)

@@ -0,0 +1,52 @@
use std::process::exit;
use cli::{Cli, Command};
use exchange::exchange;
use key::{genkey, pubkey};
use rosenpass_secret_memory::policy;
mod cli;
mod exchange;
mod key;
#[tokio::main]
async fn main() {
#[cfg(feature = "enable_memfd_alloc")]
policy::secret_policy_try_use_memfd_secrets();
#[cfg(not(feature = "enable_memfd_alloc"))]
policy::secret_policy_use_only_malloc_secrets();
let cli = match Cli::parse(std::env::args().peekable()) {
Ok(cli) => cli,
Err(err) => {
eprintln!("{}", err);
exit(1);
}
};
let command = cli.command.unwrap();
let res = match command {
Command::GenKey { private_keys_dir } => genkey(&private_keys_dir),
Command::PubKey {
private_keys_dir,
public_keys_dir,
} => pubkey(&private_keys_dir, &public_keys_dir),
Command::Exchange(mut options) => {
options.verbose = cli.verbose;
exchange(options).await
}
Command::Help => {
println!("Usage: rp [verbose] genkey|pubkey|exchange [ARGS]...");
Ok(())
}
};
match res {
Ok(_) => {}
Err(err) => {
eprintln!("An error occurred: {}", err);
exit(1);
}
}
}


@@ -21,3 +21,6 @@ log = { workspace = true }
[dev-dependencies]
allocator-api2-tests = { workspace = true }
tempfile = {workspace = true}
base64ct = {workspace = true}
procspawn = {workspace = true}


@@ -4,37 +4,41 @@ use std::ptr::NonNull;
use allocator_api2::alloc::{AllocError, Allocator, Layout};
#[derive(Copy, Clone, Default)]
struct MemsecAllocatorContents;
struct MallocAllocatorContents;
/// Memory allocation using the memsec crate
#[derive(Copy, Clone, Default)]
pub struct MemsecAllocator {
_dummy_private_data: MemsecAllocatorContents,
pub struct MallocAllocator {
_dummy_private_data: MallocAllocatorContents,
}
/// A box backed by the memsec allocator
pub type MemsecBox<T> = allocator_api2::boxed::Box<T, MemsecAllocator>;
pub type MallocBox<T> = allocator_api2::boxed::Box<T, MallocAllocator>;
/// A vector backed by the memsec allocator
pub type MemsecVec<T> = allocator_api2::vec::Vec<T, MemsecAllocator>;
pub type MallocVec<T> = allocator_api2::vec::Vec<T, MallocAllocator>;
pub fn memsec_box<T>(x: T) -> MemsecBox<T> {
MemsecBox::<T>::new_in(x, MemsecAllocator::new())
pub fn malloc_box_try<T>(x: T) -> Result<MallocBox<T>, AllocError> {
MallocBox::<T>::try_new_in(x, MallocAllocator::new())
}
pub fn memsec_vec<T>() -> MemsecVec<T> {
MemsecVec::<T>::new_in(MemsecAllocator::new())
pub fn malloc_box<T>(x: T) -> MallocBox<T> {
MallocBox::<T>::new_in(x, MallocAllocator::new())
}
impl MemsecAllocator {
pub fn malloc_vec<T>() -> MallocVec<T> {
MallocVec::<T>::new_in(MallocAllocator::new())
}
impl MallocAllocator {
pub fn new() -> Self {
Self {
_dummy_private_data: MemsecAllocatorContents,
_dummy_private_data: MallocAllocatorContents,
}
}
}
unsafe impl Allocator for MemsecAllocator {
unsafe impl Allocator for MallocAllocator {
fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
// Call memsec allocator
let mem: Option<NonNull<[u8]>> = unsafe { memsec::malloc_sized(layout.size()) };
@@ -48,8 +52,8 @@ unsafe impl Allocator for MemsecAllocator {
// Ensure the right alignment is used
let off = (mem.as_ptr() as *const u8).align_offset(layout.align());
if off != 0 {
log::error!("Allocation {layout:?} was requested but memsec returned allocation \
with offset {off} from the requested alignment. Memsec always allocates values \
log::error!("Allocation {layout:?} was requested but malloc-based memsec returned allocation \
with offset {off} from the requested alignment. Malloc always allocates values \
at the end of a memory page for security reasons, custom alignments are not supported. \
You could try allocating an oversized value.");
unsafe { memsec::free(mem) };
@@ -66,7 +70,7 @@ unsafe impl Allocator for MemsecAllocator {
}
}
impl fmt::Debug for MemsecAllocator {
impl fmt::Debug for MallocAllocator {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("<memsec based Rust allocator>")
}
@@ -78,21 +82,21 @@ mod test {
use super::*;
make_test! { test_sizes(MemsecAllocator::new()) }
make_test! { test_vec(MemsecAllocator::new()) }
make_test! { test_many_boxes(MemsecAllocator::new()) }
make_test! { test_sizes(MallocAllocator::new()) }
make_test! { test_vec(MallocAllocator::new()) }
make_test! { test_many_boxes(MallocAllocator::new()) }
#[test]
fn memsec_allocation() {
let alloc = MemsecAllocator::new();
memsec_allocation_impl::<0>(&alloc);
memsec_allocation_impl::<7>(&alloc);
memsec_allocation_impl::<8>(&alloc);
memsec_allocation_impl::<64>(&alloc);
memsec_allocation_impl::<999>(&alloc);
fn malloc_allocation() {
let alloc = MallocAllocator::new();
malloc_allocation_impl::<0>(&alloc);
malloc_allocation_impl::<7>(&alloc);
malloc_allocation_impl::<8>(&alloc);
malloc_allocation_impl::<64>(&alloc);
malloc_allocation_impl::<999>(&alloc);
}
fn memsec_allocation_impl<const N: usize>(alloc: &MemsecAllocator) {
fn malloc_allocation_impl<const N: usize>(alloc: &MallocAllocator) {
let layout = Layout::new::<[u8; N]>();
let mem = alloc.allocate(layout).unwrap();


@@ -0,0 +1,112 @@
#![cfg(target_os = "linux")]
use std::fmt;
use std::ptr::NonNull;
use allocator_api2::alloc::{AllocError, Allocator, Layout};
#[derive(Copy, Clone, Default)]
struct MemfdSecAllocatorContents;
/// Memory allocation using the memsec crate
#[derive(Copy, Clone, Default)]
pub struct MemfdSecAllocator {
_dummy_private_data: MemfdSecAllocatorContents,
}
/// A box backed by the memsec allocator
pub type MemfdSecBox<T> = allocator_api2::boxed::Box<T, MemfdSecAllocator>;
/// A vector backed by the memsec allocator
pub type MemfdSecVec<T> = allocator_api2::vec::Vec<T, MemfdSecAllocator>;
pub fn memfdsec_box_try<T>(x: T) -> Result<MemfdSecBox<T>, AllocError> {
MemfdSecBox::<T>::try_new_in(x, MemfdSecAllocator::new())
}
pub fn memfdsec_box<T>(x: T) -> MemfdSecBox<T> {
MemfdSecBox::<T>::new_in(x, MemfdSecAllocator::new())
}
pub fn memfdsec_vec<T>() -> MemfdSecVec<T> {
MemfdSecVec::<T>::new_in(MemfdSecAllocator::new())
}
impl MemfdSecAllocator {
pub fn new() -> Self {
Self {
_dummy_private_data: MemfdSecAllocatorContents,
}
}
}
unsafe impl Allocator for MemfdSecAllocator {
fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
// Call memsec allocator
let mem: Option<NonNull<[u8]>> = unsafe { memsec::memfd_secret_sized(layout.size()) };
// Unwrap the option
let Some(mem) = mem else {
log::error!("Allocation {layout:?} was requested but memfd-based memsec returned a null pointer");
return Err(AllocError);
};
// Ensure the right alignment is used
let off = (mem.as_ptr() as *const u8).align_offset(layout.align());
if off != 0 {
log::error!("Allocation {layout:?} was requested but memfd-based memsec returned allocation \
with offset {off} from the requested alignment. Memfd always allocates values \
at the end of a memory page for security reasons, custom alignments are not supported. \
You could try allocating an oversized value.");
unsafe { memsec::free_memfd_secret(mem) };
return Err(AllocError);
};
Ok(mem)
}
unsafe fn deallocate(&self, ptr: NonNull<u8>, _layout: Layout) {
unsafe {
memsec::free_memfd_secret(ptr);
}
}
}
impl fmt::Debug for MemfdSecAllocator {
fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
fmt.write_str("<memsec based Rust allocator>")
}
}
#[cfg(test)]
mod test {
use allocator_api2_tests::make_test;
use super::*;
make_test! { test_sizes(MemfdSecAllocator::new()) }
make_test! { test_vec(MemfdSecAllocator::new()) }
make_test! { test_many_boxes(MemfdSecAllocator::new()) }
#[test]
fn memfdsec_allocation() {
let alloc = MemfdSecAllocator::new();
memfdsec_allocation_impl::<0>(&alloc);
memfdsec_allocation_impl::<7>(&alloc);
memfdsec_allocation_impl::<8>(&alloc);
memfdsec_allocation_impl::<64>(&alloc);
memfdsec_allocation_impl::<999>(&alloc);
}
fn memfdsec_allocation_impl<const N: usize>(alloc: &MemfdSecAllocator) {
let layout = Layout::new::<[u8; N]>();
let mem = alloc.allocate(layout).unwrap();
// https://libsodium.gitbook.io/doc/memory_management#guarded-heap-allocations
// promises us that allocated memory is initialized with the magic byte 0xDB
// and memsec promises to provide a reimplementation of the libsodium mechanism;
// it uses the magic value 0xD0 though
assert_eq!(unsafe { mem.as_ref() }, &[0xD0u8; N]);
let mem = NonNull::new(mem.as_ptr() as *mut u8).unwrap();
unsafe { alloc.deallocate(mem, layout) };
}
}
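Because this allocator is Linux-only and memfd_secret(2) additionally requires kernel support, callers are expected to probe before committing to it. A minimal sketch of such a probe (this is essentially what `secret_policy_try_use_memfd_secrets` in the policy module further down does; the function name here is made up):

#[cfg(target_os = "linux")]
fn memfd_secret_available() -> bool {
    // memfdsec_box_try reports AllocError instead of aborting, so a tiny
    // throwaway allocation doubles as a capability check.
    crate::alloc::memsec::memfdsec::memfdsec_box_try(0u8).is_ok()
}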


@@ -0,0 +1,2 @@
pub mod malloc;
pub mod memfdsec;


@@ -1,6 +1,86 @@
pub mod memsec;
pub use crate::alloc::memsec::{
memsec_box as secret_box, memsec_vec as secret_vec, MemsecAllocator as SecretAllocator,
MemsecBox as SecretBox, MemsecVec as SecretVec,
};
use std::sync::OnceLock;
use allocator_api2::alloc::{AllocError, Allocator};
use memsec::malloc::MallocAllocator;
#[cfg(target_os = "linux")]
use memsec::memfdsec::MemfdSecAllocator;
static ALLOC_TYPE: OnceLock<SecretAllocType> = OnceLock::new();
/// Sets the secret allocation type to use.
/// Intended to be called at startup, before any secret
/// allocation takes place.
pub fn set_secret_alloc_type(alloc_type: SecretAllocType) {
ALLOC_TYPE.set(alloc_type).unwrap();
}
pub fn get_or_init_secret_alloc_type(alloc_type: SecretAllocType) -> SecretAllocType {
*ALLOC_TYPE.get_or_init(|| alloc_type)
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SecretAllocType {
MemsecMalloc,
#[cfg(target_os = "linux")]
MemsecMemfdSec,
}
pub struct SecretAlloc {
alloc_type: SecretAllocType,
}
impl Default for SecretAlloc {
fn default() -> Self {
Self {
alloc_type: *ALLOC_TYPE.get().expect(
"Secret security policy not specified. \
Run the specifying policy function in \
rosenpass_secret_memory::policy or set a \
custom policy by initializing \
rosenpass_secret_memory::alloc::ALLOC_TYPE \
before using secrets",
),
}
}
}
unsafe impl Allocator for SecretAlloc {
fn allocate(
&self,
layout: std::alloc::Layout,
) -> Result<std::ptr::NonNull<[u8]>, allocator_api2::alloc::AllocError> {
match self.alloc_type {
SecretAllocType::MemsecMalloc => MallocAllocator::default().allocate(layout),
#[cfg(target_os = "linux")]
SecretAllocType::MemsecMemfdSec => MemfdSecAllocator::default().allocate(layout),
}
}
unsafe fn deallocate(&self, ptr: std::ptr::NonNull<u8>, layout: std::alloc::Layout) {
match self.alloc_type {
SecretAllocType::MemsecMalloc => MallocAllocator::default().deallocate(ptr, layout),
#[cfg(target_os = "linux")]
SecretAllocType::MemsecMemfdSec => MemfdSecAllocator::default().deallocate(ptr, layout),
}
}
}
pub type SecretBox<T> = allocator_api2::boxed::Box<T, SecretAlloc>;
/// A vector backed by the memsec allocator
pub type SecretVec<T> = allocator_api2::vec::Vec<T, SecretAlloc>;
pub fn secret_box_try<T>(x: T) -> Result<SecretBox<T>, AllocError> {
SecretBox::<T>::try_new_in(x, SecretAlloc::default())
}
pub fn secret_box<T>(x: T) -> SecretBox<T> {
SecretBox::<T>::new_in(x, SecretAlloc::default())
}
pub fn secret_vec<T>() -> SecretVec<T> {
SecretVec::<T>::new_in(SecretAlloc::default())
}
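Taken together, the intended flow is: pick a backend once, early, then allocate through the type-erased `SecretAlloc`. A minimal sketch, with external module paths assumed from this diff (`set_secret_alloc_type` panics if a type was already chosen, and `SecretAlloc::default()` panics if none was):

// Module paths assumed; within the crate these items live in the alloc module shown above.
use rosenpass_secret_memory::alloc::{secret_box, secret_vec, set_secret_alloc_type, SecretAllocType};

fn init_and_allocate() {
    // Select the malloc-backed memsec allocator for this process, exactly once.
    set_secret_alloc_type(SecretAllocType::MemsecMalloc);

    // Both helpers now route through the chosen backend.
    let key = secret_box([0u8; 32]);      // 32-byte secret in guarded memory
    let mut scratch = secret_vec::<u8>(); // growable secret buffer
    scratch.push(key[0]);                 // both values share the same allocator
}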


@@ -4,4 +4,5 @@ pub trait StoreSecret {
type Error;
fn store_secret<P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>;
fn store<P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>;
}


@@ -9,3 +9,6 @@ pub use crate::public::Public;
mod secret;
pub use crate::secret::Secret;
pub mod policy;
pub use crate::policy::*;


@@ -0,0 +1,82 @@
pub fn secret_policy_try_use_memfd_secrets() {
let alloc_type = {
#[cfg(target_os = "linux")]
{
if crate::alloc::memsec::memfdsec::memfdsec_box_try(0u8).is_ok() {
crate::alloc::SecretAllocType::MemsecMemfdSec
} else {
crate::alloc::SecretAllocType::MemsecMalloc
}
}
#[cfg(not(target_os = "linux"))]
{
crate::alloc::SecretAllocType::MemsecMalloc
}
};
assert_eq!(
alloc_type,
crate::alloc::get_or_init_secret_alloc_type(alloc_type)
);
log::info!("Secrets will be allocated using {:?}", alloc_type);
}
#[cfg(target_os = "linux")]
pub fn secret_policy_use_only_memfd_secrets() {
let alloc_type = crate::alloc::SecretAllocType::MemsecMemfdSec;
assert_eq!(
alloc_type,
crate::alloc::get_or_init_secret_alloc_type(alloc_type)
);
log::info!("Secrets will be allocated using {:?}", alloc_type);
}
pub fn secret_policy_use_only_malloc_secrets() {
let alloc_type = crate::alloc::SecretAllocType::MemsecMalloc;
assert_eq!(
alloc_type,
crate::alloc::get_or_init_secret_alloc_type(alloc_type)
);
log::info!("Secrets will be allocated using {:?}", alloc_type);
}
pub mod test {
#[macro_export]
macro_rules! test_spawn_process_with_policies {
($body:block, $($f: expr),*) => {
$(
let handle = procspawn::spawn((), |_| {
$f();
$body
});
handle.join().unwrap();
)*
};
}
#[macro_export]
macro_rules! test_spawn_process_provided_policies {
($body: block) => {
$crate::test_spawn_process_with_policies!(
$body,
$crate::policy::secret_policy_try_use_memfd_secrets,
$crate::secret_policy_use_only_malloc_secrets
);
#[cfg(target_os = "linux")]
{
$crate::test_spawn_process_with_policies!(
$body,
$crate::policy::secret_policy_use_only_memfd_secrets
);
}
};
}
}
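In normal use a binary picks one of these policies exactly once before any secret is created; rp/src/main.rs above does this behind the `enable_memfd_alloc` feature, and the process-spawning test macros exist so every test process can do the same. A minimal sketch of the application-side pattern:

use rosenpass_secret_memory::policy::secret_policy_try_use_memfd_secrets;
use rosenpass_secret_memory::Secret;

fn main() {
    // Prefer memfd_secret(2) where available, otherwise fall back to malloc.
    secret_policy_try_use_memfd_secrets();

    // Any secret allocated after this point uses the selected backend.
    let session_key: Secret<32> = Secret::random();
    let _ = session_key; // ... hand it to the protocol code
}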


@@ -1,10 +1,16 @@
use crate::debug::debug_crypto_array;
use anyhow::Context;
use rand::{Fill as Randomize, Rng};
use rosenpass_to::{ops::copy_slice, To};
use rosenpass_util::file::{fopen_r, LoadValue, ReadExactToEnd, StoreValue};
use rosenpass_util::b64::{b64_decode, b64_encode};
use rosenpass_util::file::{
fopen_r, fopen_w, LoadValue, LoadValueB64, ReadExactToEnd, ReadSliceToEnd, StoreValue,
StoreValueB64, StoreValueB64Writer, Visibility,
};
use rosenpass_util::functional::mutating;
use std::borrow::{Borrow, BorrowMut};
use std::fmt;
use std::io::Write;
use std::ops::{Deref, DerefMut};
use std::path::Path;
@@ -110,3 +116,164 @@ impl<const N: usize> StoreValue for Public<N> {
Ok(())
}
}
impl<const N: usize> LoadValueB64 for Public<N> {
type Error = anyhow::Error;
fn load_b64<const F: usize, P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
where
Self: Sized,
{
let mut f = [0u8; F];
let mut v = Public::zero();
let p = path.as_ref();
let len = fopen_r(p)?
.read_slice_to_end(&mut f)
.with_context(|| format!("Could not load file {p:?}"))?;
b64_decode(&f[0..len], &mut v.value)
.with_context(|| format!("Could not decode base64 file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> StoreValueB64 for Public<N> {
type Error = anyhow::Error;
fn store_b64<const F: usize, P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
let p = path.as_ref();
let mut f = [0u8; F];
let encoded_str = b64_encode(&self.value, &mut f)
.with_context(|| format!("Could not encode base64 file {p:?}"))?;
fopen_w(p, Visibility::Public)?
.write_all(encoded_str.as_bytes())
.with_context(|| format!("Could not write file {p:?}"))?;
Ok(())
}
}
impl<const N: usize> StoreValueB64Writer for Public<N> {
type Error = anyhow::Error;
fn store_b64_writer<const F: usize, W: std::io::Write>(
&self,
mut writer: W,
) -> Result<(), Self::Error> {
let mut f = [0u8; F];
let encoded_str =
b64_encode(&self.value, &mut f).with_context(|| "Could not encode secret to base64")?;
writer
.write_all(encoded_str.as_bytes())
.with_context(|| "Could not write base64 to writer")?;
Ok(())
}
}
#[cfg(test)]
mod tests {
#[cfg(test)]
mod tests {
use crate::Public;
use rosenpass_util::{
b64::b64_encode,
file::{
fopen_w, LoadValue, LoadValueB64, StoreValue, StoreValueB64, StoreValueB64Writer,
Visibility,
},
};
use std::{fs, os::unix::fs::PermissionsExt};
use tempfile::tempdir;
/// test loading a public from an example file, and then storing it again in a different file
#[test]
fn test_public_load_store() {
const N: usize = 100;
// Generate original random bytes
let original_bytes: [u8; N] = [rand::random(); N];
// Create a temporary directory
let temp_dir = tempdir().unwrap();
// Store the original public to an example file in the temporary directory
let example_file = temp_dir.path().join("example_file");
std::fs::write(example_file.clone(), &original_bytes).unwrap();
// Load the public from the example file
let loaded_public = Public::load(&example_file).unwrap();
// Check that the loaded public matches the original bytes
assert_eq!(&loaded_public.value, &original_bytes);
// Store the loaded public to a different file in the temporary directory
let new_file = temp_dir.path().join("new_file");
loaded_public.store(&new_file).unwrap();
// Read the contents of the new file
let new_file_contents = fs::read(&new_file).unwrap();
// Read the contents of the original file
let original_file_contents = fs::read(&example_file).unwrap();
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
}
/// test loading a base64 encoded public from an example file, and then storing it again in a different file
#[test]
fn test_public_load_store_base64() {
const N: usize = 100;
// Generate original random bytes
let original_bytes: [u8; N] = [rand::random(); N];
// Create a temporary directory
let temp_dir = tempdir().unwrap();
let example_file = temp_dir.path().join("example_file");
let mut encoded_public = [0u8; N * 2];
let encoded_public = b64_encode(&original_bytes, &mut encoded_public).unwrap();
std::fs::write(&example_file, encoded_public).unwrap();
// Load the public from the example file
let loaded_public = Public::load_b64::<{ N * 2 }, _>(&example_file).unwrap();
// Check that the loaded public matches the original bytes
assert_eq!(&loaded_public.value, &original_bytes);
// Store the loaded public to a different file in the temporary directory
let new_file = temp_dir.path().join("new_file");
loaded_public.store_b64::<{ N * 2 }, _>(&new_file).unwrap();
// Read the contents of the new file
let new_file_contents = fs::read(&new_file).unwrap();
// Read the contents of the original file
let original_file_contents = fs::read(&example_file).unwrap();
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
// Check that the new file's permissions are public
let metadata = fs::metadata(&new_file).unwrap();
assert_eq!(metadata.permissions().mode() & 0o000777, 0o644);
// Store the loaded public to a different file in the temporary directory for a second time
let new_file = temp_dir.path().join("new_file_writer");
let new_file_writer = fopen_w(new_file.clone(), Visibility::Public).unwrap();
loaded_public
.store_b64_writer::<{ N * 2 }, _>(&new_file_writer)
.unwrap();
// Read the contents of the new file
let new_file_contents = fs::read(&new_file).unwrap();
// Read the contents of the original file
let original_file_contents = fs::read(&example_file).unwrap();
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
// Check that the new file's permissions are public
let metadata = fs::metadata(&new_file).unwrap();
assert_eq!(metadata.permissions().mode() & 0o000777, 0o644);
}
}
}

View File

@@ -9,13 +9,18 @@ use anyhow::Context;
use rand::{Fill as Randomize, Rng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use rosenpass_util::b64::b64_reader;
use rosenpass_util::file::{fopen_r, LoadValue, LoadValueB64, ReadExactToEnd};
use rosenpass_util::b64::{b64_decode, b64_encode};
use rosenpass_util::file::{
fopen_r, LoadValue, LoadValueB64, ReadExactToEnd, ReadSliceToEnd, StoreValueB64,
StoreValueB64Writer,
};
use rosenpass_util::functional::mutating;
use crate::alloc::{secret_box, SecretBox, SecretVec};
use crate::file::StoreSecret;
use rosenpass_util::file::{fopen_w, Visibility};
use std::io::Write;
// This might become a problem in library usage; it's effectively a memory
// leak which probably isn't a problem right now because most memory will
// be reused…
@@ -249,73 +254,214 @@ impl<const N: usize> LoadValue for Secret<N> {
impl<const N: usize> LoadValueB64 for Secret<N> {
type Error = anyhow::Error;
fn load_b64<P: AsRef<Path>>(path: P) -> anyhow::Result<Self> {
use std::io::Read;
fn load_b64<const F: usize, P: AsRef<Path>>(path: P) -> anyhow::Result<Self> {
let mut f: Secret<F> = Secret::random();
let mut v = Self::random();
let p = path.as_ref();
// This might leave some fragments of the secret on the stack;
// in practice this is likely not a problem because the stack likely
// will be overwritten by something else soon but this is not exactly
// guaranteed. It would be possible to remedy this, but since the secret
// data will linger in the Linux page cache anyways with the current
// implementation, going to great length to erase the secret here is
// not worth it right now.
b64_reader(&mut fopen_r(p)?)
.read_exact(v.secret_mut())
.with_context(|| format!("Could not load base64 file {p:?}"))?;
let len = fopen_r(p)?
.read_slice_to_end(f.secret_mut())
.with_context(|| format!("Could not load file {p:?}"))?;
b64_decode(&f.secret()[0..len], v.secret_mut())
.with_context(|| format!("Could not decode base64 file {p:?}"))?;
Ok(v)
}
}
impl<const N: usize> StoreValueB64 for Secret<N> {
type Error = anyhow::Error;
fn store_b64<const F: usize, P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
let p = path.as_ref();
let mut f: Secret<F> = Secret::random();
let encoded_str = b64_encode(self.secret(), f.secret_mut())
.with_context(|| format!("Could not encode base64 file {p:?}"))?;
fopen_w(p, Visibility::Secret)?
.write_all(encoded_str.as_bytes())
.with_context(|| format!("Could not write file {p:?}"))?;
f.zeroize();
Ok(())
}
}
impl<const N: usize> StoreValueB64Writer for Secret<N> {
type Error = anyhow::Error;
fn store_b64_writer<const F: usize, W: Write>(&self, mut writer: W) -> anyhow::Result<()> {
let mut f: Secret<F> = Secret::random();
let encoded_str = b64_encode(self.secret(), f.secret_mut())
.with_context(|| "Could not encode secret to base64")?;
writer
.write_all(encoded_str.as_bytes())
.with_context(|| "Could not write base64 to writer")?;
f.zeroize();
Ok(())
}
}
impl<const N: usize> StoreSecret for Secret<N> {
type Error = anyhow::Error;
fn store_secret<P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
std::fs::write(path, self.secret())?;
fopen_w(path, Visibility::Secret)?.write_all(self.secret())?;
Ok(())
}
fn store<P: AsRef<Path>>(&self, path: P) -> anyhow::Result<()> {
fopen_w(path, Visibility::Public)?.write_all(self.secret())?;
Ok(())
}
}
#[cfg(test)]
mod test {
use crate::test_spawn_process_provided_policies;
use super::*;
use std::{fs, os::unix::fs::PermissionsExt};
use tempfile::tempdir;
procspawn::enable_test_support!();
/// check that we can alloc using the magic pool
#[test]
fn secret_memory_pool_take() {
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: ZeroizingSecretBox<[u8; N]> = pool.take();
assert_eq!(secret.as_ref(), &[0; N]);
test_spawn_process_provided_policies!({
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: ZeroizingSecretBox<[u8; N]> = pool.take();
assert_eq!(secret.as_ref(), &[0; N]);
});
}
/// check that a secrete lives, even if its [SecretMemoryPool] is deleted
/// check that a secret lives, even if its [SecretMemoryPool] is deleted
#[test]
fn secret_memory_pool_drop() {
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: ZeroizingSecretBox<[u8; N]> = pool.take();
std::mem::drop(pool);
assert_eq!(secret.as_ref(), &[0; N]);
test_spawn_process_provided_policies!({
const N: usize = 0x100;
let mut pool = SecretMemoryPool::new();
let secret: ZeroizingSecretBox<[u8; N]> = pool.take();
std::mem::drop(pool);
assert_eq!(secret.as_ref(), &[0; N]);
});
}
/// check that a secrete can be reborn, freshly initialized with zero
/// check that a secret can be reborn, freshly initialized with zero
#[test]
fn secret_memory_pool_release() {
const N: usize = 1;
let mut pool = SecretMemoryPool::new();
let mut secret: ZeroizingSecretBox<[u8; N]> = pool.take();
let old_secret_ptr = secret.as_ref().as_ptr();
test_spawn_process_provided_policies!({
const N: usize = 1;
let mut pool = SecretMemoryPool::new();
let mut secret: ZeroizingSecretBox<[u8; N]> = pool.take();
let old_secret_ptr = secret.as_ref().as_ptr();
secret.as_mut()[0] = 0x13;
pool.release(secret);
secret.as_mut()[0] = 0x13;
pool.release(secret);
// now check that we get the same ptr
let new_secret: ZeroizingSecretBox<[u8; N]> = pool.take();
assert_eq!(old_secret_ptr, new_secret.as_ref().as_ptr());
// now check that we get the same ptr
let new_secret: ZeroizingSecretBox<[u8; N]> = pool.take();
assert_eq!(old_secret_ptr, new_secret.as_ref().as_ptr());
// and that the secret was zeroized
assert_eq!(new_secret.as_ref(), &[0; N]);
// and that the secret was zeroized
assert_eq!(new_secret.as_ref(), &[0; N]);
});
}
/// test loading a secret from an example file, and then storing it again in a different file
#[test]
fn test_secret_load_store() {
test_spawn_process_provided_policies!({
const N: usize = 100;
// Generate original random bytes
let original_bytes: [u8; N] = [rand::random(); N];
// Create a temporary directory
let temp_dir = tempdir().unwrap();
// Store the original secret to an example file in the temporary directory
let example_file = temp_dir.path().join("example_file");
std::fs::write(example_file.clone(), &original_bytes).unwrap();
// Load the secret from the example file
let loaded_secret = Secret::load(&example_file).unwrap();
// Check that the loaded secret matches the original bytes
assert_eq!(loaded_secret.secret(), &original_bytes);
// Store the loaded secret to a different file in the temporary directory
let new_file = temp_dir.path().join("new_file");
loaded_secret.store(&new_file).unwrap();
// Read the contents of the new file
let new_file_contents = fs::read(&new_file).unwrap();
// Read the contents of the original file
let original_file_contents = fs::read(&example_file).unwrap();
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
});
}
/// test loading a base64 encoded secret from an example file, and then storing it again in a different file
#[test]
fn test_secret_load_store_base64() {
test_spawn_process_provided_policies!({
const N: usize = 100;
// Generate original random bytes
let original_bytes: [u8; N] = [rand::random(); N];
// Create a temporary directory
let temp_dir = tempdir().unwrap();
let example_file = temp_dir.path().join("example_file");
let mut encoded_secret = [0u8; N * 2];
let encoded_secret = b64_encode(&original_bytes, &mut encoded_secret).unwrap();
std::fs::write(&example_file, encoded_secret).unwrap();
// Load the secret from the example file
let loaded_secret = Secret::load_b64::<{ N * 2 }, _>(&example_file).unwrap();
// Check that the loaded secret matches the original bytes
assert_eq!(loaded_secret.secret(), &original_bytes);
// Store the loaded secret to a different file in the temporary directory
let new_file = temp_dir.path().join("new_file");
loaded_secret.store_b64::<{ N * 2 }, _>(&new_file).unwrap();
// Read the contents of the new file
let new_file_contents = fs::read(&new_file).unwrap();
// Read the contents of the original file
let original_file_contents = fs::read(&example_file).unwrap();
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
// Check that the new file's permissions are secret
let metadata = fs::metadata(&new_file).unwrap();
assert_eq!(metadata.permissions().mode() & 0o000777, 0o600);
// Store the loaded secret to a different file in the temporary directory for a second time
let new_file = temp_dir.path().join("new_file_writer");
let new_file_writer = fopen_w(new_file.clone(), Visibility::Secret).unwrap();
loaded_secret
.store_b64_writer::<{ N * 2 }, _>(&new_file_writer)
.unwrap();
// Read the contents of the new file
let new_file_contents = fs::read(&new_file).unwrap();
// Read the contents of the original file
let original_file_contents = fs::read(&example_file).unwrap();
// Check that the contents of the new file match the original file
assert_eq!(new_file_contents, original_file_contents);
// Check that the new file's permissions are secret
let metadata = fs::metadata(&new_file).unwrap();
assert_eq!(metadata.permissions().mode() & 0o000777, 0o600);
});
}
}
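
The const generic F in load_b64/store_b64 above is the size of the intermediate base64 buffer; it must hold the full base64 form of the N-byte value (4 * ceil(N / 3) bytes; the tests simply use N * 2). A minimal hedged sketch for a 32-byte secret, assuming a hypothetical path and assuming one of the rosenpass_secret_memory::policy functions has already been called to select an allocator:

use rosenpass_secret_memory::Secret;
use rosenpass_util::file::{LoadValueB64, StoreValueB64};

// Hedged sketch: the path is hypothetical, and a secret-memory policy is assumed
// to have been selected before any Secret is allocated. F = 64 is enough for the
// 44-byte base64 form of a 32-byte secret (4 * ceil(32 / 3) = 44).
fn roundtrip_key() -> anyhow::Result<()> {
    let key: Secret<32> = Secret::random();
    key.store_b64::<64, _>("wg.psk.b64")?;
    let _again = Secret::<32>::load_b64::<64, _>("wg.psk.b64")?;
    Ok(())
}

The same buffer sizing applies to the Public::load_b64/store_b64 implementations earlier in this diff.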

View File

@@ -12,7 +12,9 @@ readme = "readme.md"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
base64 = { workspace = true }
base64ct = { workspace = true }
anyhow = { workspace = true }
typenum = { workspace = true }
static_assertions = { workspace = true }
rustix = {workspace = true}
zeroize = {workspace = true}

View File

@@ -1,20 +1,130 @@
use base64::{
display::Base64Display as B64Display, read::DecoderReader as B64Reader,
write::EncoderWriter as B64Writer,
};
use std::io::{Read, Write};
use base64ct::{Base64, Decoder as B64Reader, Encoder as B64Writer};
use zeroize::Zeroize;
use base64::engine::general_purpose::GeneralPurpose as Base64Engine;
const B64ENGINE: Base64Engine = base64::engine::general_purpose::STANDARD;
use std::fmt::Display;
pub fn fmt_b64<'a>(payload: &'a [u8]) -> B64Display<'a, 'static, Base64Engine> {
B64Display::<'a, 'static>::new(payload, &B64ENGINE)
pub struct B64DisplayHelper<'a, const F: usize>(&'a [u8]);
impl<const F: usize> Display for B64DisplayHelper<'_, F> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut bytes = [0u8; F];
let string = b64_encode(self.0, &mut bytes).map_err(|_| std::fmt::Error)?;
let result = f.write_str(string);
bytes.zeroize();
result
}
}
pub fn b64_writer<W: Write>(w: W) -> B64Writer<'static, Base64Engine, W> {
B64Writer::new(w, &B64ENGINE)
pub trait B64Display {
fn fmt_b64<const F: usize>(&self) -> B64DisplayHelper<F>;
}
pub fn b64_reader<R: Read>(r: R) -> B64Reader<'static, Base64Engine, R> {
B64Reader::new(r, &B64ENGINE)
impl B64Display for [u8] {
fn fmt_b64<const F: usize>(&self) -> B64DisplayHelper<F> {
B64DisplayHelper(self)
}
}
impl<T: AsRef<[u8]>> B64Display for T {
fn fmt_b64<const F: usize>(&self) -> B64DisplayHelper<F> {
B64DisplayHelper(self.as_ref())
}
}
pub fn b64_decode(input: &[u8], output: &mut [u8]) -> anyhow::Result<()> {
let mut reader = B64Reader::<Base64>::new(input).map_err(|e| anyhow::anyhow!(e))?;
match reader.decode(output) {
Ok(_) => (),
Err(base64ct::Error::InvalidLength) => (),
Err(e) => {
return Err(anyhow::anyhow!(e));
}
}
if reader.is_finished() {
Ok(())
} else {
Err(anyhow::anyhow!(
"Input not decoded completely (buffer size too small?)"
))
}
}
pub fn b64_encode<'o>(input: &[u8], output: &'o mut [u8]) -> anyhow::Result<&'o str> {
let mut writer = B64Writer::<Base64>::new(output).map_err(|e| anyhow::anyhow!(e))?;
writer.encode(input).map_err(|e| anyhow::anyhow!(e))?;
writer.finish().map_err(|e| anyhow::anyhow!(e))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_b64_encode() {
let input = b"Hello, World!";
let mut output = [0u8; 20];
let result = b64_encode(input, &mut output);
assert!(result.is_ok());
assert_eq!(result.unwrap(), "SGVsbG8sIFdvcmxkIQ==");
}
#[test]
fn test_b64_encode_small_buffer() {
let input = b"Hello, World!";
let mut output = [0u8; 10]; // Small output buffer
let result = b64_encode(input, &mut output);
assert!(result.is_err());
assert_eq!(result.unwrap_err().to_string(), "invalid Base64 length");
}
#[test]
fn test_b64_encode_empty_buffer() {
let input = b"";
let mut output = [0u8; 16];
let result = b64_encode(input, &mut output);
assert!(result.is_ok());
assert_eq!(result.unwrap(), "");
}
#[test]
fn test_b64_decode() {
let input = b"SGVsbG8sIFdvcmxkIQ==";
let mut output = [0u8; 1000];
b64_decode(input, &mut output).unwrap();
assert_eq!(&output[..13], b"Hello, World!");
}
#[test]
fn test_b64_decode_small_buffer() {
let input = b"SGVsbG8sIFdvcmxkIQ==";
let mut output = [0u8; 10]; // Small output buffer
let result = b64_decode(input, &mut output);
assert!(result.is_err());
assert_eq!(
result.unwrap_err().to_string(),
"Input not decoded completely (buffer size too small?)"
);
}
#[test]
fn test_b64_decode_empty_buffer() {
let input = b"";
let mut output = [0u8; 16];
let result = b64_decode(input, &mut output);
assert!(result.is_err());
}
#[test]
fn test_fmt_b64() {
let input = b"Hello, World!";
let result = input.fmt_b64::<20>().to_string();
assert_eq!(result, "SGVsbG8sIFdvcmxkIQ==");
}
#[test]
fn test_fmt_b64_empty_input() {
let input = b"";
let result = input.fmt_b64::<16>().to_string();
assert_eq!(result, "");
}
}

12
util/src/fd.rs Normal file
View File

@@ -0,0 +1,12 @@
use std::os::fd::{OwnedFd, RawFd};
/// Clone some file descriptor
///
/// If the file descriptor is invalid, an error will be raised.
pub fn claim_fd(fd: RawFd) -> anyhow::Result<OwnedFd> {
use rustix::{fd::BorrowedFd, io::dup};
// This is safe since [dup] will simply raise an error if the file descriptor is invalid
let fd = unsafe { dup(BorrowedFd::borrow_raw(fd))? };
Ok(fd)
}
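
A hedged sketch of using claim_fd to adopt a descriptor inherited from a parent process; the fd number 3 and the listener setup are assumptions for illustration, not something this diff prescribes:

use rosenpass_util::fd::claim_fd;
use std::os::unix::net::UnixListener;

// Sketch only: adopt inherited fd 3 as a non-blocking unix listener.
fn adopt_inherited_listener() -> anyhow::Result<UnixListener> {
    let owned = claim_fd(3)?; // duplicates the fd; fails if 3 is not a valid descriptor
    let listener = UnixListener::from(owned);
    listener.set_nonblocking(true)?;
    Ok(listener)
}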

View File

@@ -1,17 +1,24 @@
use anyhow::ensure;
use std::fs::File;
use std::io::Read;
use std::os::unix::fs::OpenOptionsExt;
use std::result::Result;
use std::{fs::OpenOptions, path::Path};
pub enum Visibility {
Public,
Secret,
}
/// Open a file writable
pub fn fopen_w<P: AsRef<Path>>(path: P) -> std::io::Result<File> {
OpenOptions::new()
.read(false)
.write(true)
.create(true)
.truncate(true)
.open(path)
pub fn fopen_w<P: AsRef<Path>>(path: P, visibility: Visibility) -> std::io::Result<File> {
let mut options = OpenOptions::new();
options.create(true).write(true).read(false).truncate(true);
match visibility {
Visibility::Public => options.mode(0o644),
Visibility::Secret => options.mode(0o600),
};
options.open(path)
}
/// Open a file readable
pub fn fopen_r<P: AsRef<Path>>(path: P) -> std::io::Result<File> {
@@ -23,6 +30,30 @@ pub fn fopen_r<P: AsRef<Path>>(path: P) -> std::io::Result<File> {
.open(path)
}
pub trait ReadSliceToEnd {
type Error;
fn read_slice_to_end(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error>;
}
impl<R: Read> ReadSliceToEnd for R {
type Error = anyhow::Error;
fn read_slice_to_end(&mut self, buf: &mut [u8]) -> anyhow::Result<usize> {
let mut dummy = [0u8; 8];
let mut read = 0;
while read < buf.len() {
let bytes_read = self.read(&mut buf[read..])?;
if bytes_read == 0 {
break;
}
read += bytes_read;
}
ensure!(self.read(&mut dummy)? == 0, "File too long!");
Ok(read)
}
}
pub trait ReadExactToEnd {
type Error;
@@ -51,13 +82,36 @@ pub trait LoadValue {
pub trait LoadValueB64 {
type Error;
fn load_b64<P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
fn load_b64<const F: usize, P: AsRef<Path>>(path: P) -> Result<Self, Self::Error>
where
Self: Sized;
}
pub trait StoreValueB64 {
type Error;
fn store_b64<const F: usize, P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>
where
Self: Sized;
}
pub trait StoreValueB64Writer {
type Error;
fn store_b64_writer<const F: usize, W: std::io::Write>(
&self,
writer: W,
) -> Result<(), Self::Error>;
}
pub trait StoreValue {
type Error;
fn store<P: AsRef<Path>>(&self, path: P) -> Result<(), Self::Error>;
}
pub trait DisplayValueB64 {
type Error;
fn display_b64<'o>(&self, output: &'o mut [u8]) -> Result<&'o str, Self::Error>;
}
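
A brief hedged sketch (the paths are made up) of how ReadSliceToEnd and the new Visibility parameter of fopen_w compose: read a file of bounded size into a stack buffer, then write it back with 0600 permissions:

use rosenpass_util::file::{fopen_r, fopen_w, ReadSliceToEnd, Visibility};
use std::io::Write;

// Sketch only: read_slice_to_end fails with "File too long!" if the input
// exceeds the 64-byte buffer; fopen_w with Visibility::Secret creates the
// output with mode 0600.
fn copy_bounded_secret() -> anyhow::Result<()> {
    let mut buf = [0u8; 64];
    let len = fopen_r("in.bin")?.read_slice_to_end(&mut buf)?;
    fopen_w("out.bin", Visibility::Secret)?.write_all(&buf[..len])?;
    Ok(())
}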

View File

@@ -1,6 +1,7 @@
#![recursion_limit = "256"]
pub mod b64;
pub mod fd;
pub mod file;
pub mod functional;
pub mod mem;

View File

@@ -148,6 +148,7 @@ where
const VALUE: Ret = <ConstApplyNegSign<Ret, Unsigned> as IntoConst<Ret>>::VALUE;
}
#[allow(clippy::identity_op)]
mod test {
use static_assertions::const_assert_eq;
use typenum::consts::*;

View File

@@ -0,0 +1,53 @@
[package]
name = "rosenpass-wireguard-broker"
authors = ["Karolin Varner <karo@cupdev.net>", "wucke13 <wucke13@gmail.com>"]
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Rosenpass internal broker that runs as root and supplies exchanged keys to the kernel."
homepage = "https://rosenpass.eu/"
repository = "https://github.com/rosenpass/rosenpass"
readme = "readme.md"
[dependencies]
thiserror = { workspace = true }
zerocopy = { workspace = true }
rosenpass-secret-memory = {workspace = true}
# Privileged only
wireguard-uapi = { workspace = true }
# Socket handler only
rosenpass-to = { workspace = true }
tokio = { version = "1.38.0", features = ["sync", "full", "mio"] }
anyhow = { workspace = true }
clap = { workspace = true }
env_logger = { workspace = true }
log = { workspace = true }
derive_builder = {workspace = true}
postcard = {workspace = true}
# Mio broker client
mio = { workspace = true }
rosenpass-util = { workspace = true }
[dev-dependencies]
rand = {workspace = true}
procspawn = {workspace = true}
[features]
enable_broker_api=[]
[[bin]]
name = "rosenpass-wireguard-broker-privileged"
path = "src/bin/priviledged.rs"
test = false
doc = false
required-features=["enable_broker_api"]
[[bin]]
name = "rosenpass-wireguard-broker-socket-handler"
test = false
path = "src/bin/socket_handler.rs"
doc = false
required-features=["enable_broker_api"]

View File

@@ -0,0 +1,5 @@
# Rosenpass internal broker supplying WireGuard with keys.
This crate contains a small application purpose-built to supply WireGuard in the Linux kernel with pre-shared keys.
This is an internal library; no guarantee is made about its API at this point in time.
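
As the binaries in this diff show, requests and responses are exchanged as length-prefixed frames: a u64 little-endian length followed by the message body, bounded by msgs::REQUEST_MSG_BUFFER_SIZE and msgs::RESPONSE_MSG_BUFFER_SIZE. A minimal sketch of the framing (the helper is illustrative, not part of the crate):

use std::io::Write;

// Illustrative helper: write one frame in the format the broker binaries use on
// their pipes and sockets, i.e. a u64 little-endian length prefix followed by
// the message body.
fn write_frame<W: Write>(mut w: W, msg: &[u8]) -> std::io::Result<()> {
    w.write_all(&(msg.len() as u64).to_le_bytes())?;
    w.write_all(msg)?;
    w.flush()
}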

View File

@@ -0,0 +1,149 @@
use std::{borrow::BorrowMut, fmt::Debug};
use crate::{
api::{
config::NetworkBrokerConfig,
msgs::{self, REQUEST_MSG_BUFFER_SIZE},
},
SerializedBrokerConfig, WireGuardBroker,
};
use super::{
config::NetworkBrokerConfigErr,
msgs::{Envelope, SetPskResponse},
};
#[derive(thiserror::Error, Debug, Clone, Eq, PartialEq)]
pub enum BrokerClientPollResponseError<RecvError> {
#[error(transparent)]
IoError(RecvError),
#[error("Invalid message.")]
InvalidMessage,
}
impl<RecvError> From<msgs::InvalidMessageTypeError> for BrokerClientPollResponseError<RecvError> {
fn from(value: msgs::InvalidMessageTypeError) -> Self {
let msgs::InvalidMessageTypeError = value; // Assert that this is a unit type
BrokerClientPollResponseError::<RecvError>::InvalidMessage
}
}
fn io_poller<RecvError>(e: RecvError) -> BrokerClientPollResponseError<RecvError> {
BrokerClientPollResponseError::<RecvError>::IoError(e)
}
fn invalid_msg_poller<RecvError>() -> BrokerClientPollResponseError<RecvError> {
BrokerClientPollResponseError::<RecvError>::InvalidMessage
}
#[derive(thiserror::Error, Debug, Clone, Eq, PartialEq)]
pub enum BrokerClientSetPskError<SendError> {
#[error("Error with encoding/decoding message")]
MsgError,
#[error("Network Broker Config error: {0}")]
BrokerError(NetworkBrokerConfigErr),
#[error(transparent)]
IoError(SendError),
#[error("Interface name out of bounds")]
IfaceOutOfBounds,
}
pub trait BrokerClientIo {
type SendError;
type RecvError;
fn send_msg(&mut self, buf: &[u8]) -> Result<(), Self::SendError>;
fn recv_msg(&mut self) -> Result<Option<&[u8]>, Self::RecvError>;
}
#[derive(Debug)]
pub struct BrokerClient<Io>
where
Io: BrokerClientIo + Debug,
{
io: Io,
}
impl<Io> BrokerClient<Io>
where
Io: BrokerClientIo + Debug,
{
pub fn new(io: Io) -> Self {
Self { io }
}
pub fn io(&self) -> &Io {
&self.io
}
pub fn io_mut(&mut self) -> &mut Io {
&mut self.io
}
pub fn poll_response(
&mut self,
) -> Result<Option<msgs::SetPskResult>, BrokerClientPollResponseError<Io::RecvError>> {
let res: &[u8] = match self.io.borrow_mut().recv_msg().map_err(io_poller)? {
Some(r) => r,
None => return Ok(None),
};
let typ = res.get(0).ok_or(invalid_msg_poller())?;
let typ = msgs::MsgType::try_from(*typ)?;
let msgs::MsgType::SetPsk = typ; // Assert type
let res = zerocopy::Ref::<&[u8], Envelope<SetPskResponse>>::new(res)
.ok_or(invalid_msg_poller())?;
let res: &msgs::SetPskResponse = &res.payload;
let res: msgs::SetPskResponseReturnCode = res
.return_code
.try_into()
.map_err(|_| invalid_msg_poller())?;
let res: msgs::SetPskResult = res.into();
Ok(Some(res))
}
}
impl<Io> WireGuardBroker for BrokerClient<Io>
where
Io: BrokerClientIo + Debug,
{
type Error = BrokerClientSetPskError<Io::SendError>;
fn set_psk(&mut self, config: SerializedBrokerConfig) -> Result<(), Self::Error> {
let config: Result<NetworkBrokerConfig, NetworkBrokerConfigErr> = config.try_into();
let config = config.map_err(|e| BrokerClientSetPskError::BrokerError(e))?;
use BrokerClientSetPskError::*;
const BUF_SIZE: usize = REQUEST_MSG_BUFFER_SIZE;
// Allocate message
let mut req = [0u8; BUF_SIZE];
// Construct message view
let mut req = zerocopy::Ref::<&mut [u8], Envelope<msgs::SetPskRequest>>::new(&mut req)
.ok_or(MsgError)?;
// Populate envelope
req.msg_type = msgs::MsgType::SetPsk as u8;
{
// Derived payload
let req = &mut req.payload;
// Populate payload
req.peer_id.copy_from_slice(&config.peer_id.value);
req.psk.copy_from_slice(config.psk.secret());
req.set_iface(config.iface.as_ref())
.ok_or(IfaceOutOfBounds)?;
}
// Send message
self.io
.borrow_mut()
.send_msg(req.bytes())
.map_err(IoError)?;
Ok(())
}
}

View File

@@ -0,0 +1,43 @@
use crate::{SerializedBrokerConfig, WG_KEY_LEN, WG_PEER_LEN};
use derive_builder::Builder;
use rosenpass_secret_memory::{Public, Secret};
#[derive(Builder)]
#[builder(pattern = "mutable")]
//TODO: Use generics for iface, add additional params
pub struct NetworkBrokerConfig<'a> {
pub iface: &'a str,
pub peer_id: &'a Public<WG_PEER_LEN>,
pub psk: &'a Secret<WG_KEY_LEN>,
}
impl<'a> Into<SerializedBrokerConfig<'a>> for NetworkBrokerConfig<'a> {
fn into(self) -> SerializedBrokerConfig<'a> {
SerializedBrokerConfig {
interface: self.iface.as_bytes(),
peer_id: self.peer_id,
psk: self.psk,
additional_params: &[],
}
}
}
#[derive(thiserror::Error, Debug, Clone, Eq, PartialEq)]
pub enum NetworkBrokerConfigErr {
#[error("Interface")]
Interface,
}
impl<'a> TryFrom<SerializedBrokerConfig<'a>> for NetworkBrokerConfig<'a> {
type Error = NetworkBrokerConfigErr;
fn try_from(value: SerializedBrokerConfig<'a>) -> Result<Self, Self::Error> {
let iface =
std::str::from_utf8(value.interface).map_err(|_| NetworkBrokerConfigErr::Interface)?;
Ok(Self {
iface,
peer_id: value.peer_id,
psk: value.psk,
})
}
}

View File

@@ -0,0 +1,4 @@
pub mod client;
pub mod config;
pub mod msgs;
pub mod server;

View File

@@ -0,0 +1,145 @@
use std::result::Result;
use std::str::{from_utf8, Utf8Error};
use zerocopy::{AsBytes, FromBytes, FromZeroes};
pub const ENVELOPE_OVERHEAD: usize = 1 + 3;
pub const REQUEST_MSG_BUFFER_SIZE: usize = ENVELOPE_OVERHEAD + 32 + 32 + 1 + 255;
pub const RESPONSE_MSG_BUFFER_SIZE: usize = ENVELOPE_OVERHEAD + 1;
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
pub struct Envelope<M: AsBytes + FromBytes> {
/// [MsgType] of this message
pub msg_type: u8,
/// Reserved for future use
pub reserved: [u8; 3],
/// The actual payload
pub payload: M,
}
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
pub struct SetPskRequest {
pub peer_id: [u8; 32],
pub psk: [u8; 32],
pub iface_size: u8, // TODO: We should have variable length strings in lenses
pub iface_buf: [u8; 255],
}
impl SetPskRequest {
pub fn iface_bin(&self) -> &[u8] {
let len = self.iface_size as usize;
&self.iface_buf[..len]
}
pub fn iface(&self) -> Result<&str, Utf8Error> {
from_utf8(self.iface_bin())
}
pub fn set_iface_bin(&mut self, iface: &[u8]) -> Option<()> {
(iface.len() < 256).then_some(())?; // Assert iface.len() < 256
self.iface_size = iface.len() as u8;
self.iface_buf = [0; 255];
(&mut self.iface_buf[..iface.len()]).copy_from_slice(iface);
Some(())
}
pub fn set_iface(&mut self, iface: &str) -> Option<()> {
self.set_iface_bin(iface.as_bytes())
}
}
#[repr(packed)]
#[derive(AsBytes, FromBytes, FromZeroes)]
pub struct SetPskResponse {
pub return_code: u8,
}
#[derive(thiserror::Error, Debug, Clone, Eq, PartialEq)]
pub enum SetPskError {
#[error("The wireguard pre-shared-key assignment broker experienced an internal error.")]
InternalError,
#[error("The indicated wireguard interface does not exist")]
NoSuchInterface,
#[error("The indicated peer does not exist on the wireguard interface")]
NoSuchPeer,
}
pub type SetPskResult = Result<(), SetPskError>;
#[repr(u8)]
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum SetPskResponseReturnCode {
Success = 0x00,
InternalError = 0x01,
NoSuchInterface = 0x02,
NoSuchPeer = 0x03,
}
#[derive(Eq, PartialEq, Debug, Clone)]
pub struct InvalidSetPskResponseError;
impl TryFrom<u8> for SetPskResponseReturnCode {
type Error = InvalidSetPskResponseError;
fn try_from(value: u8) -> Result<Self, Self::Error> {
use SetPskResponseReturnCode::*;
match value {
0x00 => Ok(Success),
0x01 => Ok(InternalError),
0x02 => Ok(NoSuchInterface),
0x03 => Ok(NoSuchPeer),
_ => Err(InvalidSetPskResponseError),
}
}
}
impl From<SetPskResponseReturnCode> for SetPskResult {
fn from(value: SetPskResponseReturnCode) -> Self {
use SetPskError as E;
use SetPskResponseReturnCode as C;
match value {
C::Success => Ok(()),
C::InternalError => Err(E::InternalError),
C::NoSuchInterface => Err(E::NoSuchInterface),
C::NoSuchPeer => Err(E::NoSuchPeer),
}
}
}
impl From<SetPskResult> for SetPskResponseReturnCode {
fn from(value: SetPskResult) -> Self {
use SetPskError as E;
use SetPskResponseReturnCode as C;
match value {
Ok(()) => C::Success,
Err(E::InternalError) => C::InternalError,
Err(E::NoSuchInterface) => C::NoSuchInterface,
Err(E::NoSuchPeer) => C::NoSuchPeer,
}
}
}
#[repr(u8)]
#[derive(Hash, PartialEq, Eq, PartialOrd, Ord, Debug, Clone, Copy)]
pub enum MsgType {
SetPsk = 0x01,
}
#[derive(Eq, PartialEq, Debug, Clone)]
pub struct InvalidMessageTypeError;
impl TryFrom<u8> for MsgType {
type Error = InvalidMessageTypeError;
fn try_from(value: u8) -> Result<Self, Self::Error> {
match value {
0x01 => Ok(MsgType::SetPsk),
_ => Err(InvalidMessageTypeError),
}
}
}
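
The conversions above give a direct path from a raw wire byte to a SetPskResult; a small hedged sketch (the helper name is illustrative, not part of the crate):

use rosenpass_wireguard_broker::api::msgs::{SetPskResponseReturnCode, SetPskResult};

// Sketch only: decode a raw return-code byte received over the wire using the
// TryFrom<u8> and From<SetPskResponseReturnCode> conversions defined above.
fn decode_return_code(raw: u8) -> anyhow::Result<SetPskResult> {
    let code = SetPskResponseReturnCode::try_from(raw)
        .map_err(|_| anyhow::anyhow!("invalid return code {raw:#x}"))?;
    Ok(code.into())
}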

View File

@@ -0,0 +1,93 @@
use std::borrow::BorrowMut;
use std::result::Result;
use rosenpass_secret_memory::{Public, Secret};
use crate::api::msgs::{self, Envelope, SetPskRequest, SetPskResponse};
use crate::WireGuardBroker;
use super::config::{NetworkBrokerConfigBuilder, NetworkBrokerConfigErr};
#[derive(thiserror::Error, Debug, Clone, Eq, PartialEq)]
pub enum BrokerServerError {
#[error("No such request type: {}", .0)]
NoSuchRequestType(u8),
#[error("Invalid message received.")]
InvalidMessage,
#[error("Network Broker Config error: {0}")]
BrokerError(NetworkBrokerConfigErr),
}
impl From<msgs::InvalidMessageTypeError> for BrokerServerError {
fn from(value: msgs::InvalidMessageTypeError) -> Self {
let msgs::InvalidMessageTypeError = value; // Assert that this is a unit type
BrokerServerError::InvalidMessage
}
}
pub struct BrokerServer<Err, Inner>
where
Inner: WireGuardBroker<Error = Err>,
msgs::SetPskError: From<Err>,
{
inner: Inner,
}
impl<Err, Inner> BrokerServer<Err, Inner>
where
Inner: WireGuardBroker<Error = Err>,
msgs::SetPskError: From<Err>,
{
pub fn new(inner: Inner) -> Self {
Self { inner }
}
pub fn handle_message(
&mut self,
req: &[u8],
res: &mut [u8; msgs::RESPONSE_MSG_BUFFER_SIZE],
) -> Result<usize, BrokerServerError> {
use BrokerServerError::*;
let typ = req.get(0).ok_or(InvalidMessage)?;
let typ = msgs::MsgType::try_from(*typ)?;
let msgs::MsgType::SetPsk = typ; // Assert type
let req = zerocopy::Ref::<&[u8], Envelope<SetPskRequest>>::new(req)
.ok_or(BrokerServerError::InvalidMessage)?;
let mut res = zerocopy::Ref::<&mut [u8], Envelope<SetPskResponse>>::new(res)
.ok_or(BrokerServerError::InvalidMessage)?;
res.payload.return_code = msgs::MsgType::SetPsk as u8;
self.handle_set_psk(&req.payload, &mut res.payload)?;
Ok(res.bytes().len())
}
fn handle_set_psk(
&mut self,
req: &SetPskRequest,
res: &mut SetPskResponse,
) -> Result<(), BrokerServerError> {
// Using unwrap here since lenses can not return fixed-size arrays
// TODO: Slices should give access to fixed size arrays
let peer_id = Public::from_slice(&req.peer_id);
let psk = Secret::from_slice(&req.psk);
let interface = req
.iface()
.map_err(|_e| BrokerServerError::InvalidMessage)?;
let config = NetworkBrokerConfigBuilder::default()
.peer_id(&peer_id)
.psk(&psk)
.iface(interface)
.build()
.unwrap();
let r: Result<(), Err> = self.inner.borrow_mut().set_psk(config.into());
let r: msgs::SetPskResult = r.map_err(|e| e.into());
let r: msgs::SetPskResponseReturnCode = r.into();
res.return_code = r as u8;
Ok(())
}
}

View File

@@ -0,0 +1,56 @@
use std::io::{stdin, stdout, Read, Write};
use std::result::Result;
use rosenpass_wireguard_broker::api::msgs;
use rosenpass_wireguard_broker::api::server::BrokerServer;
use rosenpass_wireguard_broker::brokers::netlink as wg;
#[derive(thiserror::Error, Debug)]
pub enum BrokerAppError {
#[error(transparent)]
IoError(#[from] std::io::Error),
#[error(transparent)]
WgConnectError(#[from] wg::ConnectError),
#[error(transparent)]
WgSetPskError(#[from] wg::SetPskError),
#[error("Oversized message {}; something about the request is fatally wrong", .0)]
OversizedMessage(u64),
}
fn main() -> Result<(), BrokerAppError> {
let mut broker = BrokerServer::new(wg::NetlinkWireGuardBroker::new()?);
let mut stdin = stdin().lock();
let mut stdout = stdout().lock();
loop {
// Read the message length
let mut len = [0u8; 8];
stdin.read_exact(&mut len)?;
// Parse the message length
let len = u64::from_le_bytes(len);
if (len as usize) > msgs::REQUEST_MSG_BUFFER_SIZE {
return Err(BrokerAppError::OversizedMessage(len));
}
// Read the message itself
let mut req_buf = [0u8; msgs::REQUEST_MSG_BUFFER_SIZE];
let req_buf = &mut req_buf[..(len as usize)];
stdin.read_exact(req_buf)?;
// Process the message
let mut res_buf = [0u8; msgs::RESPONSE_MSG_BUFFER_SIZE];
let res = match broker.handle_message(req_buf, &mut res_buf) {
Ok(len) => &res_buf[..len],
Err(e) => {
eprintln!("Error processing message for wireguard PSK broker: {e:?}");
continue;
}
};
// Write the response
stdout.write_all(&(res.len() as u64).to_le_bytes())?;
stdout.write_all(&res)?;
stdout.flush()?;
}
}

View File

@@ -0,0 +1,191 @@
use std::process::Stdio;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::{UnixListener, UnixStream};
use tokio::process::Command;
use tokio::sync::{mpsc, oneshot};
use tokio::task;
use anyhow::{bail, ensure, Result};
use clap::{ArgGroup, Parser};
use rosenpass_util::fd::claim_fd;
use rosenpass_wireguard_broker::api::msgs;
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
#[clap(group(
ArgGroup::new("socket")
.required(true)
.args(&["listen_path", "listen_fd", "stream_fd"]),
))]
struct Args {
/// Where in the file-system to create the unix socket this broker will be listening for
/// connections on
#[arg(long)]
listen_path: Option<String>,
/// When this broker is called from another process, the other process can open and bind the
/// unix socket to use themselves, passing it to this process. In Rust this can be achieved
/// using the [command-fds](https://docs.rs/command-fds/latest/command_fds/) crate.
#[arg(long)]
listen_fd: Option<i32>,
/// When this broker is called from another process, the other process can connect the unix socket
/// themselves, for instance using the `socketpair(2)` system call.
#[arg(long)]
stream_fd: Option<i32>,
/// The underlying broker, accepting commands through stdin and sending results through stdout.
#[arg(
last = true,
allow_hyphen_values = true,
default_value = "rosenpass-wireguard-broker-privileged"
)]
command: Vec<String>,
}
struct BrokerRequest {
reply_to: oneshot::Sender<BrokerResponse>,
request: Vec<u8>,
}
struct BrokerResponse {
response: Vec<u8>,
}
#[tokio::main]
async fn main() -> Result<()> {
env_logger::init();
let args = Args::parse();
let (proc_tx, proc_rx) = mpsc::channel(100);
// Start the inner broker handler
task::spawn(async move {
if let Err(e) = direct_broker_process(proc_rx, args.command).await {
log::error!("Error in broker command handler: {e}");
panic!("Can not proceed without underlying broker process");
}
});
// Listen for incoming requests
if let Some(path) = args.listen_path {
let sock = UnixListener::bind(path)?;
listen_for_clients(proc_tx, sock).await
} else if let Some(fd) = args.listen_fd {
let sock = std::os::unix::net::UnixListener::from(claim_fd(fd)?);
sock.set_nonblocking(true)?;
listen_for_clients(proc_tx, UnixListener::from_std(sock)?).await
} else if let Some(fd) = args.stream_fd {
let stream = std::os::unix::net::UnixStream::from(claim_fd(fd)?);
stream.set_nonblocking(true)?;
on_accept(proc_tx, UnixStream::from_std(stream)?).await
} else {
unreachable!();
}
}
async fn direct_broker_process(
mut queue: mpsc::Receiver<BrokerRequest>,
cmd: Vec<String>,
) -> Result<()> {
let proc = Command::new(&cmd[0])
.args(&cmd[1..])
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.spawn()?;
let mut stdin = proc.stdin.unwrap();
let mut stdout = proc.stdout.unwrap();
loop {
let BrokerRequest { reply_to, request } = queue.recv().await.unwrap();
stdin
.write_all(&(request.len() as u64).to_le_bytes())
.await?;
stdin.write_all(&request[..]).await?;
// Read the response length
let mut len = [0u8; 8];
stdout.read_exact(&mut len).await?;
// Parse the response length
let len = u64::from_le_bytes(len) as usize;
ensure!(
len <= msgs::RESPONSE_MSG_BUFFER_SIZE,
"Oversized buffer ({len}) in broker stdout."
);
// Read the message itself
let mut res_buf = request; // Avoid allocating memory if we don't have to
res_buf.resize(len as usize, 0);
stdout.read_exact(&mut res_buf[..len]).await?;
// Return to the unix socket connection worker
reply_to
.send(BrokerResponse { response: res_buf })
.or_else(|_| bail!("Unable to send response to unix socket worker."))?;
}
}
async fn listen_for_clients(queue: mpsc::Sender<BrokerRequest>, sock: UnixListener) -> Result<()> {
loop {
let (stream, _addr) = sock.accept().await?;
let queue = queue.clone();
task::spawn(async move {
if let Err(e) = on_accept(queue, stream).await {
log::error!("Error during connection processing: {e}");
}
});
}
// NOTE: If this loop can ever terminate, we need to join the spawned tasks
}
async fn on_accept(queue: mpsc::Sender<BrokerRequest>, mut stream: UnixStream) -> Result<()> {
let mut req_buf = Vec::new();
loop {
stream.readable().await?;
// Read the message length
let mut len = [0u8; 8];
stream.read_exact(&mut len).await?;
// Parse the message length
let len = u64::from_le_bytes(len) as usize;
ensure!(
len <= msgs::REQUEST_MSG_BUFFER_SIZE,
"Oversized buffer ({len}) in unix socket input."
);
// Read the message itself
req_buf.resize(len as usize, 0);
stream.read_exact(&mut req_buf[..len]).await?;
// Handle the message
let (reply_tx, reply_rx) = oneshot::channel();
queue
.send(BrokerRequest {
reply_to: reply_tx,
request: req_buf,
})
.await?;
// Wait for the reply
let BrokerResponse { response } = reply_rx.await.unwrap();
// Write reply back to unix socket
stream
.write_all(&(response.len() as u64).to_le_bytes())
.await?;
stream.write_all(&response[..]).await?;
stream.flush().await?;
// Reuse the same memory for the next message
req_buf = response;
}
}
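
In the listen_path mode, a client process can reach this handler over a plain unix socket; a hedged sketch using the MioBrokerClient from this diff, with a made-up socket path:

use rosenpass_wireguard_broker::brokers::mio_client::MioBrokerClient;

// Sketch only: the socket path is a placeholder and has to match the path the
// socket handler above was told to listen on.
fn connect_to_handler() -> anyhow::Result<MioBrokerClient> {
    let stream = mio::net::UnixStream::connect("/run/rosenpass/psk-broker.sock")?;
    Ok(MioBrokerClient::new(stream))
}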

View File

@@ -0,0 +1,259 @@
use anyhow::{bail, ensure};
use mio::Interest;
use std::collections::VecDeque;
use std::io::{ErrorKind, Read, Write};
use crate::{SerializedBrokerConfig, WireGuardBroker, WireguardBrokerMio};
use crate::api::client::{
BrokerClient, BrokerClientIo, BrokerClientPollResponseError, BrokerClientSetPskError,
};
use crate::api::msgs::{self, RESPONSE_MSG_BUFFER_SIZE};
#[derive(Debug)]
pub struct MioBrokerClient {
inner: BrokerClient<MioBrokerClientIo>,
}
const LEN_SIZE: usize = 8;
const RECV_BUF_SIZE: usize = RESPONSE_MSG_BUFFER_SIZE;
#[derive(Debug)]
struct MioBrokerClientIo {
socket: mio::net::UnixStream,
send_buf: VecDeque<u8>,
recv_state: RxState,
expected_state: RxState,
recv_buf: [u8; RECV_BUF_SIZE],
}
#[derive(Debug, Clone, Copy)]
enum RxState {
// Receiving the size (length prefix), with the current buffer offset
RxSize(usize),
RxBuffer(usize),
}
impl MioBrokerClient {
pub fn new(socket: mio::net::UnixStream) -> Self {
let io = MioBrokerClientIo {
socket,
send_buf: VecDeque::new(),
recv_state: RxState::RxSize(0),
recv_buf: [0u8; RECV_BUF_SIZE],
expected_state: RxState::RxSize(LEN_SIZE),
};
let inner = BrokerClient::new(io);
Self { inner }
}
fn poll(&mut self) -> anyhow::Result<Option<msgs::SetPskResult>> {
self.inner.io_mut().flush()?;
// This sucks
match self.inner.poll_response() {
Ok(res) => {
return Ok(res);
}
Err(BrokerClientPollResponseError::IoError(e)) => {
return Err(e);
}
Err(BrokerClientPollResponseError::InvalidMessage) => {
bail!("Invalid message");
}
};
}
}
impl WireGuardBroker for MioBrokerClient {
type Error = anyhow::Error;
fn set_psk<'a>(&mut self, config: SerializedBrokerConfig<'a>) -> anyhow::Result<()> {
use BrokerClientSetPskError::*;
let e = self.inner.set_psk(config);
match e {
Ok(()) => Ok(()),
Err(IoError(e)) => Err(e),
Err(IfaceOutOfBounds) => bail!("Interface name size is out of bounds."),
Err(MsgError) => bail!("Error with encoding/decoding message."),
Err(BrokerError(e)) => bail!("Broker error: {:?}", e),
}
}
}
impl WireguardBrokerMio for MioBrokerClient {
type MioError = anyhow::Error;
fn register(
&mut self,
registry: &mio::Registry,
token: mio::Token,
) -> Result<(), Self::MioError> {
registry.register(
&mut self.inner.io_mut().socket,
token,
Interest::READABLE | Interest::WRITABLE,
)?;
Ok(())
}
fn process_poll(&mut self) -> Result<(), Self::MioError> {
self.poll()?;
Ok(())
}
fn unregister(&mut self, registry: &mio::Registry) -> Result<(), Self::MioError> {
registry.deregister(&mut self.inner.io_mut().socket)?;
Ok(())
}
}
impl BrokerClientIo for MioBrokerClientIo {
type SendError = anyhow::Error;
type RecvError = anyhow::Error;
fn send_msg(&mut self, buf: &[u8]) -> Result<(), Self::SendError> {
self.flush()?;
self.send_or_buffer(&(buf.len() as u64).to_le_bytes())?;
self.send_or_buffer(&buf)?;
self.flush()?;
Ok(())
}
fn recv_msg(&mut self) -> Result<Option<&[u8]>, Self::RecvError> {
loop {
match (self.recv_state, self.expected_state) {
// Stale buffer state, or we received everything
(RxState::RxSize(x), RxState::RxSize(y))
| (RxState::RxBuffer(x), RxState::RxBuffer(y))
if x == y =>
{
match self.recv_state {
RxState::RxSize(s) => {
let len: &[u8; LEN_SIZE] = self.recv_buf[0..s].try_into().unwrap();
let len: usize = u64::from_le_bytes(*len) as usize;
ensure!(
len <= msgs::RESPONSE_MSG_BUFFER_SIZE,
"Oversized buffer ({len}) in psk buffer response."
);
self.recv_state = RxState::RxBuffer(0);
self.expected_state = RxState::RxBuffer(len);
continue;
}
RxState::RxBuffer(s) => {
self.recv_state = RxState::RxSize(0);
self.expected_state = RxState::RxSize(LEN_SIZE);
return Ok(Some(&self.recv_buf[0..s]));
}
}
}
// Receive if x < y
(RxState::RxSize(x), RxState::RxSize(y))
| (RxState::RxBuffer(x), RxState::RxBuffer(y))
if x < y =>
{
let bytes = raw_recv(&self.socket, &mut self.recv_buf[x..y])?;
if x + bytes == y {
return Ok(Some(&self.recv_buf[0..y]));
}
// We didn't receive everything, so assume something went wrong
self.recv_state = RxState::RxSize(0);
self.expected_state = RxState::RxSize(LEN_SIZE);
bail!("Invalid state");
}
_ => {
//Reset states
self.recv_state = RxState::RxSize(0);
self.expected_state = RxState::RxSize(LEN_SIZE);
bail!("Invalid state");
}
};
}
}
}
impl MioBrokerClientIo {
fn flush(&mut self) -> anyhow::Result<()> {
let (fst, snd) = self.send_buf.as_slices();
let (written, res) = match raw_send(&self.socket, fst) {
Ok(w1) if w1 >= fst.len() => match raw_send(&self.socket, snd) {
Ok(w2) => (w1 + w2, Ok(())),
Err(e) => (w1, Err(e)),
},
Ok(w1) => (w1, Ok(())),
Err(e) => (0, Err(e)),
};
self.send_buf.drain(..written);
(&self.socket).try_io(|| (&self.socket).flush())?;
res
}
fn send_or_buffer(&mut self, buf: &[u8]) -> anyhow::Result<()> {
let mut off = 0;
if self.send_buf.is_empty() {
off += raw_send(&self.socket, buf)?;
}
self.send_buf.extend((&buf[off..]).iter());
Ok(())
}
}
fn raw_send(mut socket: &mio::net::UnixStream, data: &[u8]) -> anyhow::Result<usize> {
let mut off = 0;
socket.try_io(|| {
loop {
if off == data.len() {
return Ok(());
}
match socket.write(&data[off..]) {
Ok(n) => {
off += n;
}
Err(e) if e.kind() == ErrorKind::Interrupted => {
// pass retry
}
Err(e) if off > 0 || e.kind() == ErrorKind::WouldBlock => return Ok(()),
Err(e) => return Err(e),
}
}
})?;
return Ok(off);
}
fn raw_recv(mut socket: &mio::net::UnixStream, out: &mut [u8]) -> anyhow::Result<usize> {
let mut off = 0;
socket.try_io(|| {
loop {
if off == out.len() {
return Ok(());
}
match socket.read(&mut out[off..]) {
Ok(n) => {
off += n;
}
Err(e) if e.kind() == ErrorKind::Interrupted => {
// pass retry
}
Err(e) if off > 0 || e.kind() == ErrorKind::WouldBlock => return Ok(()),
Err(e) => return Err(e),
}
}
})?;
return Ok(off);
}

View File

@@ -0,0 +1,6 @@
#[cfg(feature = "enable_broker_api")]
pub mod mio_client;
#[cfg(feature = "enable_broker_api")]
pub mod netlink;
pub mod native_unix;

View File

@@ -0,0 +1,177 @@
use std::fmt::Debug;
use std::process::{Command, Stdio};
use std::thread;
use derive_builder::Builder;
use log::{debug, error};
use postcard::{from_bytes, to_allocvec};
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_util::b64::b64_decode;
use rosenpass_util::{b64::B64Display, file::StoreValueB64Writer};
use crate::{SerializedBrokerConfig, WireGuardBroker, WireguardBrokerCfg, WireguardBrokerMio};
use crate::{WG_KEY_LEN, WG_PEER_LEN};
const MAX_B64_KEY_SIZE: usize = WG_KEY_LEN * 5 / 3;
const MAX_B64_PEER_ID_SIZE: usize = WG_PEER_LEN * 5 / 3;
#[derive(Debug)]
pub struct NativeUnixBroker {}
impl Default for NativeUnixBroker {
fn default() -> Self {
Self::new()
}
}
impl NativeUnixBroker {
pub fn new() -> Self {
Self {}
}
}
impl WireGuardBroker for NativeUnixBroker {
type Error = anyhow::Error;
fn set_psk(&mut self, config: SerializedBrokerConfig<'_>) -> Result<(), Self::Error> {
let config: NativeUnixBrokerConfig = config.try_into()?;
let peer_id = format!("{}", config.peer_id.fmt_b64::<MAX_B64_PEER_ID_SIZE>());
let mut child = match Command::new("wg")
.arg("set")
.arg(config.interface)
.arg("peer")
.arg(peer_id)
.arg("preshared-key")
.arg("/dev/stdin")
.stdin(Stdio::piped())
.args(config.extra_params)
.spawn()
{
Ok(x) => x,
Err(e) => {
if e.kind() == std::io::ErrorKind::NotFound {
anyhow::bail!("Could not find wg command");
} else {
return Err(anyhow::Error::new(e));
}
}
};
if let Err(e) = config
.psk
.store_b64_writer::<MAX_B64_KEY_SIZE, _>(child.stdin.take().unwrap())
{
error!("could not write psk to wg: {:?}", e);
}
thread::spawn(move || {
let status = child.wait();
if let Ok(status) = status {
if status.success() {
debug!("successfully passed psk to wg")
} else {
error!("could not pass psk to wg {:?}", status)
}
} else {
error!("wait failed: {:?}", status)
}
});
Ok(())
}
}
impl WireguardBrokerMio for NativeUnixBroker {
type MioError = anyhow::Error;
fn register(
&mut self,
_registry: &mio::Registry,
_token: mio::Token,
) -> Result<(), Self::MioError> {
Ok(())
}
fn process_poll(&mut self) -> Result<(), Self::MioError> {
Ok(())
}
fn unregister(&mut self, _registry: &mio::Registry) -> Result<(), Self::MioError> {
Ok(())
}
}
#[derive(Debug, Builder)]
#[builder(pattern = "mutable")]
pub struct NativeUnixBrokerConfigBase {
pub interface: String,
pub peer_id: Public<WG_PEER_LEN>,
#[builder(private)]
pub extra_params: Vec<u8>,
}
impl NativeUnixBrokerConfigBaseBuilder {
pub fn peer_id_b64(
&mut self,
peer_id: &str,
) -> Result<&mut Self, NativeUnixBrokerConfigBaseBuilderError> {
let mut peer_id_b64 = Public::<WG_PEER_LEN>::zero();
b64_decode(peer_id.as_bytes(), &mut peer_id_b64.value).map_err(|_e| {
NativeUnixBrokerConfigBaseBuilderError::ValidationError(
"Failed to parse peer id b64".to_string(),
)
})?;
Ok(self.peer_id(peer_id_b64))
}
pub fn extra_params_ser(
&mut self,
extra_params: &Vec<String>,
) -> Result<&mut Self, NativeUnixBrokerConfigBuilderError> {
let params = to_allocvec(extra_params).map_err(|_e| {
NativeUnixBrokerConfigBuilderError::ValidationError(
"Failed to parse extra params".to_string(),
)
})?;
Ok(self.extra_params(params))
}
}
impl WireguardBrokerCfg for NativeUnixBrokerConfigBase {
fn create_config<'a>(&'a self, psk: &'a Secret<WG_KEY_LEN>) -> SerializedBrokerConfig<'a> {
SerializedBrokerConfig {
interface: self.interface.as_bytes(),
peer_id: &self.peer_id,
psk,
additional_params: &self.extra_params,
}
}
}
#[derive(Debug, Builder)]
#[builder(pattern = "mutable")]
pub struct NativeUnixBrokerConfig<'a> {
pub interface: &'a str,
pub peer_id: &'a Public<WG_PEER_LEN>,
pub psk: &'a Secret<WG_KEY_LEN>,
pub extra_params: Vec<String>,
}
impl<'a> TryFrom<SerializedBrokerConfig<'a>> for NativeUnixBrokerConfig<'a> {
type Error = anyhow::Error;
fn try_from(value: SerializedBrokerConfig<'a>) -> Result<Self, Self::Error> {
let iface = std::str::from_utf8(value.interface)
.map_err(|_| anyhow::Error::msg("Interface UTF8 decoding error"))?;
let extra_params: Vec<String> =
from_bytes(value.additional_params).map_err(anyhow::Error::new)?;
Ok(Self {
interface: iface,
peer_id: value.peer_id,
psk: value.psk,
extra_params,
})
}
}
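
A hedged sketch of wiring the above together: build a NativeUnixBrokerConfigBase with the builder, then hand the serialized config to NativeUnixBroker::set_psk. The interface name and the all-zero peer id are placeholders, not values from this diff:

use rosenpass_secret_memory::{Public, Secret};
use rosenpass_wireguard_broker::brokers::native_unix::{
    NativeUnixBroker, NativeUnixBrokerConfigBaseBuilder,
};
use rosenpass_wireguard_broker::{WireGuardBroker, WireguardBrokerCfg, WG_KEY_LEN, WG_PEER_LEN};

// Sketch only: "wg0" and the zeroed peer id are placeholder values.
fn push_psk_to_wg(psk: &Secret<WG_KEY_LEN>) -> anyhow::Result<()> {
    let cfg = NativeUnixBrokerConfigBaseBuilder::default()
        .interface("wg0".to_string())
        .peer_id(Public::<WG_PEER_LEN>::zero())
        .extra_params_ser(&vec![])?
        .build()?;
    let mut broker = NativeUnixBroker::new();
    broker.set_psk(cfg.create_config(psk))
}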

View File

@@ -0,0 +1,112 @@
use std::fmt::Debug;
use wireguard_uapi::linux as wg;
use crate::api::config::NetworkBrokerConfig;
use crate::api::msgs;
use crate::{SerializedBrokerConfig, WireGuardBroker};
#[derive(thiserror::Error, Debug)]
pub enum ConnectError {
#[error(transparent)]
ConnectError(#[from] wg::err::ConnectError),
}
#[derive(thiserror::Error, Debug)]
pub enum NetlinkError {
#[error(transparent)]
SetDevice(#[from] wg::err::SetDeviceError),
#[error(transparent)]
GetDevice(#[from] wg::err::GetDeviceError),
}
#[derive(thiserror::Error, Debug)]
pub enum SetPskError {
#[error("The indicated wireguard interface does not exist")]
NoSuchInterface,
#[error("The indicated peer does not exist on the wireguard interface")]
NoSuchPeer,
#[error(transparent)]
NetlinkError(#[from] NetlinkError),
}
impl From<wg::err::SetDeviceError> for SetPskError {
fn from(err: wg::err::SetDeviceError) -> Self {
NetlinkError::from(err).into()
}
}
impl From<wg::err::GetDeviceError> for SetPskError {
fn from(err: wg::err::GetDeviceError) -> Self {
NetlinkError::from(err).into()
}
}
use msgs::SetPskError as SetPskMsgsError;
use SetPskError as SetPskNetlinkError;
impl From<SetPskNetlinkError> for SetPskMsgsError {
fn from(err: SetPskError) -> Self {
match err {
SetPskNetlinkError::NoSuchPeer => SetPskMsgsError::NoSuchPeer,
_ => SetPskMsgsError::InternalError,
}
}
}
pub struct NetlinkWireGuardBroker {
sock: wg::WgSocket,
}
impl NetlinkWireGuardBroker {
pub fn new() -> Result<Self, ConnectError> {
let sock = wg::WgSocket::connect()?;
Ok(Self { sock })
}
}
impl Debug for NetlinkWireGuardBroker {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
//TODO: Add useful info in Debug
f.debug_struct("NetlinkWireGuardBroker").finish()
}
}
impl WireGuardBroker for NetlinkWireGuardBroker {
type Error = SetPskError;
fn set_psk(&mut self, config: SerializedBrokerConfig) -> Result<(), Self::Error> {
let config: NetworkBrokerConfig = config
.try_into()
.map_err(|_e| SetPskError::NoSuchInterface)?;
// Ensure that the peer exists by querying the device configuration
// TODO: Use InvalidInterfaceError
let state = self
.sock
.get_device(wg::DeviceInterface::from_name(config.iface))?;
if state
.peers
.iter()
.find(|p| &p.public_key == &config.peer_id.value)
.is_none()
{
return Err(SetPskError::NoSuchPeer);
}
// Peer update description
let mut set_peer = wireguard_uapi::set::Peer::from_public_key(&config.peer_id);
set_peer
.flags
.push(wireguard_uapi::linux::set::WgPeerF::UpdateOnly);
set_peer.preshared_key = Some(&config.psk.secret());
// Device update description
let mut set_dev = wireguard_uapi::set::Device::from_ifname(config.iface);
set_dev.peers.push(set_peer);
self.sock.set_device(set_dev)?;
Ok(())
}
}

View File

@@ -0,0 +1,40 @@
use rosenpass_secret_memory::{Public, Secret};
use std::{fmt::Debug, result::Result};
pub const WG_KEY_LEN: usize = 32;
pub const WG_PEER_LEN: usize = 32;
pub trait WireGuardBroker: Debug {
type Error;
fn set_psk(&mut self, config: SerializedBrokerConfig<'_>) -> Result<(), Self::Error>;
}
pub trait WireguardBrokerCfg: Debug {
fn create_config<'a>(&'a self, psk: &'a Secret<WG_KEY_LEN>) -> SerializedBrokerConfig<'a>;
}
#[derive(Debug)]
pub struct SerializedBrokerConfig<'a> {
pub interface: &'a [u8],
pub peer_id: &'a Public<WG_PEER_LEN>,
pub psk: &'a Secret<WG_KEY_LEN>,
pub additional_params: &'a [u8],
}
pub trait WireguardBrokerMio: WireGuardBroker {
type MioError;
/// Register interested events for mio::Registry
fn register(
&mut self,
registry: &mio::Registry,
token: mio::Token,
) -> Result<(), Self::MioError>;
/// Run after a mio::poll operation
fn process_poll(&mut self) -> Result<(), Self::MioError>;
fn unregister(&mut self, registry: &mio::Registry) -> Result<(), Self::MioError>;
}
#[cfg(feature = "enable_broker_api")]
pub mod api;
pub mod brokers;
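
A hedged sketch of how a WireguardBrokerMio implementation is meant to be driven from a mio event loop; the token value and loop structure are illustrative choices, not prescribed by this crate:

use mio::{Events, Poll, Token};
use rosenpass_wireguard_broker::WireguardBrokerMio;

fn drive_broker<B>(broker: &mut B) -> anyhow::Result<()>
where
    B: WireguardBrokerMio,
    B::MioError: std::error::Error + Send + Sync + 'static,
{
    const BROKER: Token = Token(0); // arbitrary token for this sketch
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(8);
    broker.register(poll.registry(), BROKER)?;
    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == BROKER {
                // Run after a mio poll, as the trait documentation above describes
                broker.process_poll()?;
            }
        }
    }
}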

View File

@@ -0,0 +1,140 @@
#[cfg(feature = "enable_broker_api")]
#[cfg(test)]
mod integration_tests {
use rand::Rng;
use rosenpass_secret_memory::{Public, Secret};
use rosenpass_wireguard_broker::api::msgs::{
SetPskError, REQUEST_MSG_BUFFER_SIZE, RESPONSE_MSG_BUFFER_SIZE,
};
use rosenpass_wireguard_broker::api::server::{BrokerServer, BrokerServerError};
use rosenpass_wireguard_broker::brokers::mio_client::MioBrokerClient;
use rosenpass_wireguard_broker::WG_KEY_LEN;
use rosenpass_wireguard_broker::WG_PEER_LEN;
use rosenpass_wireguard_broker::{SerializedBrokerConfig, WireGuardBroker};
use std::io::Read;
use std::sync::{Arc, Mutex};
#[derive(Default, Debug)]
struct MockServerBrokerInner {
psk: Option<Secret<WG_KEY_LEN>>,
peer_id: Option<Public<WG_PEER_LEN>>,
interface: Option<String>,
}
#[derive(Debug)]
struct MockServerBroker {
inner: Arc<Mutex<MockServerBrokerInner>>,
}
impl MockServerBroker {
fn new(inner: Arc<Mutex<MockServerBrokerInner>>) -> Self {
Self { inner }
}
}
impl WireGuardBroker for MockServerBroker {
type Error = SetPskError;
fn set_psk(&mut self, config: SerializedBrokerConfig) -> Result<(), Self::Error> {
loop {
let mut lock = self.inner.try_lock();
if let Ok(ref mut mutex) = lock {
**mutex = MockServerBrokerInner {
psk: Some(config.psk.clone()),
peer_id: Some(config.peer_id.clone()),
interface: Some(std::str::from_utf8(config.interface).unwrap().to_string()),
};
break;
}
}
Ok(())
}
}
procspawn::enable_test_support!();
#[test]
fn test_psk_exchanges() {
const TEST_RUNS: usize = 100;
use rosenpass_secret_memory::test_spawn_process_provided_policies;
test_spawn_process_provided_policies!({
let server_broker_inner = Arc::new(Mutex::new(MockServerBrokerInner::default()));
// Create a mock BrokerServer
let server_broker = MockServerBroker::new(server_broker_inner.clone());
let mut server = BrokerServer::<SetPskError, MockServerBroker>::new(server_broker);
let (client_socket, mut server_socket) = mio::net::UnixStream::pair().unwrap();
// Spawn a new thread to connect to the unix socket
let handle = std::thread::spawn(move || {
for _ in 0..TEST_RUNS {
// Wait for 8 bytes of length to come in
let mut length_buffer = [0; 8];
while let Err(_err) = server_socket.read_exact(&mut length_buffer) {}
let length = u64::from_le_bytes(length_buffer) as usize;
// Read the amount of length bytes into a buffer
let mut data_buffer = [0; REQUEST_MSG_BUFFER_SIZE];
while let Err(_err) = server_socket.read_exact(&mut data_buffer[0..length]) {}
let mut response = [0; RESPONSE_MSG_BUFFER_SIZE];
server.handle_message(&data_buffer[0..length], &mut response)?;
}
Ok::<(), BrokerServerError>(())
});
// Create a MioBrokerClient and send a psk
let mut client = MioBrokerClient::new(client_socket);
for _ in 0..TEST_RUNS {
//Create psk of random 32 bytes
let psk = Secret::random();
let peer_id = Public::random();
let interface = "test";
let config = SerializedBrokerConfig {
psk: &psk,
peer_id: &peer_id,
interface: interface.as_bytes(),
additional_params: &[],
};
client.set_psk(config).unwrap();
//Sleep for a while to allow the server to process the message
std::thread::sleep(std::time::Duration::from_millis(
rand::thread_rng().gen_range(100..500),
));
let psk = psk.secret().to_owned();
loop {
let mut lock = server_broker_inner.try_lock();
if let Ok(ref mut inner) = lock {
// Check if the psk is received by the server
let received_psk = &inner.psk;
assert_eq!(
received_psk.as_ref().map(|psk| psk.secret().to_owned()),
Some(psk)
);
let received_peer_id = inner.peer_id;
assert_eq!(received_peer_id, Some(peer_id));
let target_interface = &inner.interface;
assert_eq!(target_interface.as_deref(), Some(interface));
break;
}
}
}
handle.join().unwrap().unwrap();
});
}
}