diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 885e0a0..2771630 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -15023,6 +15023,8 @@ This section focuses on uses of coroutines.
Coroutine rule summary:
* [CP.51: Do not use capturing lambdas that are coroutines](#Rcoro-capture)
+* [CP.52: Do not hold locks or other synchronization primitives across suspension points](#Rcoro-locks)
+* [CP.53: Parameters to coroutines should not be passed by reference](#Rcoro-reference-parameters)
### CP.51: Do not use capturing lambdas that are coroutines
@@ -15082,6 +15084,84 @@ Use a function for coroutines.
Flag a lambda that is a coroutine and has a non-empty capture list.
+### CP.52: Do not hold locks or other synchronization primitives across suspension points
+
+##### Reason
+
+This pattern creates a significant risk of deadlocks. Some types of waits allow the current thread to perform additional work until the asynchronous operation has completed. If the thread holding the lock performs work that requires the same lock, it will deadlock because it is trying to acquire a lock it is already holding.
+
+If the coroutine completes on a different thread from the thread that acquired the lock, that is undefined behavior, because a mutex must be unlocked by the same thread that locked it. Even with an explicit return to the original thread, an exception might be thrown before the coroutine resumes, with the result that the lock guard is not destructed.
+
+##### Example, Bad
+
+ std::mutex g_lock;
+
+    std::future<void> Class::do_something()
+ {
+ std::lock_guard guard(g_lock);
+ co_await something(); // DANGER: coroutine has suspended execution while holding a lock
+ co_await somethingElse();
+ }
+
+##### Example, Good
+
+ std::mutex g_lock;
+
+    std::future<void> Class::do_something()
+ {
+ {
+ std::lock_guard guard(g_lock);
+ // modify data protected by lock
+ }
+ co_await something(); // OK: lock has been released before coroutine suspends
+ co_await somethingElse();
+ }
+
+##### Note
+
+This pattern is also bad for performance. When a suspension point is reached, such as a `co_await`, execution of the current function stops and other code begins to run. It may be a long time before the coroutine resumes. For that entire duration, the lock is held and cannot be acquired by other threads to perform work.
+
+##### Enforcement
+
+Flag all lock guards that are not destructed before a coroutine suspends.
+
+### CP.53: Parameters to coroutines should not be passed by reference
+
+##### Reason
+
+Once a coroutine reaches the first suspension point, such as a `co_await`, the synchronous portion returns to the caller. After that point, any parameters passed by reference are dangling. Any use of them beyond that point is undefined behavior, which may include writing to freed memory.
+
+##### Example, Bad
+
+    std::future<int> Class::do_something(const std::shared_ptr<int>& input)
+ {
+ co_await something();
+
+        // DANGER: the reference to input may no longer be valid; input may be freed memory
+ co_return *input + 1;
+ }
+
+##### Example, Good
+
+    std::future<int> Class::do_something(std::shared_ptr<int> input)
+ {
+ co_await something();
+ co_return *input + 1; // input is a copy that is still valid here
+ }
+
+##### Note
+
+This problem does not apply to reference parameters that are only accessed before the first suspension point, but subsequent changes to the function may add or move suspension points, which would reintroduce this class of bug. Some types of coroutines suspend before the first line of the coroutine body executes, in which case reference parameters are always unsafe. It is safer to always pass by value, because the copied parameter lives in the coroutine frame, which is safe to access throughout the coroutine.
+
+##### Note
+
+The same danger applies to output parameters. [F.20: For "out" output values, prefer return values to output parameters](#Rf-out) discourages output parameters. Coroutines should avoid them entirely.
+
+##### Enforcement
+
+Flag all reference parameters to a coroutine.
+
## CP.par: Parallelism
By "parallelism" we refer to performing a task (more or less) simultaneously ("in parallel with") on many data items.