commit 043225d523
2025-10-25 03:02:53 +03:00
3416 changed files with 681196 additions and 0 deletions

cppdraft/atomics/alias.md (new file, 30 lines)

[atomics.alias]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#alias)
### 32.5.3 Type aliases [atomics.alias]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2857)
The type aliases atomic_intN_t, atomic_uintN_t, atomic_intptr_t, and atomic_uintptr_t are defined if and only if intN_t, uintN_t, intptr_t, and uintptr_t are defined, respectively[.](#1.sentence-1)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2865)
The type aliases atomic_signed_lock_free and atomic_unsigned_lock_free name specializations of atomic whose template arguments are integral types, respectively signed and unsigned,
and whose is_always_lock_free property is true[.](#2.sentence-1)
[*Note [1](#note-1)*:
These aliases are optional in freestanding implementations ([[compliance]](compliance "16.4.2.5Freestanding implementations"))[.](#2.sentence-2)
— *end note*]
Implementations should choose for these aliases
the integral specializations of atomic for which the atomic waiting and notifying operations ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))
are most efficient[.](#2.sentence-3)
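
For illustration, a minimal sketch of how these aliases might be used on a hosted implementation that provides them (the function bump and the object names are illustrative, not part of the wording):

```cpp
#include <atomic>
#include <cstdint>

// Defined if and only if intptr_t is defined in <cstdint>.
std::atomic_intptr_t tagged{0};

// Optional in freestanding implementations; always lock-free where provided.
std::atomic_signed_lock_free counter{0};
static_assert(std::atomic_signed_lock_free::is_always_lock_free);

void bump() {
  counter.fetch_add(1, std::memory_order::relaxed);
  counter.notify_one();  // the alias is intended to make wait/notify efficient
}
```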

cppdraft/atomics/fences.md (new file, 110 lines)

[atomics.fences]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#fences)
### 32.5.11 Fences [atomics.fences]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7033)
This subclause introduces synchronization primitives called [*fences*](#def:fences)[.](#1.sentence-1)
Fences can have
acquire semantics, release semantics, or both[.](#1.sentence-2)
A fence with acquire semantics is called
an [*acquire fence*](#def:acquire_fence)[.](#1.sentence-3)
A fence with release semantics is called a [*release
fence*](#def:release_fence)[.](#1.sentence-4)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7039)
A release fence A synchronizes with an acquire fence B if there exist
atomic operations X and Y,
where Y is not an atomic modify-write operation ([[atomics.order]](atomics.order "32.5.4Order and consistency")),
both operating on some atomic object M, such that A is sequenced before X, X modifies M, Y is sequenced before B, and Y reads the value
written by X or a value written by any side effect in the hypothetical release
sequence X would head if it were a release operation[.](#2.sentence-1)
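For illustration, a minimal sketch of the fence-to-fence case described above; the comments name A, X, Y, and B as in the wording, and producer/consumer are assumed to run on different threads:

```cpp
#include <atomic>
#include <cassert>

int data = 0;              // ordinary (non-atomic) object
std::atomic<int> flag{0};  // the atomic object M

void producer() {
  data = 42;
  std::atomic_thread_fence(std::memory_order::release);  // A: release fence
  flag.store(1, std::memory_order::relaxed);             // X: modifies M
}

void consumer() {
  while (flag.load(std::memory_order::relaxed) != 1) {}   // Y: reads the value written by X
  std::atomic_thread_fence(std::memory_order::acquire);   // B: acquire fence
  assert(data == 42);  // A synchronizes with B, so the write to data is visible
}
```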
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7049)
A release fence A synchronizes with an atomic operation B that
performs an acquire operation on an atomic object M if there exists an atomic
operation X such that A is sequenced before X, X modifies M, and B reads the value written by X or a value
written by any side effect in the hypothetical release sequence X would head if
it were a release operation[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7057)
An atomic operation A that is a release operation on an atomic object M synchronizes with an acquire fence B if there exists some atomic
operation X on M such that X is sequenced before B and reads the value written by A or a value written by any side effect in the
release sequence headed by A[.](#4.sentence-1)
[🔗](#lib:atomic_thread_fence)
`extern "C" constexpr void atomic_thread_fence(memory_order order) noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7070)
*Effects*: Depending on the value of order, this operation:
- [(5.1)](#5.1)
has no effects, if order == memory_order::relaxed;
- [(5.2)](#5.2)
is an acquire fence, if order == memory_order::acquire;
- [(5.3)](#5.3)
is a release fence, if order == memory_order::release;
- [(5.4)](#5.4)
is both an acquire fence and a release fence, if order == memory_order::acq_rel;
- [(5.5)](#5.5)
is a sequentially consistent acquire and release fence, if order == memory_order::seq_cst[.](#5.sentence-1)
[🔗](#lib:atomic_signal_fence)
`extern "C" constexpr void atomic_signal_fence(memory_order order) noexcept;
`
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7092)
*Effects*: Equivalent to atomic_thread_fence(order), except that
the resulting ordering constraints are established only between a thread and a
signal handler executed in the same thread[.](#6.sentence-1)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7098)
[*Note [1](#note-1)*:
atomic_signal_fence can be used to specify the order in which actions
performed by the thread become visible to the signal handler[.](#7.sentence-1)
Compiler optimizations and reorderings of loads and stores are inhibited in
the same way as with atomic_thread_fence, but the hardware fence instructions
that atomic_thread_fence would have inserted are not emitted[.](#7.sentence-2)
— *end note*]
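
For illustration, a sketch of the usage pattern the note describes, assuming std::atomic<int> is lock-free so it can be used in a signal handler; the names handler, ready, and data are illustrative:

```cpp
#include <atomic>
#include <csignal>

int data = 0;              // ordinary object shared with the handler (same thread)
std::atomic<int> ready{0};

extern "C" void handler(int) {
  if (ready.load(std::memory_order::relaxed) == 1) {
    std::atomic_signal_fence(std::memory_order::acquire);  // orders the load above before the read below
    int observed = data;  // the write to data in main is visible here
    (void)observed;
  }
}

int main() {
  std::signal(SIGINT, handler);
  data = 42;
  std::atomic_signal_fence(std::memory_order::release);  // compiler-only fence; no hardware fence emitted
  ready.store(1, std::memory_order::relaxed);
}
```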

cppdraft/atomics/flag.md (new file, 249 lines)

[atomics.flag]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#flag)
### 32.5.10 Flag type and operations [atomics.flag]
`namespace std {
  struct atomic_flag {
    constexpr atomic_flag() noexcept;
    atomic_flag(const atomic_flag&) = delete;
    atomic_flag& operator=(const atomic_flag&) = delete;
    atomic_flag& operator=(const atomic_flag&) volatile = delete;

    bool test(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr bool test(memory_order = memory_order::seq_cst) const noexcept;
    bool test_and_set(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool test_and_set(memory_order = memory_order::seq_cst) noexcept;
    void clear(memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void clear(memory_order = memory_order::seq_cst) noexcept;

    void wait(bool, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(bool, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept;
    constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept;
    constexpr void notify_all() noexcept;
  };
}
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6822)
The atomic_flag type provides the classic test-and-set functionality[.](#1.sentence-1)
It has two states, set and clear[.](#1.sentence-2)
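For illustration, the classic application of this functionality is a simple spin lock; a minimal sketch (the class name spin_lock and the wait/notify strategy are illustrative):

```cpp
#include <atomic>

class spin_lock {
  std::atomic_flag flag_;  // value-initialized to the clear state
public:
  void lock() {
    while (flag_.test_and_set(std::memory_order::acquire)) {
      // Block until notified (or woken spuriously), then retry.
      flag_.wait(true, std::memory_order::relaxed);
    }
  }
  void unlock() {
    flag_.clear(std::memory_order::release);
    flag_.notify_one();
  }
};
```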
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6825)
Operations on an object of type atomic_flag shall be lock-free[.](#2.sentence-1)
The operations should also be address-free[.](#2.sentence-2)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6829)
The atomic_flag type is a standard-layout struct[.](#3.sentence-1)
It has a trivial destructor[.](#3.sentence-2)
[🔗](#lib:atomic_flag,constructor)
`constexpr atomic_flag::atomic_flag() noexcept;
`
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6839)
*Effects*: Initializes *this to the clear state[.](#4.sentence-1)
[🔗](#lib:atomic_flag_test)
`bool atomic_flag_test(const volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test(const atomic_flag* object) noexcept;
bool atomic_flag_test_explicit(const volatile atomic_flag* object,
memory_order order) noexcept;
constexpr bool atomic_flag_test_explicit(const atomic_flag* object,
memory_order order) noexcept;
bool atomic_flag::test(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr bool atomic_flag::test(memory_order order = memory_order::seq_cst) const noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6859)
For atomic_flag_test, let order be memory_order::seq_cst[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6862)
*Preconditions*: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#6.sentence-1)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6869)
*Effects*: Memory is affected according to the value of order[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6873)
*Returns*: Atomically returns the value pointed to by object or this[.](#8.sentence-1)
[🔗](#lib:atomic_flag_test_and_set)
`bool atomic_flag_test_and_set(volatile atomic_flag* object) noexcept;
constexpr bool atomic_flag_test_and_set(atomic_flag* object) noexcept;
bool atomic_flag_test_and_set_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr bool atomic_flag_test_and_set_explicit(atomic_flag* object, memory_order order) noexcept;
bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool atomic_flag::test_and_set(memory_order order = memory_order::seq_cst) noexcept;
`
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6891)
*Effects*: Atomically sets the value pointed to by object or by this to true[.](#9.sentence-1)
Memory is affected according to the value of order[.](#9.sentence-2)
These operations are atomic read-modify-write operations ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#9.sentence-3)
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6896)
*Returns*: Atomically, the value of the object immediately before the effects[.](#10.sentence-1)
[🔗](#lib:atomic_flag_clear)
`void atomic_flag_clear(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_clear(atomic_flag* object) noexcept;
void atomic_flag_clear_explicit(volatile atomic_flag* object, memory_order order) noexcept;
constexpr void atomic_flag_clear_explicit(atomic_flag* object, memory_order order) noexcept;
void atomic_flag::clear(memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void atomic_flag::clear(memory_order order = memory_order::seq_cst) noexcept;
`
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6914)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#11.sentence-1)
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6921)
*Effects*: Atomically sets the value pointed to by object or by this to false[.](#12.sentence-1)
Memory is affected according to the value of order[.](#12.sentence-2)
[🔗](#lib:atomic_flag_wait)
`void atomic_flag_wait(const volatile atomic_flag* object, bool old) noexcept;
constexpr void atomic_flag_wait(const atomic_flag* object, bool old) noexcept;
void atomic_flag_wait_explicit(const volatile atomic_flag* object,
bool old, memory_order order) noexcept;
constexpr void atomic_flag_wait_explicit(const atomic_flag* object,
bool old, memory_order order) noexcept;
void atomic_flag::wait(bool old, memory_order order =
memory_order::seq_cst) const volatile noexcept;
constexpr void atomic_flag::wait(bool old, memory_order order =
memory_order::seq_cst) const noexcept;
`
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6944)
For atomic_flag_wait,
let order be memory_order::seq_cst[.](#13.sentence-1)
Let flag be object for the non-member functions and this for the member functions[.](#13.sentence-2)
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6950)
*Preconditions*: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#14.sentence-1)
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6957)
*Effects*: Repeatedly performs the following steps, in order:
- [(15.1)](#15.1)
Evaluates flag->test(order) != old[.](#15.1.sentence-1)
- [(15.2)](#15.2)
If the result of that evaluation is true, returns[.](#15.2.sentence-1)
- [(15.3)](#15.3)
Blocks until it
is unblocked by an atomic notifying operation or is unblocked spuriously[.](#15.3.sentence-1)
[16](#16)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6970)
*Remarks*: This function is an atomic waiting operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))[.](#16.sentence-1)
[🔗](#itemdecl:6)
`void atomic_flag_notify_one(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_one(atomic_flag* object) noexcept;
void atomic_flag::notify_one() volatile noexcept;
constexpr void atomic_flag::notify_one() noexcept;
`
[17](#17)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6983)
*Effects*: Unblocks the execution of at least one atomic waiting operation
that is eligible to be unblocked ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying")) by this call,
if any such atomic waiting operations exist[.](#17.sentence-1)
[18](#18)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6989)
*Remarks*: This function is an atomic notifying operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))[.](#18.sentence-1)
[🔗](#itemdecl:7)
`void atomic_flag_notify_all(volatile atomic_flag* object) noexcept;
constexpr void atomic_flag_notify_all(atomic_flag* object) noexcept;
void atomic_flag::notify_all() volatile noexcept;
constexpr void atomic_flag::notify_all() noexcept;
`
[19](#19)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7002)
*Effects*: Unblocks the execution of all atomic waiting operations
that are eligible to be unblocked ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying")) by this call[.](#19.sentence-1)
[20](#20)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7007)
*Remarks*: This function is an atomic notifying operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))[.](#20.sentence-1)
[🔗](#itemdecl:8)
`#define ATOMIC_FLAG_INIT see below
`
[21](#21)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L7017)
*Remarks*: The macro ATOMIC_FLAG_INIT is defined in such a way that
it can be used to initialize an object of type atomic_flag to the clear state[.](#21.sentence-1)
The macro can be used in the form: `atomic_flag guard = ATOMIC_FLAG_INIT;`
It is unspecified whether the macro can be used
in other initialization contexts[.](#21.sentence-3)
For a complete static-duration object, that initialization shall be static[.](#21.sentence-4)

[atomics.general]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#general)
### 32.5.1 General [atomics.general]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2388)
Subclause [[atomics]](atomics "32.5Atomic operations") describes components for fine-grained atomic access[.](#1.sentence-1)
This access is provided via operations on atomic objects[.](#1.sentence-2)

[atomics.lockfree]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#lockfree)
### 32.5.5 Lock-free property [atomics.lockfree]
[🔗](#:values_of_various_ATOMIC_..._LOCK_FREE_macros)
`#define ATOMIC_BOOL_LOCK_FREE *unspecified*
#define ATOMIC_CHAR_LOCK_FREE *unspecified*
#define ATOMIC_CHAR8_T_LOCK_FREE *unspecified*
#define ATOMIC_CHAR16_T_LOCK_FREE *unspecified*
#define ATOMIC_CHAR32_T_LOCK_FREE *unspecified*
#define ATOMIC_WCHAR_T_LOCK_FREE *unspecified*
#define ATOMIC_SHORT_LOCK_FREE *unspecified*
#define ATOMIC_INT_LOCK_FREE *unspecified*
#define ATOMIC_LONG_LOCK_FREE *unspecified*
#define ATOMIC_LLONG_LOCK_FREE *unspecified*
#define ATOMIC_POINTER_LOCK_FREE *unspecified*
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3101)
The ATOMIC_..._LOCK_FREE macros indicate the lock-free property of the
corresponding atomic types, with the signed and unsigned variants grouped
together[.](#1.sentence-1)
The properties also apply to the corresponding (partial) specializations of the atomic template[.](#1.sentence-2)
A value of 0 indicates that the types are never
lock-free[.](#1.sentence-3)
A value of 1 indicates that the types are sometimes lock-free[.](#1.sentence-4)
A
value of 2 indicates that the types are always lock-free[.](#1.sentence-5)
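For illustration, a small sketch querying these macros at preprocessing time together with the run-time query (the output format is illustrative):

```cpp
#include <atomic>
#include <cstdio>

#if ATOMIC_INT_LOCK_FREE == 2
// atomic<int> and atomic<unsigned int> are always lock-free on this implementation.
#endif

int main() {
  std::atomic<long> x{0};
  std::printf("ATOMIC_LONG_LOCK_FREE = %d, x.is_lock_free() = %d\n",
              ATOMIC_LONG_LOCK_FREE, static_cast<int>(x.is_lock_free()));
}
```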
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3109)
On a hosted implementation ([[compliance]](compliance "16.4.2.5Freestanding implementations")),
at least one signed integral specialization of the atomic template,
along with the specialization
for the corresponding unsigned type ([[basic.fundamental]](basic.fundamental "6.9.2Fundamental types")),
is always lock-free[.](#2.sentence-1)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3116)
The functions atomic<T>::is_lock_free and atomic_is_lock_free ([[atomics.types.operations]](atomics.types.operations "32.5.8.2Operations on atomic types"))
indicate whether the object is lock-free[.](#3.sentence-1)
In any given program execution, the
result of the lock-free query
is the same for all atomic objects of the same type[.](#3.sentence-2)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3123)
Atomic operations that are not lock-free are considered to potentially
block ([[intro.progress]](intro.progress "6.10.2.3Forward progress"))[.](#4.sentence-1)
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3127)
*Recommended practice*: Operations that are lock-free should also be address-free[.](#5.sentence-1)[294](#footnote-294 "That is, atomic operations on the same memory location via two different addresses will communicate atomically.")
The implementation of these operations should not depend on any per-process state[.](#5.sentence-2)
[*Note [1](#note-1)*:
This restriction enables communication by memory that is
mapped into a process more than once and by memory that is shared between two
processes[.](#5.sentence-3)
— *end note*]
[294)](#footnote-294)
That is,
atomic operations on the same memory location via two different addresses will
communicate atomically[.](#footnote-294.sentence-1)

[atomics.nonmembers]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#nonmembers)
### 32.5.9 Non-member functions [atomics.nonmembers]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6779)
A non-member function template whose name matches the pattern atomic_*f* or the pattern atomic_*f*_explicit invokes the member function *f*, with the value of the
first parameter as the object expression and the values of the remaining parameters
(if any) as the arguments of the member function call, in order[.](#1.sentence-1)
An argument
for a parameter of type atomic<T>::value_type* is dereferenced when
passed to the member function call[.](#1.sentence-2)
If no such member function exists, the program is ill-formed[.](#1.sentence-3)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6789)
[*Note [1](#note-1)*:
The non-member functions enable programmers to write code that can be
compiled as either C or C++, for example in a shared header file[.](#2.sentence-1)
— *end note*]
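
For illustration, a sketch of the correspondence described above (the object counter is illustrative):

```cpp
#include <atomic>

std::atomic<int> counter{0};

void demo() {
  // Non-member form: invokes the member function on the first argument.
  std::atomic_store_explicit(&counter, 5, std::memory_order::release);
  // Equivalent member call:
  counter.store(5, std::memory_order::release);

  int expected = 5;
  // The atomic<T>::value_type* parameter (expected) is dereferenced when
  // passed on to counter.compare_exchange_strong(expected, 7).
  std::atomic_compare_exchange_strong(&counter, &expected, 7);
}
```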

cppdraft/atomics/order.md (new file, 233 lines)

[atomics.order]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#order)
### 32.5.4 Order and consistency [atomics.order]
`namespace std {
  enum class memory_order : *unspecified* {
    relaxed = 0,
    acquire = 2,
    release = 3,
    acq_rel = 4,
    seq_cst = 5
  };
}
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2903)
The enumeration memory_order specifies the detailed regular
(non-atomic) memory synchronization order as defined in [[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races") and may provide for operation ordering[.](#1.sentence-1)
Its
enumerated values and their meanings are as follows:
- [(1.1)](#1.1)
memory_order::relaxed: no operation orders memory[.](#1.1.sentence-1)
- [(1.2)](#1.2)
memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the
affected memory location[.](#1.2.sentence-1)
- [(1.3)](#1.3)
memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the
affected memory location[.](#1.3.sentence-1)
[*Note [1](#note-1)*:
Atomic operations specifying memory_order::relaxed are relaxed
with respect to memory ordering[.](#1.sentence-3)
Implementations must still guarantee that any
given atomic access to a particular atomic object be indivisible with respect
to all other atomic accesses to that object[.](#1.sentence-4)
— *end note*]
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2928)
An atomic operation A that performs a release operation on an atomic
object M synchronizes with an atomic operation B that performs
an acquire operation on M and takes its value from any side effect in the
release sequence headed by A[.](#2.sentence-1)
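For illustration, a minimal release/acquire sketch of the synchronizes-with relationship above; writer and reader are assumed to run on different threads, and payload is illustrative:

```cpp
#include <atomic>
#include <cassert>
#include <string>

std::string payload;
std::atomic<bool> ready{false};  // the atomic object M

void writer() {
  payload = "hello";
  ready.store(true, std::memory_order::release);      // A: release operation on M
}

void reader() {
  while (!ready.load(std::memory_order::acquire)) {}  // B: acquire operation reading A's value
  assert(payload == "hello");  // A synchronizes with B, so the write to payload is visible
}
```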
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2934)
An atomic operation A on some atomic object M is [*coherence-ordered before*](#def:coherence-ordered_before "32.5.4Order and consistency[atomics.order]") another atomic operation B on M if
- [(3.1)](#3.1)
A is a modification, and B reads the value stored by A, or
- [(3.2)](#3.2)
A precedes B in the modification order of M, or
- [(3.3)](#3.3)
A and B are not
the same atomic read-modify-write operation, and
there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M, or
- [(3.4)](#3.4)
there exists an atomic modification X of M such that A is coherence-ordered before X and X is coherence-ordered before B[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2954)
There is a single total order S on all memory_order::seq_cst operations, including fences,
that satisfies the following constraints[.](#4.sentence-1)
First, if A and B are memory_order::seq_cst operations and A strongly happens before B,
then A precedes B in S[.](#4.sentence-2)
Second, for every pair of atomic operations A and B on an object M,
where A is coherence-ordered before B,
the following four conditions are required to be satisfied by S:
- [(4.1)](#4.1)
if A and B are both memory_order::seq_cst operations,
then A precedes B in S; and
- [(4.2)](#4.2)
if A is a memory_order::seq_cst operation and B happens before
a memory_order::seq_cst fence Y,
then A precedes Y in S; and
- [(4.3)](#4.3)
if a memory_order::seq_cst fence X happens before A and B is a memory_order::seq_cst operation,
then X precedes B in S; and
- [(4.4)](#4.4)
if a memory_order::seq_cst fence X happens before A and B happens before
a memory_order::seq_cst fence Y,
then X precedes Y in S[.](#4.sentence-3)
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2985)
[*Note [2](#note-2)*:
This definition ensures that S is consistent with
the modification order of any atomic object M[.](#5.sentence-1)
It also ensures that
a memory_order::seq_cst load A of M gets its value either from the last modification of M that precedes A in S or
from some non-memory_order::seq_cst modification of M that does not happen before any modification of M that precedes A in S[.](#5.sentence-2)
— *end note*]
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L2998)
[*Note [3](#note-3)*:
We do not require that S be consistent with
“happens before” ([[intro.races]](intro.races "6.10.2.2Data races"))[.](#6.sentence-1)
This allows more efficient implementation
of memory_order::acquire and memory_order::release on some machine architectures[.](#6.sentence-2)
It can produce surprising results
when these are mixed with memory_order::seq_cst accesses[.](#6.sentence-3)
— *end note*]
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3009)
[*Note [4](#note-4)*:
memory_order::seq_cst ensures sequential consistency only
for a program that is free of data races and
uses exclusively memory_order::seq_cst atomic operations[.](#7.sentence-1)
Any use of weaker ordering will invalidate this guarantee
unless extreme care is used[.](#7.sentence-2)
In many cases, memory_order::seq_cst atomic operations are reorderable
with respect to other atomic operations performed by the same thread[.](#7.sentence-3)
— *end note*]
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3020)
Implementations should ensure that no “out-of-thin-air” values are computed that
circularly depend on their own computation[.](#8.sentence-1)
[*Note [5](#note-5)*:
For example, with x and y initially zero,
`// Thread 1:
r1 = y.load(memory_order::relaxed);
x.store(r1, memory_order::relaxed);
// Thread 2:
r2 = x.load(memory_order::relaxed);
y.store(r2, memory_order::relaxed);
`
this recommendation discourages producing r1 == r2 == 42, since the store of 42 to y is only
possible if the store to x stores 42, which circularly depends on the
store to y storing 42[.](#8.sentence-3)
Note that without this restriction, such an
execution is possible[.](#8.sentence-4)
— *end note*]
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3043)
[*Note [6](#note-6)*:
The recommendation similarly disallows r1 == r2 == 42 in the
following example, with x and y again initially zero:
`// Thread 1:
r1 = x.load(memory_order::relaxed);
if (r1 == 42) y.store(42, memory_order::relaxed);
// Thread 2:
r2 = y.load(memory_order::relaxed);
if (r2 == 42) x.store(42, memory_order::relaxed);
`
— *end note*]
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3061)
Atomic read-modify-write operations shall always read the last value
(in the modification order) written before the write associated with
the read-modify-write operation[.](#10.sentence-1)
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3066)
An [*atomic modify-write operation*](#def:modify-write_operation,atomic "32.5.4Order and consistency[atomics.order]") is
an atomic read-modify-write operation
with weaker synchronization requirements as specified in [[atomics.fences]](atomics.fences "32.5.11Fences")[.](#11.sentence-1)
[*Note [7](#note-7)*:
The intent is for atomic modify-write operations
to be implemented using mechanisms that are not ordered, in hardware,
by the implementation of acquire fences[.](#11.sentence-2)
No other semantic or hardware property
(e.g., that the mechanism is a far atomic operation) is implied[.](#11.sentence-3)
— *end note*]
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3078)
*Recommended practice*: The implementation should make atomic stores visible to atomic loads,
and atomic loads should observe atomic stores,
within a reasonable amount of time[.](#12.sentence-1)

[atomics.ref.float]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#ref.float)
### 32.5.7 Class template atomic_ref [[atomics.ref.generic]](atomics.ref.generic#atomics.ref.float)
#### 32.5.7.4 Specializations for floating-point types [atomics.ref.float]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3884)
There are specializations of the atomic_ref class template
for all floating-point types[.](#1.sentence-1)
For each such type *floating-point-type*,
the specialization atomic_ref<*floating-point-type*> provides
additional atomic operations appropriate to floating-point types[.](#1.sentence-2)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3892)
The program is ill-formed
if is_always_lock_free is false and is_volatile_v<*floating-point-type*> is true[.](#2.sentence-1)
`namespace std {
  template<> struct atomic_ref<*floating-point-type*> {
  private:
    *floating-point-type** ptr;        // *exposition only*
  public:
    using value_type = remove_cv_t<*floating-point-type*>;
    using difference_type = value_type;

    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(*floating-point-type*&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr value_type fetch_add(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_sub(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_max(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_min(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fmaximum(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fminimum(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fmaximum_num(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_fminimum_num(value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr void store_add(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fmaximum(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fminimum(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fmaximum_num(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_fminimum_num(value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr value_type operator+=(value_type) const noexcept;
    constexpr value_type operator-=(value_type) const noexcept;

    constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;

    constexpr *floating-point-type** address() const noexcept;
  };
}
`
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3975)
Descriptions are provided below only for members
that differ from the primary template[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3979)
The following operations perform arithmetic computations[.](#4.sentence-1)
The correspondence among key, operator, and computation is specified
in Table [155](atomics.types.int#tab:atomic.types.int.comp "Table 155: Atomic arithmetic computations"),
except for the keys max, min, fmaximum, fminimum, fmaximum_num, and fminimum_num,
which are specified below[.](#4.sentence-2)
[🔗](#lib:fetch_add,atomic_ref%3cfloating-point-type%3e)
`constexpr value_type fetch_key(value_type operand,
memory_order order = memory_order::seq_cst) const noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4006)
*Constraints*: is_const_v<*floating-point-type*> is false[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4010)
*Effects*: Atomically replaces the value referenced by *ptr with
the result of the computation applied to the value referenced by *ptr and the given operand[.](#6.sentence-1)
Memory is affected according to the value of order[.](#6.sentence-2)
These operations are atomic read-modify-write operations ([[intro.races]](intro.races "6.10.2.2Data races"))[.](#6.sentence-3)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4018)
*Returns*: Atomically, the value referenced by *ptr immediately before the effects[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4023)
*Remarks*: If the result is not a representable value for its type ([[expr.pre]](expr.pre "7.1Preamble")),
the result is unspecified,
but the operations otherwise have no undefined behavior[.](#8.sentence-1)
Atomic arithmetic operations on *floating-point-type* should conform to
the std::numeric_limits<value_type> traits
associated with the floating-point type ([[limits.syn]](limits.syn "17.3.3Header <limits> synopsis"))[.](#8.sentence-2)
The floating-point environment ([[cfenv]](cfenv "29.3The floating-point environment"))
for atomic arithmetic operations on *floating-point-type* may be different than the calling thread's floating-point environment[.](#8.sentence-3)
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4035)
- [(9.1)](#9.1)
For fetch_fmaximum and fetch_fminimum,
the maximum and minimum computation is performed
as if by fmaximum and fminimum, respectively,
with *ptr and the first parameter as the arguments[.](#9.1.sentence-1)
- [(9.2)](#9.2)
For fetch_fmaximum_num and fetch_fminimum_num,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with *ptr and the first parameter as the arguments[.](#9.2.sentence-1)
- [(9.3)](#9.3)
For fetch_max and fetch_min,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with *ptr and the first parameter as the arguments, except that:
* [(9.3.1)](#9.3.1)
If both arguments are NaN, an unspecified NaN value is stored at *ptr[.](#9.3.1.sentence-1)
* [(9.3.2)](#9.3.2)
If exactly one argument is a NaN,
either the other argument or an unspecified NaN value is stored at *ptr;
it is unspecified which[.](#9.3.2.sentence-1)
* [(9.3.3)](#9.3.3)
If the arguments are differently signed zeros,
which of these values is stored at *ptr is unspecified[.](#9.3.3.sentence-1)
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4065)
*Recommended practice*: The implementation of fetch_max and fetch_min should treat negative zero as smaller than positive zero[.](#10.sentence-1)
[🔗](#lib:store_add,atomic_ref%3cfloating-point-type%3e)
`constexpr void store_key(value_type operand,
memory_order order = memory_order::seq_cst) const noexcept;
`
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4085)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#11.sentence-1)
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4091)
*Effects*: Atomically replaces the value referenced by *ptr with the result of the computation applied to
the value referenced by *ptr and the given operand[.](#12.sentence-1)
Memory is affected according to the value of order[.](#12.sentence-2)
These operations are atomic modify-write operations ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#12.sentence-3)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4099)
*Remarks*: If the result is not a representable value for its type ([[expr.pre]](expr.pre "7.1Preamble")),
the result is unspecified,
but the operations otherwise have no undefined behavior[.](#13.sentence-1)
Atomic arithmetic operations on *floating-point-type* should conform to the numeric_limits<*floating-point-type*> traits associated with the floating-point type ([[limits.syn]](limits.syn "17.3.3Header <limits> synopsis"))[.](#13.sentence-2)
The floating-point environment ([[cfenv]](cfenv "29.3The floating-point environment"))
for atomic arithmetic operations on *floating-point-type* may be different than the calling thread's floating-point environment[.](#13.sentence-3)
The arithmetic rules of floating-point atomic modify-write operations
may be different from operations on floating-point types or
atomic floating-point types[.](#13.sentence-4)
[*Note [1](#note-1)*:
Tree reductions are permitted for atomic modify-write operations[.](#13.sentence-5)
— *end note*]
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4117)
- [(14.1)](#14.1)
For store_fmaximum and store_fminimum,
the maximum and minimum computation is performed
as if by fmaximum and fminimum, respectively,
with *ptr and the first parameter as the arguments[.](#14.1.sentence-1)
- [(14.2)](#14.2)
For store_fmaximum_num and store_fminimum_num,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with *ptr and the first parameter as the arguments[.](#14.2.sentence-1)
- [(14.3)](#14.3)
For store_max and store_min,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with *ptr and the first parameter as the arguments, except that:
* [(14.3.1)](#14.3.1)
If both arguments are NaN, an unspecified NaN value is stored at *ptr[.](#14.3.1.sentence-1)
* [(14.3.2)](#14.3.2)
If exactly one argument is a NaN,
either the other argument or an unspecified NaN value is stored at *ptr;
it is unspecified which[.](#14.3.2.sentence-1)
* [(14.3.3)](#14.3.3)
If the arguments are differently signed zeros,
which of these values is stored at *ptr is unspecified[.](#14.3.3.sentence-1)
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4147)
*Recommended practice*: The implementation of store_max and store_min should treat negative zero as smaller than positive zero[.](#15.sentence-1)
[🔗](#lib:operator+=,atomic_ref%3cfloating-point-type%3e)
`constexpr value_type operator op=(value_type operand) const noexcept;
`
[16](#16)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4160)
*Constraints*: is_const_v<*floating-point-type*> is false[.](#16.sentence-1)
[17](#17)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4164)
*Effects*: Equivalent to: return fetch_*key*(operand) *op* operand;

File diff suppressed because it is too large.

[atomics.ref.generic.general]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#ref.generic.general)
### 32.5.7 Class template atomic_ref [[atomics.ref.generic]](atomics.ref.generic#general)
#### 32.5.7.1 General [atomics.ref.generic.general]
[🔗](#lib:atomic_ref)
`namespace std {
  template<class T> struct atomic_ref {
  private:
    T* ptr;        // *exposition only*
  public:
    using value_type = remove_cv_t<T>;
    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(T&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;

    constexpr T* address() const noexcept;
  };
}
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3240)
An atomic_ref object applies atomic operations ([[atomics.general]](atomics.general "32.5.1General")) to
the object referenced by *ptr such that,
for the lifetime ([[basic.life]](basic.life "6.8.4Lifetime")) of the atomic_ref object,
the object referenced by *ptr is an atomic object ([[intro.races]](intro.races "6.10.2.2Data races"))[.](#1.sentence-1)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3246)
The program is ill-formed if is_trivially_copyable_v<T> is false[.](#2.sentence-1)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3249)
The lifetime ([[basic.life]](basic.life "6.8.4Lifetime")) of an object referenced by *ptr shall exceed the lifetime of all atomic_refs that reference the object[.](#3.sentence-1)
While any atomic_ref instances exist
that reference the *ptr object,
all accesses to that object shall exclusively occur
through those atomic_ref instances[.](#3.sentence-2)
No subobject of the object referenced by atomic_ref shall be concurrently referenced by any other atomic_ref object[.](#3.sentence-3)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3259)
Atomic operations applied to an object
through a referencing atomic_ref are atomic with respect to
atomic operations applied through any other atomic_ref referencing the same object[.](#4.sentence-1)
[*Note [1](#note-1)*:
Atomic operations or the atomic_ref constructor can acquire
a shared resource, such as a lock associated with the referenced object,
to enable atomic operations to be applied to the referenced object[.](#4.sentence-2)
— *end note*]
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3270)
The program is ill-formed
if is_always_lock_free is false and is_volatile_v<T> is true[.](#5.sentence-1)
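
For illustration, a sketch of applying atomic operations to an ordinary object through atomic_ref while respecting the access rules above (the counting loop is illustrative):

```cpp
#include <atomic>
#include <thread>
#include <vector>

int main() {
  // Meets the alignment precondition of the atomic_ref constructor.
  alignas(std::atomic_ref<int>::required_alignment) int total = 0;

  std::vector<std::thread> threads;
  for (int i = 0; i != 4; ++i) {
    threads.emplace_back([&total] {
      std::atomic_ref<int> ref(total);  // while any atomic_ref to total exists,
      for (int j = 0; j != 1000; ++j)   // all accesses must go through atomic_ref
        ref.fetch_add(1, std::memory_order::relaxed);
    });
  }
  for (auto& t : threads) t.join();
  // total == 4000; direct access is fine again once no atomic_ref references it.
}
```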

cppdraft/atomics/ref/int.md (new file, 186 lines)

[atomics.ref.int]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#ref.int)
### 32.5.7 Class template atomic_ref [[atomics.ref.generic]](atomics.ref.generic#atomics.ref.int)
#### 32.5.7.3 Specializations for integral types [atomics.ref.int]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3662)
There are specializations of the atomic_ref class template
for all integral types except cv bool[.](#1.sentence-1)
For each such type *integral-type*,
the specialization atomic_ref<*integral-type*> provides
additional atomic operations appropriate to integral types[.](#1.sentence-2)
[*Note [1](#note-1)*:
The specialization atomic_ref<bool> uses the primary template ([[atomics.ref.generic]](atomics.ref.generic "32.5.7Class template atomic_ref"))[.](#1.sentence-3)
— *end note*]
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3674)
The program is ill-formed
if is_always_lock_free is false and is_volatile_v<*integral-type*> is true[.](#2.sentence-1)
`namespace std {
  template<> struct atomic_ref<*integral-type*> {
  private:
    *integral-type** ptr;        // *exposition only*
  public:
    using value_type = remove_cv_t<*integral-type*>;
    using difference_type = value_type;

    static constexpr size_t required_alignment = *implementation-defined*;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const noexcept;

    constexpr explicit atomic_ref(*integral-type*&);
    constexpr atomic_ref(const atomic_ref&) noexcept;
    atomic_ref& operator=(const atomic_ref&) = delete;

    constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type operator=(value_type) const noexcept;
    constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept;
    constexpr operator value_type() const noexcept;
    constexpr value_type exchange(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type, memory_order, memory_order) const noexcept;
    constexpr bool compare_exchange_weak(value_type&, value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr bool compare_exchange_strong(value_type&, value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr value_type fetch_add(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_sub(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_and(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_or(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_xor(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_max(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr value_type fetch_min(value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr void store_add(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_sub(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_and(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_or(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_xor(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_max(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void store_min(value_type, memory_order = memory_order::seq_cst) const noexcept;

    constexpr value_type operator++(int) const noexcept;
    constexpr value_type operator--(int) const noexcept;
    constexpr value_type operator++() const noexcept;
    constexpr value_type operator--() const noexcept;
    constexpr value_type operator+=(value_type) const noexcept;
    constexpr value_type operator-=(value_type) const noexcept;
    constexpr value_type operator&=(value_type) const noexcept;
    constexpr value_type operator|=(value_type) const noexcept;
    constexpr value_type operator^=(value_type) const noexcept;

    constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept;
    constexpr void notify_one() const noexcept;
    constexpr void notify_all() const noexcept;

    constexpr *integral-type** address() const noexcept;
  };
}
`
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3761)
Descriptions are provided below only for members
that differ from the primary template[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3765)
The following operations perform arithmetic computations[.](#4.sentence-1)
The correspondence among key, operator, and computation is specified
in Table [155](atomics.types.int#tab:atomic.types.int.comp "Table 155: Atomic arithmetic computations")[.](#4.sentence-2)
[🔗](#lib:fetch_add,atomic_ref%3cintegral-type%3e)
`constexpr value_type fetch_key(value_type operand,
memory_order order = memory_order::seq_cst) const noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3783)
*Constraints*: is_const_v<*integral-type*> is false[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3787)
*Effects*: Atomically replaces the value referenced by *ptr with
the result of the computation applied to the value referenced by *ptr and the given operand[.](#6.sentence-1)
Memory is affected according to the value of order[.](#6.sentence-2)
These operations are atomic read-modify-write operations ([[intro.races]](intro.races "6.10.2.2Data races"))[.](#6.sentence-3)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3795)
*Returns*: Atomically, the value referenced by *ptr immediately before the effects[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3800)
*Remarks*: Except for fetch_max and fetch_min, for signed integer types
the result is as if the object value and parameters
were converted to their corresponding unsigned types,
the computation performed on those types, and
the result converted back to the signed type[.](#8.sentence-1)
[*Note [2](#note-2)*:
There are no undefined results arising from the computation[.](#8.sentence-2)
— *end note*]
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3812)
For fetch_max and fetch_min, the maximum and minimum
computation is performed as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively,
with the object value and the first parameter as the arguments[.](#9.sentence-1)
[🔗](#lib:store_add,atomic_ref%3cintegral-type%3e)
`constexpr void store_key(value_type operand,
memory_order order = memory_order::seq_cst) const noexcept;
`
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3831)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#10.sentence-1)
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3837)
*Effects*: Atomically replaces the value referenced by *ptr with the result of the computation applied to
the value referenced by *ptr and the given operand[.](#11.sentence-1)
Memory is affected according to the value of order[.](#11.sentence-2)
These operations are atomic modify-write operations ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#11.sentence-3)
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3845)
*Remarks*: Except for store_max and store_min,
for signed integer types,
the result is as if *ptr and parameters
were converted to their corresponding unsigned types,
the computation performed on those types, and
the result converted back to the signed type[.](#12.sentence-1)
[*Note [3](#note-3)*:
There are no undefined results arising from the computation[.](#12.sentence-2)
— *end note*]
For store_max and store_min,
the maximum and minimum computation is performed
as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively,
with *ptr and the first parameter as the arguments[.](#12.sentence-3)
[🔗](#lib:operator+=,atomic_ref%3cintegral-type%3e)
`constexpr value_type operator op=(value_type operand) const noexcept;
`
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3872)
*Constraints*: is_const_v<*integral-type*> is false[.](#13.sentence-1)
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3876)
*Effects*: Equivalent to: return fetch_*key*(operand) *op* operand;

[atomics.ref.memop]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#ref.memop)
### 32.5.7 Class template atomic_ref [[atomics.ref.generic]](atomics.ref.generic#atomics.ref.memop)
#### 32.5.7.6 Member operators common to integers and pointers to objects [atomics.ref.memop]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4373)
Let *referred-type* be *pointer-type* for the specializations in [[atomics.ref.pointer]](atomics.ref.pointer "32.5.7.5Specialization for pointers") and
be *integral-type* for the specializations in [[atomics.ref.int]](atomics.ref.int "32.5.7.3Specializations for integral types")[.](#1.sentence-1)
[🔗](#lib:operator++,atomic_ref%3cpointer-type%3e)
`constexpr value_type operator++(int) const noexcept;
`
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4388)
*Constraints*: is_const_v<*referred-type*> is false[.](#2.sentence-1)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4392)
*Effects*: Equivalent to: return fetch_add(1);
[🔗](#lib:operator--,atomic_ref%3cpointer-type%3e)
`constexpr value_type operator--(int) const noexcept;
`
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4404)
*Constraints*: is_const_v<*referred-type*> is false[.](#4.sentence-1)
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4408)
*Effects*: Equivalent to: return fetch_sub(1);
[🔗](#lib:operator++,atomic_ref%3cpointer-type%3e_)
`constexpr value_type operator++() const noexcept;
`
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4420)
*Constraints*: is_const_v<*referred-type*> is false[.](#6.sentence-1)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4424)
*Effects*: Equivalent to: return fetch_add(1) + 1;
[🔗](#lib:operator--,atomic_ref%3cpointer-type%3e_)
`constexpr value_type operator--() const noexcept;
`
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4436)
*Constraints*: is_const_v<*referred-type*> is false[.](#8.sentence-1)
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4440)
*Effects*: Equivalent to: return fetch_sub(1) - 1;

cppdraft/atomics/ref/ops.md (new file, 392 lines)

[atomics.ref.ops]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#ref.ops)
### 32.5.7 Class template atomic_ref [[atomics.ref.generic]](atomics.ref.generic#atomics.ref.ops)
#### 32.5.7.2 Operations [atomics.ref.ops]
[🔗](#lib:required_alignment,atomic_ref)
`static constexpr size_t required_alignment;
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3286)
The alignment required for an object to be referenced by an atomic reference,
which is at least alignof(T)[.](#1.sentence-1)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3290)
[*Note [1](#note-1)*:
Hardware could require an object
referenced by an atomic_ref to have stricter alignment ([[basic.align]](basic.align "6.8.3Alignment"))
than other objects of type T[.](#2.sentence-1)
Further, whether operations on an atomic_ref are lock-free could depend on the alignment of the referenced object[.](#2.sentence-2)
For example, lock-free operations on std::complex<double> could be supported only if aligned to 2*alignof(double)[.](#2.sentence-3)
— *end note*]
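One way a program might honor required_alignment, shown as a hedged sketch (the `Counters` type and the `bump` function are illustrative only):

```cpp
#include <atomic>

struct Counters {
    // Guarantee the alignment atomic_ref<long> needs, which may be
    // stricter than alignof(long) on some targets.
    alignas(std::atomic_ref<long>::required_alignment) long hits = 0;
};

void bump(Counters& c) {
    std::atomic_ref<long>(c.hits).fetch_add(1, std::memory_order_relaxed);
}
```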
[🔗](#lib:is_always_lock_free,atomic_ref)
`static constexpr bool is_always_lock_free;
`
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3312)
The static data member is_always_lock_free is true if the atomic_ref type's operations are always lock-free,
and false otherwise[.](#3.sentence-1)
[🔗](#lib:is_lock_free,atomic_ref)
`bool is_lock_free() const noexcept;
`
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3327)
*Returns*: true if operations on all objects of the type atomic_ref<T> are lock-free, false otherwise[.](#4.sentence-1)
[🔗](#lib:atomic_ref,constructor)
`constexpr atomic_ref(T& obj);
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3343)
*Preconditions*: The referenced object is aligned to required_alignment[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3347)
*Postconditions*: *this references obj[.](#6.sentence-1)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3351)
*Throws*: Nothing[.](#7.sentence-1)
[🔗](#lib:atomic_ref,constructor_)
`constexpr atomic_ref(const atomic_ref& ref) noexcept;
`
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3365)
*Postconditions*: *this references the object referenced by ref[.](#8.sentence-1)
[🔗](#lib:store,atomic_ref)
`constexpr void store(value_type desired,
memory_order order = memory_order::seq_cst) const noexcept;
`
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3380)
*Constraints*: is_const_v<T> is false[.](#9.sentence-1)
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3384)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#10.sentence-1)
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3391)
*Effects*: Atomically replaces the value referenced by *ptr with the value of desired[.](#11.sentence-1)
Memory is affected according to the value of order[.](#11.sentence-2)
[🔗](#lib:operator=,atomic_ref)
`constexpr value_type operator=(value_type desired) const noexcept;
`
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3407)
*Constraints*: is_const_v<T> is false[.](#12.sentence-1)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3411)
*Effects*: Equivalent to: store(desired); return desired;
[🔗](#lib:load,atomic_ref)
`constexpr value_type load(memory_order order = memory_order::seq_cst) const noexcept;
`
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3429)
*Preconditions*: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#14.sentence-1)
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3436)
*Effects*: Memory is affected according to the value of order[.](#15.sentence-1)
[16](#16)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3440)
*Returns*: Atomically returns the value referenced by *ptr[.](#16.sentence-1)
[🔗](#lib:operator_type,atomic_ref)
`constexpr operator value_type() const noexcept;
`
[17](#17)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3454)
*Effects*: Equivalent to: return load();
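A minimal sketch (not part of the wording) of store, load, and the conversion operator used for a release/acquire handoff between two threads; `producer`, `consumer`, `payload`, and `ready` are invented names, and every access to `ready` is assumed to go through atomic_ref:

```cpp
#include <atomic>

void producer(int& payload, bool& ready) {
    payload = 42;                                            // ordinary write
    std::atomic_ref<bool>(ready).store(true, std::memory_order_release);
}

int consumer(int& payload, bool& ready) {
    while (!std::atomic_ref<bool>(ready).load(std::memory_order_acquire)) {
        // spin until the release store becomes visible
    }
    return payload;   // the acquire load orders this read after the write of 42
}
```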
[🔗](#lib:exchange,atomic_ref)
`constexpr value_type exchange(value_type desired,
memory_order order = memory_order::seq_cst) const noexcept;
`
[18](#18)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3469)
*Constraints*: is_const_v<T> is false[.](#18.sentence-1)
[19](#19)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3473)
*Effects*: Atomically replaces the value referenced by *ptr with desired[.](#19.sentence-1)
Memory is affected according to the value of order[.](#19.sentence-2)
This operation is an atomic read-modify-write operation ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#19.sentence-3)
[20](#20)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3480)
*Returns*: Atomically returns the value referenced by *ptr immediately before the effects[.](#20.sentence-1)
[🔗](#lib:compare_exchange_weak,atomic_ref)
`constexpr bool compare_exchange_weak(value_type& expected, value_type desired,
memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_strong(value_type& expected, value_type desired,
memory_order success, memory_order failure) const noexcept;
constexpr bool compare_exchange_weak(value_type& expected, value_type desired,
memory_order order = memory_order::seq_cst) const noexcept;
constexpr bool compare_exchange_strong(value_type& expected, value_type desired,
memory_order order = memory_order::seq_cst) const noexcept;
`
[21](#21)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3509)
*Constraints*: is_const_v<T> is false[.](#21.sentence-1)
[22](#22)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3513)
*Preconditions*: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#22.sentence-1)
[23](#23)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3520)
*Effects*: Retrieves the value in expected[.](#23.sentence-1)
It then atomically compares the value representation of
the value referenced by *ptr for equality
with that previously retrieved from expected,
and if true, replaces the value referenced by *ptr with that in desired[.](#23.sentence-2)
If and only if the comparison is true,
memory is affected according to the value of success, and
if the comparison is false,
memory is affected according to the value of failure[.](#23.sentence-3)
When only one memory_order argument is supplied,
the value of success is order, and
the value of failure is order except that a value of memory_order::acq_rel shall be replaced by
the value memory_order::acquire and
a value of memory_order::release shall be replaced by
the value memory_order::relaxed[.](#23.sentence-4)
If and only if the comparison is false then,
after the atomic operation,
the value in expected is replaced by
the value read from the value referenced by *ptr during the atomic comparison[.](#23.sentence-5)
If the operation returns true,
these operations are atomic read-modify-write operations ([[intro.races]](intro.races "6.10.2.2Data races"))
on the value referenced by *ptr[.](#23.sentence-6)
Otherwise, these operations are atomic load operations on that memory[.](#23.sentence-7)
[24](#24)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3549)
*Returns*: The result of the comparison[.](#24.sentence-1)
[25](#25)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3553)
*Remarks*: A weak compare-and-exchange operation may fail spuriously[.](#25.sentence-1)
That is, even when the contents of memory referred to
by expected and ptr are equal,
it may return false and
store back to expected the same memory contents
that were originally there[.](#25.sentence-2)
[*Note [2](#note-2)*:
This spurious failure enables implementation of compare-and-exchange
on a broader class of machines, e.g., load-locked store-conditional machines[.](#25.sentence-3)
A consequence of spurious failure is
that nearly all uses of weak compare-and-exchange will be in a loop[.](#25.sentence-4)
When a compare-and-exchange is in a loop,
the weak version will yield better performance on some platforms[.](#25.sentence-5)
When a weak compare-and-exchange would require a loop and
a strong one would not, the strong one is preferable[.](#25.sentence-6)
— *end note*]
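The note's point about retry loops can be illustrated with a sketch (not standard text) that uses compare_exchange_weak to apply an operation that has no dedicated fetch_ function; `fetch_multiply` is an invented name:

```cpp
#include <atomic>

// Atomically multiplies a plain unsigned value through atomic_ref and
// returns the value observed immediately before the successful update.
unsigned fetch_multiply(unsigned& value, unsigned factor) {
    std::atomic_ref<unsigned> ref(value);
    unsigned expected = ref.load(std::memory_order_relaxed);
    while (!ref.compare_exchange_weak(expected, expected * factor,
                                      std::memory_order_acq_rel,
                                      std::memory_order_relaxed)) {
        // On failure (including spurious failure), expected has been
        // refreshed with the current value; recompute and retry.
    }
    return expected;
}
```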
[🔗](#lib:wait,atomic_ref%3cT%3e)
`constexpr void wait(value_type old, memory_order order = memory_order::seq_cst) const noexcept;
`
[26](#26)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3579)
*Preconditions*: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#26.sentence-1)
[27](#27)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3586)
*Effects*: Repeatedly performs the following steps, in order:
- [(27.1)](#27.1)
Evaluates load(order) and
compares its value representation for equality against that of old[.](#27.1.sentence-1)
- [(27.2)](#27.2)
If they compare unequal, returns[.](#27.2.sentence-1)
- [(27.3)](#27.3)
Blocks until it
is unblocked by an atomic notifying operation or is unblocked spuriously[.](#27.3.sentence-1)
[28](#28)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3600)
*Remarks*: This function is an atomic waiting operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))
on atomic object *ptr[.](#28.sentence-1)
[🔗](#lib:notify_one,atomic_ref%3cT%3e)
`constexpr void notify_one() const noexcept;
`
[29](#29)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3612)
*Constraints*: is_const_v<T> is false[.](#29.sentence-1)
[30](#30)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3616)
*Effects*: Unblocks the execution of at least one atomic waiting operation on *ptr that is eligible to be unblocked ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying")) by this call,
if any such atomic waiting operations exist[.](#30.sentence-1)
[31](#31)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3622)
*Remarks*: This function is an atomic notifying operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))
on atomic object *ptr[.](#31.sentence-1)
[🔗](#lib:notify_all,atomic_ref%3cT%3e)
`constexpr void notify_all() const noexcept;
`
[32](#32)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3634)
*Constraints*: is_const_v<T> is false[.](#32.sentence-1)
[33](#33)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3638)
*Effects*: Unblocks the execution of all atomic waiting operations on *ptr that are eligible to be unblocked ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying")) by this call[.](#33.sentence-1)
[34](#34)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3643)
*Remarks*: This function is an atomic notifying operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))
on atomic object *ptr[.](#34.sentence-1)
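A hedged sketch (not part of the wording) of wait paired with notify_one as a one-shot flag; `wait_for_go`, `signal_go`, and `flag` are invented names, and all accesses to `flag` are assumed to go through atomic_ref:

```cpp
#include <atomic>

void wait_for_go(int& flag) {
    std::atomic_ref<int> ref(flag);
    ref.wait(0, std::memory_order_acquire);   // blocks while the value equals 0
}

void signal_go(int& flag) {
    std::atomic_ref<int> ref(flag);
    ref.store(1, std::memory_order_release);
    ref.notify_one();                         // unblocks a waiter, if any
}
```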
[🔗](#lib:address,atomic_ref%3cT%3e)
`constexpr T* address() const noexcept;
`
[35](#35)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3655)
*Returns*: ptr[.](#35.sentence-1)

View File

@@ -0,0 +1,183 @@
[atomics.ref.pointer]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#ref.pointer)
### 32.5.7 Class template atomic_ref [[atomics.ref.generic]](atomics.ref.generic#atomics.ref.pointer)
#### 32.5.7.5 Specialization for pointers [atomics.ref.pointer]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4173)
There are specializations of the atomic_ref class template
for all pointer-to-object types[.](#1.sentence-1)
For each such type *pointer-type*,
the specialization atomic_ref<*pointer-type*> provides
additional atomic operations appropriate to pointer types[.](#1.sentence-2)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4180)
The program is ill-formed
if is_always_lock_free is false and is_volatile_v<*pointer-type*> is true[.](#2.sentence-1)
namespace std {template<> struct atomic_ref<*pointer-type*> {private:*pointer-type** ptr; // *exposition only*public:using value_type = remove_cv_t<*pointer-type*>; using difference_type = ptrdiff_t; static constexpr size_t required_alignment = *implementation-defined*; static constexpr bool is_always_lock_free = *implementation-defined*; bool is_lock_free() const noexcept; constexpr explicit atomic_ref(*pointer-type*&); constexpr atomic_ref(const atomic_ref&) noexcept;
atomic_ref& operator=(const atomic_ref&) = delete; constexpr void store(value_type, memory_order = memory_order::seq_cst) const noexcept; constexpr value_type operator=(value_type) const noexcept; constexpr value_type load(memory_order = memory_order::seq_cst) const noexcept; constexpr operator value_type() const noexcept; constexpr value_type exchange(value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr bool compare_exchange_weak(value_type&, value_type,
memory_order, memory_order) const noexcept; constexpr bool compare_exchange_strong(value_type&, value_type,
memory_order, memory_order) const noexcept; constexpr bool compare_exchange_weak(value_type&, value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr bool compare_exchange_strong(value_type&, value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr value_type fetch_add(difference_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr value_type fetch_sub(difference_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr value_type fetch_max(value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr value_type fetch_min(value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr void store_add(difference_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr void store_sub(difference_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr void store_max(value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr void store_min(value_type,
memory_order = memory_order::seq_cst) const noexcept; constexpr value_type operator++(int) const noexcept; constexpr value_type operator--(int) const noexcept; constexpr value_type operator++() const noexcept; constexpr value_type operator--() const noexcept; constexpr value_type operator+=(difference_type) const noexcept; constexpr value_type operator-=(difference_type) const noexcept; constexpr void wait(value_type, memory_order = memory_order::seq_cst) const noexcept; constexpr void notify_one() const noexcept; constexpr void notify_all() const noexcept; constexpr *pointer-type** address() const noexcept; };}
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4252)
Descriptions are provided below only for members
that differ from the primary template[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4256)
The following operations perform arithmetic computations[.](#4.sentence-1)
The correspondence among key, operator, and computation is specified
in Table [156](atomics.types.pointer#tab:atomic.types.pointer.comp "Table 156: Atomic pointer computations")[.](#4.sentence-2)
[🔗](#lib:fetch_add,atomic_ref%3cpointer-type%3e)
`constexpr value_type fetch_key(difference_type operand,
memory_order order = memory_order::seq_cst) const noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4271)
*Constraints*: is_const_v<*pointer-type*> is false[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4275)
*Mandates*: remove_pointer_t<*pointer-type*> is a complete object type[.](#6.sentence-1)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4279)
*Effects*: Atomically replaces the value referenced by *ptr with
the result of the computation applied to the value referenced by *ptr and the given operand[.](#7.sentence-1)
Memory is affected according to the value of order[.](#7.sentence-2)
These operations are atomic read-modify-write operations ([[intro.races]](intro.races "6.10.2.2Data races"))[.](#7.sentence-3)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4287)
*Returns*: Atomically, the value referenced by *ptr immediately before the effects[.](#8.sentence-1)
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4292)
*Remarks*: The result may be an undefined address,
but the operations otherwise have no undefined behavior[.](#9.sentence-1)
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4297)
For fetch_max and fetch_min, the maximum and minimum
computation is performed as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively, with the object value and the first
parameter as the arguments[.](#10.sentence-1)
[*Note [1](#note-1)*:
If the pointers point to different complete objects (or subobjects thereof),
the < operator does not establish a strict weak ordering
(Table [29](utility.arg.requirements#tab:cpp17.lessthancomparable "Table 29: Cpp17LessThanComparable requirements"), [[expr.rel]](expr.rel "7.6.9Relational operators"))[.](#10.sentence-2)
— *end note*]
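As a sketch (not standard text) of the pointer arithmetic described above, fetch_add can be used to claim slots from a shared buffer through a plain pointer; `claim` and `cursor` are invented names, and the example assumes the cursor stays inside the underlying array so the result is not an undefined address:

```cpp
#include <atomic>
#include <cstddef>

// Atomically advances a shared cursor by n bytes and returns the start
// of the claimed region (the cursor's value immediately before the update).
char* claim(char*& cursor, std::size_t n) {
    std::atomic_ref<char*> ref(cursor);
    return ref.fetch_add(static_cast<std::ptrdiff_t>(n),
                         std::memory_order_relaxed);
}
```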
[🔗](#lib:store_add,atomic_ref%3cpointer-type%3e)
`constexpr void store_key(see above operand,
memory_order order = memory_order::seq_cst) const noexcept;
`
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4320)
*Mandates*: T is a complete object type[.](#11.sentence-1)
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4324)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#12.sentence-1)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4330)
*Effects*: Atomically replaces the value referenced by *ptr with the result of the computation applied to
the value referenced by *ptr and the given operand[.](#13.sentence-1)
Memory is affected according to the value of order[.](#13.sentence-2)
These operations are atomic modify-write operations ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#13.sentence-3)
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4338)
*Remarks*: The result may be an undefined address,
but the operations otherwise have no undefined behavior[.](#14.sentence-1)
For store_max and store_min,
the maximum and minimum computation is performed
as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively,
with *ptr and the first parameter as the arguments[.](#14.sentence-2)
[*Note [2](#note-2)*:
If the pointers point to different complete objects (or subobjects thereof),
the < operator does not establish
a strict weak ordering (Table [29](utility.arg.requirements#tab:cpp17.lessthancomparable "Table 29: Cpp17LessThanComparable requirements"), [[expr.rel]](expr.rel "7.6.9Relational operators"))[.](#14.sentence-3)
— *end note*]
[🔗](#lib:operator+=,atomic_ref%3cpointer-type%3e)
`constexpr value_type operator op=(difference_type operand) const noexcept;
`
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4360)
*Constraints*: is_const_v<*pointer-type*> is false[.](#15.sentence-1)
[16](#16)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4364)
*Effects*: Equivalent to: return fetch_*key*(operand) *op* operand;

39
cppdraft/atomics/syn.md Normal file

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,323 @@
[atomics.types.float]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#types.float)
### 32.5.8 Class template atomic [[atomics.types.generic]](atomics.types.generic#atomics.types.float)
#### 32.5.8.4 Specializations for floating-point types [atomics.types.float]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5371)
There are specializations of the atomic class template for all cv-unqualified floating-point types[.](#1.sentence-1)
For each such type *floating-point-type*,
the specialization atomic<*floating-point-type*> provides additional atomic operations appropriate to floating-point types[.](#1.sentence-2)
namespace std {template<> struct atomic<*floating-point-type*> {using value_type = *floating-point-type*; using difference_type = value_type; static constexpr bool is_always_lock_free = *implementation-defined*; bool is_lock_free() const volatile noexcept; bool is_lock_free() const noexcept; constexpr atomic() noexcept; constexpr atomic(*floating-point-type*) noexcept;
atomic(const atomic&) = delete;
atomic& operator=(const atomic&) = delete;
atomic& operator=(const atomic&) volatile = delete; void store(*floating-point-type*, memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store(*floating-point-type*, memory_order = memory_order::seq_cst) noexcept; *floating-point-type* operator=(*floating-point-type*) volatile noexcept; constexpr *floating-point-type* operator=(*floating-point-type*) noexcept; *floating-point-type* load(memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* load(memory_order = memory_order::seq_cst) noexcept; operator *floating-point-type*() volatile noexcept; constexpr operator *floating-point-type*() noexcept; *floating-point-type* exchange(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* exchange(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; bool compare_exchange_weak(*floating-point-type*&, *floating-point-type*,
memory_order, memory_order) volatile noexcept; constexpr bool compare_exchange_weak(*floating-point-type*&, *floating-point-type*,
memory_order, memory_order) noexcept; bool compare_exchange_strong(*floating-point-type*&, *floating-point-type*,
memory_order, memory_order) volatile noexcept; constexpr bool compare_exchange_strong(*floating-point-type*&, *floating-point-type*,
memory_order, memory_order) noexcept; bool compare_exchange_weak(*floating-point-type*&, *floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr bool compare_exchange_weak(*floating-point-type*&, *floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; bool compare_exchange_strong(*floating-point-type*&, *floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr bool compare_exchange_strong(*floating-point-type*&, *floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_add(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_add(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_sub(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_sub(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_max(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_max(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_min(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_min(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_fmaximum(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_fmaximum(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_fminimum(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_fminimum(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_fmaximum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_fmaximum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* fetch_fminimum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *floating-point-type* fetch_fminimum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_add(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_add(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_sub(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_sub(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_max(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_max(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_min(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_min(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_fmaximum(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_fmaximum(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_fminimum(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_fminimum(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_fmaximum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_fmaximum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; void store_fminimum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_fminimum_num(*floating-point-type*,
memory_order = memory_order::seq_cst) noexcept; *floating-point-type* operator+=(*floating-point-type*) volatile noexcept; constexpr *floating-point-type* operator+=(*floating-point-type*) noexcept; *floating-point-type* operator-=(*floating-point-type*) volatile noexcept; constexpr *floating-point-type* operator-=(*floating-point-type*) noexcept; void wait(*floating-point-type*, memory_order = memory_order::seq_cst) const volatile noexcept; constexpr void wait(*floating-point-type*,
memory_order = memory_order::seq_cst) const noexcept; void notify_one() volatile noexcept; constexpr void notify_one() noexcept; void notify_all() volatile noexcept; constexpr void notify_all() noexcept; };}
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5506)
The atomic floating-point specializations
are standard-layout structs[.](#2.sentence-1)
They each have
a trivial destructor[.](#2.sentence-2)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5512)
Descriptions are provided below only for members that differ from the primary template[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5515)
The following operations perform arithmetic addition and subtraction computations[.](#4.sentence-1)
The correspondence among key, operator, and computation is specified
in Table [155](atomics.types.int#tab:atomic.types.int.comp "Table 155: Atomic arithmetic computations"),
except for the keys max, min, fmaximum, fminimum, fmaximum_num, and fminimum_num,
which are specified below[.](#4.sentence-2)
[🔗](#lib:atomic_fetch_add)
`floating-point-type fetch_key(floating-point-type operand,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr floating-point-type fetch_key(floating-point-type operand,
memory_order order = memory_order::seq_cst) noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5548)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5553)
*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to the value pointed
to by this and the given operand[.](#6.sentence-1)
Memory is affected according to the value of order[.](#6.sentence-2)
These operations are atomic read-modify-write operations ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#6.sentence-3)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5561)
*Returns*: Atomically, the value pointed to by this immediately before the effects[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5565)
*Remarks*: If the result is not a representable value for its type ([[expr.pre]](expr.pre "7.1Preamble"))
the result is unspecified, but the operations otherwise have no undefined
behavior[.](#8.sentence-1)
Atomic arithmetic operations on *floating-point-type* should conform to the std::numeric_limits<*floating-point-type*> traits associated with the floating-point type ([[limits.syn]](limits.syn "17.3.3Header <limits> synopsis"))[.](#8.sentence-2)
The floating-point environment ([[cfenv]](cfenv "29.3The floating-point environment")) for atomic arithmetic operations
on *floating-point-type* may be different than the
calling thread's floating-point environment[.](#8.sentence-3)
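An illustrative sketch (not part of the wording) of fetch_add on atomic<double>: each thread folds its partial result into a shared total with a single read-modify-write; `accumulate`, `total`, and `part` are invented names, and because the additions can occur in any order the rounding of the final sum may differ between runs:

```cpp
#include <atomic>
#include <vector>

void accumulate(std::atomic<double>& total, const std::vector<double>& part) {
    double local = 0.0;
    for (double x : part) local += x;                    // non-atomic local work
    total.fetch_add(local, std::memory_order_relaxed);   // one atomic RMW per thread
}
```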
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5576)
- [(9.1)](#9.1)
For fetch_fmaximum and fetch_fminimum,
the maximum and minimum computation is performed
as if by fmaximum and fminimum, respectively,
with the value pointed to by this and the first parameter
as the arguments[.](#9.1.sentence-1)
- [(9.2)](#9.2)
For fetch_fmaximum_num and fetch_fminimum_num,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with the value pointed to by this and the first parameter
as the arguments[.](#9.2.sentence-1)
- [(9.3)](#9.3)
For fetch_max and fetch_min,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with the value pointed to by this and the first parameter
as the arguments, except that:
* [(9.3.1)](#9.3.1)
If both arguments are NaN,
an unspecified NaN value replaces the value pointed to by this[.](#9.3.1.sentence-1)
* [(9.3.2)](#9.3.2)
If exactly one argument is a NaN,
either the other argument or an unspecified NaN value
replaces the value pointed to by this; it is unspecified which[.](#9.3.2.sentence-1)
* [(9.3.3)](#9.3.3)
If the arguments are differently signed zeros,
which of these values replaces the value pointed to by this is unspecified[.](#9.3.3.sentence-1)
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5610)
*Recommended practice*: The implementation of fetch_max and fetch_min should treat negative zero as smaller than positive zero[.](#10.sentence-1)
[🔗](#lib:store_max,atomic%3cfloating-point-type%3e)
`void store_key(floating-point-type operand,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(floating-point-type operand,
memory_order order = memory_order::seq_cst) noexcept;
`
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5630)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#11.sentence-1)
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5635)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#12.sentence-1)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5642)
*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to
the value pointed to by this and the given operand[.](#13.sentence-1)
Memory is affected according to the value of order[.](#13.sentence-2)
These operations are atomic modify-write operations ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#13.sentence-3)
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5650)
*Remarks*: If the result is not a representable value for its type ([[expr.pre]](expr.pre "7.1Preamble"))
the result is unspecified,
but the operations otherwise have no undefined behavior[.](#14.sentence-1)
Atomic arithmetic operations on *floating-point-type* should conform to the numeric_limits<*floating-point-type*> traits associated with the floating-point type ([[limits.syn]](limits.syn "17.3.3Header <limits> synopsis"))[.](#14.sentence-2)
The floating-point environment ([[cfenv]](cfenv "29.3The floating-point environment")) for
atomic arithmetic operations on *floating-point-type* may be different than the calling thread's floating-point environment[.](#14.sentence-3)
The arithmetic rules of floating-point atomic modify-write operations
may be different from operations on
floating-point types or atomic floating-point types[.](#14.sentence-4)
[*Note [1](#note-1)*:
Tree reductions are permitted for atomic modify-write operations[.](#14.sentence-5)
— *end note*]
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5668)
- [(15.1)](#15.1)
For store_fmaximum and store_fminimum,
the maximum and minimum computation is performed
as if by fmaximum and fminimum, respectively,
with the value pointed to by this and
the first parameter as the arguments[.](#15.1.sentence-1)
- [(15.2)](#15.2)
For store_fmaximum_num and store_fminimum_num,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with the value pointed to by this and
the first parameter as the arguments[.](#15.2.sentence-1)
- [(15.3)](#15.3)
For store_max and store_min,
the maximum and minimum computation is performed
as if by fmaximum_num and fminimum_num, respectively,
with the value pointed to by this and
the first parameter as the arguments, except that:
* [(15.3.1)](#15.3.1)
If both arguments are NaN,
an unspecified NaN value replaces the value pointed to by this[.](#15.3.1.sentence-1)
* [(15.3.2)](#15.3.2)
If exactly one argument is a NaN,
either the other argument or an unspecified NaN value replaces
the value pointed to by this;
it is unspecified which[.](#15.3.2.sentence-1)
* [(15.3.3)](#15.3.3)
If the arguments are differently signed zeros,
which of these values replaces the value pointed to by this is unspecified[.](#15.3.3.sentence-1)
[16](#16)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5703)
*Recommended practice*: The implementation of store_max and store_min should treat negative zero as smaller than positive zero[.](#16.sentence-1)
[🔗](#lib:operator+=,atomic%3cT*%3e)
`floating-point-type operator op=(floating-point-type operand) volatile noexcept;
constexpr floating-point-type operator op=(floating-point-type operand) noexcept;
`
[17](#17)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5719)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#17.sentence-1)
[18](#18)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5724)
*Effects*: Equivalent to: return fetch_*key*(operand) *op* operand;
[19](#19)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5728)
*Remarks*: If the result is not a representable value for its type ([[expr.pre]](expr.pre "7.1Preamble"))
the result is unspecified, but the operations otherwise have no undefined
behavior[.](#19.sentence-1)
Atomic arithmetic operations on *floating-point-type* should conform to the std::numeric_limits<*floating-point-type*> traits associated with the floating-point type ([[limits.syn]](limits.syn "17.3.3Header <limits> synopsis"))[.](#19.sentence-2)
The floating-point environment ([[cfenv]](cfenv "29.3The floating-point environment")) for atomic arithmetic operations
on *floating-point-type* may be different than the
calling thread's floating-point environment[.](#19.sentence-3)

File diff suppressed because it is too large

View File

@@ -0,0 +1,82 @@
[atomics.types.generic.general]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#types.generic.general)
### 32.5.8 Class template atomic [[atomics.types.generic]](atomics.types.generic#general)
#### 32.5.8.1 General [atomics.types.generic.general]
[🔗](#lib:atomic)
namespace std {template<class T> struct atomic {using value_type = T; static constexpr bool is_always_lock_free = *implementation-defined*; bool is_lock_free() const volatile noexcept; bool is_lock_free() const noexcept; // [[atomics.types.operations]](atomics.types.operations "32.5.8.2Operations on atomic types"), operations on atomic typesconstexpr atomic() noexcept(is_nothrow_default_constructible_v<T>); constexpr atomic(T) noexcept;
atomic(const atomic&) = delete;
atomic& operator=(const atomic&) = delete;
atomic& operator=(const atomic&) volatile = delete;
T load(memory_order = memory_order::seq_cst) const volatile noexcept; constexpr T load(memory_order = memory_order::seq_cst) const noexcept; operator T() const volatile noexcept; constexpr operator T() const noexcept; void store(T, memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store(T, memory_order = memory_order::seq_cst) noexcept;
T operator=(T) volatile noexcept; constexpr T operator=(T) noexcept;
T exchange(T, memory_order = memory_order::seq_cst) volatile noexcept; constexpr T exchange(T, memory_order = memory_order::seq_cst) noexcept; bool compare_exchange_weak(T&, T, memory_order, memory_order) volatile noexcept; constexpr bool compare_exchange_weak(T&, T, memory_order, memory_order) noexcept; bool compare_exchange_strong(T&, T, memory_order, memory_order) volatile noexcept; constexpr bool compare_exchange_strong(T&, T, memory_order, memory_order) noexcept; bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) volatile noexcept; constexpr bool compare_exchange_weak(T&, T, memory_order = memory_order::seq_cst) noexcept; bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) volatile noexcept; constexpr bool compare_exchange_strong(T&, T, memory_order = memory_order::seq_cst) noexcept; void wait(T, memory_order = memory_order::seq_cst) const volatile noexcept; constexpr void wait(T, memory_order = memory_order::seq_cst) const noexcept; void notify_one() volatile noexcept; constexpr void notify_one() noexcept; void notify_all() volatile noexcept; constexpr void notify_all() noexcept; };}
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4498)
The template argument for T shall meet the [*Cpp17CopyConstructible*](utility.arg.requirements#:Cpp17CopyConstructible "16.4.4.2Template argument requirements[utility.arg.requirements]") and [*Cpp17CopyAssignable*](utility.arg.requirements#:Cpp17CopyAssignable "16.4.4.2Template argument requirements[utility.arg.requirements]") requirements[.](#1.sentence-1)
The program is ill-formed if any of
- [(1.1)](#1.1)
is_trivially_copyable_v<T>,
- [(1.2)](#1.2)
is_copy_constructible_v<T>,
- [(1.3)](#1.3)
is_move_constructible_v<T>,
- [(1.4)](#1.4)
is_copy_assignable_v<T>,
- [(1.5)](#1.5)
is_move_assignable_v<T>, or
- [(1.6)](#1.6)
same_as<T, remove_cv_t<T>>,
is false[.](#1.sentence-2)
[*Note [1](#note-1)*:
Type arguments that are
not also statically initializable can be difficult to use[.](#1.sentence-3)
— *end note*]
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4516)
The specialization atomic<bool> is a standard-layout struct[.](#2.sentence-1)
It has a trivial destructor[.](#2.sentence-2)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4520)
[*Note [2](#note-2)*:
The representation of an atomic specialization
need not have the same size and alignment requirement as
its corresponding argument type[.](#3.sentence-1)
— *end note*]

View File

@@ -0,0 +1,227 @@
[atomics.types.int]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#types.int)
### 32.5.8 Class template atomic [[atomics.types.generic]](atomics.types.generic#atomics.types.int)
#### 32.5.8.3 Specializations for integers [atomics.types.int]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5012)
There are specializations of the atomic class template for the integral types char, signed char, unsigned char, short, unsigned short, int, unsigned int, long, unsigned long, long long, unsigned long long, char8_t, char16_t, char32_t, wchar_t,
and any other types needed by the typedefs in the header [<cstdint>](cstdint.syn#header:%3ccstdint%3e "17.4.1Header <cstdint> synopsis[cstdint.syn]")[.](#1.sentence-1)
For each such type *integral-type*, the specialization atomic<*integral-type*> provides additional atomic operations appropriate to integral types[.](#1.sentence-2)
[*Note [1](#note-1)*:
The specialization atomic<bool> uses the primary template ([[atomics.types.generic]](atomics.types.generic "32.5.8Class template atomic"))[.](#1.sentence-3)
— *end note*]
namespace std {template<> struct atomic<*integral-type*> {using value_type = *integral-type*; using difference_type = value_type; static constexpr bool is_always_lock_free = *implementation-defined*; bool is_lock_free() const volatile noexcept; bool is_lock_free() const noexcept; constexpr atomic() noexcept; constexpr atomic(*integral-type*) noexcept;
atomic(const atomic&) = delete;
atomic& operator=(const atomic&) = delete;
atomic& operator=(const atomic&) volatile = delete; void store(*integral-type*, memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store(*integral-type*, memory_order = memory_order::seq_cst) noexcept; *integral-type* operator=(*integral-type*) volatile noexcept; constexpr *integral-type* operator=(*integral-type*) noexcept; *integral-type* load(memory_order = memory_order::seq_cst) const volatile noexcept; constexpr *integral-type* load(memory_order = memory_order::seq_cst) const noexcept; operator *integral-type*() const volatile noexcept; constexpr operator *integral-type*() const noexcept; *integral-type* exchange(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* exchange(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; bool compare_exchange_weak(*integral-type*&, *integral-type*,
memory_order, memory_order) volatile noexcept; constexpr bool compare_exchange_weak(*integral-type*&, *integral-type*,
memory_order, memory_order) noexcept; bool compare_exchange_strong(*integral-type*&, *integral-type*,
memory_order, memory_order) volatile noexcept; constexpr bool compare_exchange_strong(*integral-type*&, *integral-type*,
memory_order, memory_order) noexcept; bool compare_exchange_weak(*integral-type*&, *integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr bool compare_exchange_weak(*integral-type*&, *integral-type*,
memory_order = memory_order::seq_cst) noexcept; bool compare_exchange_strong(*integral-type*&, *integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr bool compare_exchange_strong(*integral-type*&, *integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_add(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_add(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_sub(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_sub(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_and(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_and(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_or(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_or(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_xor(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_xor(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_max(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_max(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* fetch_min(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr *integral-type* fetch_min(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_add(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_add(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_sub(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_sub(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_and(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_and(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_or(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_or(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_xor(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_xor(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_max(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_max(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; void store_min(*integral-type*,
memory_order = memory_order::seq_cst) volatile noexcept; constexpr void store_min(*integral-type*,
memory_order = memory_order::seq_cst) noexcept; *integral-type* operator++(int) volatile noexcept; constexpr *integral-type* operator++(int) noexcept; *integral-type* operator--(int) volatile noexcept; constexpr *integral-type* operator--(int) noexcept; *integral-type* operator++() volatile noexcept; constexpr *integral-type* operator++() noexcept; *integral-type* operator--() volatile noexcept; constexpr *integral-type* operator--() noexcept; *integral-type* operator+=(*integral-type*) volatile noexcept; constexpr *integral-type* operator+=(*integral-type*) noexcept; *integral-type* operator-=(*integral-type*) volatile noexcept; constexpr *integral-type* operator-=(*integral-type*) noexcept; *integral-type* operator&=(*integral-type*) volatile noexcept; constexpr *integral-type* operator&=(*integral-type*) noexcept; *integral-type* operator|=(*integral-type*) volatile noexcept; constexpr *integral-type* operator|=(*integral-type*) noexcept; *integral-type* operator^=(*integral-type*) volatile noexcept; constexpr *integral-type* operator^=(*integral-type*) noexcept; void wait(*integral-type*, memory_order = memory_order::seq_cst) const volatile noexcept; constexpr void wait(*integral-type*, memory_order = memory_order::seq_cst) const noexcept; void notify_one() volatile noexcept; constexpr void notify_one() noexcept; void notify_all() volatile noexcept; constexpr void notify_all() noexcept; };}
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5171)
The atomic integral specializations
are standard-layout structs[.](#2.sentence-1)
They each have
a trivial destructor[.](#2.sentence-2)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5177)
Descriptions are provided below only for members that differ from the primary template[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5180)
The following operations perform arithmetic computations[.](#4.sentence-1)
The correspondence among key, operator, and computation is specified
in Table [155](#tab:atomic.types.int.comp "Table 155: Atomic arithmetic computations")[.](#4.sentence-2)
Table [155](#tab:atomic.types.int.comp) — Atomic arithmetic computations [[tab:atomic.types.int.comp]](./tab:atomic.types.int.comp)
| [🔗](#tab:atomic.types.int.comp-row-1)<br>***key*** | **Op** | **Computation** | ***key*** | **Op** | **Computation** |
| --- | --- | --- | --- | --- | --- |
| [🔗](#tab:atomic.types.int.comp-row-2)<br>add | + | addition | and | & | bitwise and |
| [🔗](#tab:atomic.types.int.comp-row-3)<br>sub | - | subtraction | or | \| | bitwise inclusive or |
| [🔗](#tab:atomic.types.int.comp-row-4)<br>max | | maximum | xor | ^ | bitwise exclusive or |
| [🔗](#tab:atomic.types.int.comp-row-5)<br>min | | minimum | | | |
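A small sketch (not standard text) showing the bitwise keys from the table used to manipulate a shared flag word; `DIRTY`, `mark_dirty`, and `clear_dirty` are invented names:

```cpp
#include <atomic>
#include <cstdint>

inline constexpr std::uint32_t DIRTY = 1u << 0;

// Sets DIRTY and reports whether it was already set.
bool mark_dirty(std::atomic<std::uint32_t>& flags) {
    return flags.fetch_or(DIRTY, std::memory_order_acq_rel) & DIRTY;
}

// Clears DIRTY and reports whether it had been set.
bool clear_dirty(std::atomic<std::uint32_t>& flags) {
    return flags.fetch_and(~DIRTY, std::memory_order_acq_rel) & DIRTY;
}
```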
[🔗](#lib:atomic_fetch_add)
`integral-type fetch_key(integral-type operand,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr integral-type fetch_key(integral-type operand,
memory_order order = memory_order::seq_cst) noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5246)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5251)
*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to the
value pointed to by this and the given operand[.](#6.sentence-1)
Memory is affected according to the value of order[.](#6.sentence-2)
These operations are atomic read-modify-write operations ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#6.sentence-3)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5259)
*Returns*: Atomically, the value pointed to by this immediately before the effects[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5263)
*Remarks*: Except for fetch_max and fetch_min, for signed integer types
the result is as if the object value and parameters
were converted to their corresponding unsigned types,
the computation performed on those types, and
the result converted back to the signed type[.](#8.sentence-1)
[*Note [2](#note-2)*:
There are no undefined results arising from the computation[.](#8.sentence-2)
— *end note*]
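The note can be made concrete with a sketch (not standard text): incrementing an atomic<int> holding INT_MAX wraps around instead of producing undefined behavior, because the computation is carried out as if on the corresponding unsigned type; `wrap_demo` is an invented name:

```cpp
#include <atomic>
#include <climits>

void wrap_demo() {
    std::atomic<int> a(INT_MAX);
    int before = a.fetch_add(1);   // well-defined: computed as unsigned
    // before == INT_MAX, and a.load() == INT_MIN (two's-complement wrap)
    (void)before;
}
```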
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5275)
For fetch_max and fetch_min, the maximum and minimum
computation is performed as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively,
with the object value and the first parameter as the arguments[.](#9.sentence-1)
[🔗](#lib:atomic_store_add)
`void store_key(integral-type operand,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(integral-type operand,
memory_order order = memory_order::seq_cst) noexcept;
`
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5308)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#10.sentence-1)
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5313)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#11.sentence-1)
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5320)
*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to
the value pointed to by this and the given operand[.](#12.sentence-1)
Memory is affected according to the value of order[.](#12.sentence-2)
These operations are atomic modify-write operations ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#12.sentence-3)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5328)
*Remarks*: Except for store_max and store_min,
for signed integer types, the result is as if
the value pointed to by this and parameters
were converted to their corresponding unsigned types,
the computation performed on those types, and
the result converted back to the signed type[.](#13.sentence-1)
[*Note [3](#note-3)*:
There are no undefined results arising from the computation[.](#13.sentence-2)
— *end note*]
For store_max and store_min,
the maximum and minimum computation is performed
as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively,
with the value pointed to by this and the first parameter as the arguments[.](#13.sentence-3)
[🔗](#lib:operator+=,atomic%3cT*%3e)
`integral-type operator op=(integral-type operand) volatile noexcept;
constexpr integral-type operator op=(integral-type operand) noexcept;
`
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5358)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#14.sentence-1)
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5363)
*Effects*: Equivalent to: return fetch_*key*(operand) *op* operand;

View File

@@ -0,0 +1,81 @@
[atomics.types.memop]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#types.memop)
### 32.5.8 Class template atomic [[atomics.types.generic]](atomics.types.generic#atomics.types.memop)
#### 32.5.8.6 Member operators common to integers and pointers to objects [atomics.types.memop]
[🔗](#lib:operator++,atomic%3cT*%3e)
`value_type operator++(int) volatile noexcept;
constexpr value_type operator++(int) noexcept;
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6003)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#1.sentence-1)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6008)
*Effects*: Equivalent to: return fetch_add(1);
[🔗](#lib:operator--,atomic%3cT*%3e)
`value_type operator--(int) volatile noexcept;
constexpr value_type operator--(int) noexcept;
`
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6021)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#3.sentence-1)
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6026)
*Effects*: Equivalent to: return fetch_sub(1);
[🔗](#lib:operator++,atomic%3cT*%3e_)
`value_type operator++() volatile noexcept;
constexpr value_type operator++() noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6039)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#5.sentence-1)
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6044)
*Effects*: Equivalent to: return fetch_add(1) + 1;
[🔗](#lib:operator--,atomic%3cT*%3e_)
`value_type operator--() volatile noexcept;
constexpr value_type operator--() noexcept;
`
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6057)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L6062)
*Effects*: Equivalent to: return fetch_sub(1) - 1;
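Informally, these equivalences mean that the postfix forms return the old value and the prefix forms return the new one; a small illustrative sketch:
`#include <atomic>
#include <cassert>

void demo() {
  std::atomic<int> n{5};
  int a = n++;   // as if by n.fetch_add(1): yields 5
  int b = ++n;   // as if by n.fetch_add(1) + 1: yields 7
  assert(a == 5 && b == 7 && n.load() == 7);
}
`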

View File

@@ -0,0 +1,472 @@
[atomics.types.operations]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#types.operations)
### 32.5.8 Class template atomic [[atomics.types.generic]](atomics.types.generic#atomics.types.operations)
#### 32.5.8.2 Operations on atomic types [atomics.types.operations]
[🔗](#lib:atomic,constructor)
`constexpr atomic() noexcept(is_nothrow_default_constructible_v<T>);
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4538)
*Constraints*: is_default_constructible_v<T> is true[.](#1.sentence-1)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4542)
*Effects*: Initializes the atomic object with the value of T()[.](#2.sentence-1)
Initialization is not an atomic operation ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#2.sentence-2)
[🔗](#lib:atomic,constructor_)
`constexpr atomic(T desired) noexcept;
`
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4557)
*Effects*: Initializes the object with the value desired[.](#3.sentence-1)
Initialization is not an atomic operation ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#3.sentence-2)
[*Note [1](#note-1)*:
It is possible to have an access to an atomic object A race with its construction, for example by communicating the address of the
just-constructed object A to another thread via memory_order::relaxed operations on a suitable atomic pointer
variable, and then immediately accessing A in the receiving thread[.](#3.sentence-3)
This results in undefined behavior[.](#3.sentence-4)
— *end note*]
[🔗](#lib:is_always_lock_free,atomic)
`static constexpr bool is_always_lock_free = implementation-defined;
`
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4582)
The static data member is_always_lock_free is true if the atomic type's operations are always lock-free, and false otherwise[.](#4.sentence-1)
[*Note [2](#note-2)*:
The value of is_always_lock_free is consistent with the value of
the corresponding ATOMIC_..._LOCK_FREE macro, if defined[.](#4.sentence-2)
— *end note*]
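Informally, because is_always_lock_free is a constant expression, a translation unit can reject configurations that would fall back to locks; a minimal sketch:
`#include <atomic>

// Fails to compile on targets where atomic<unsigned> might use a lock.
static_assert(std::atomic<unsigned>::is_always_lock_free,
              "this code assumes lock-free atomic<unsigned>");
`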
[🔗](#lib:atomic_is_lock_free)
`bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;
`
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4604)
*Returns*: true if the object's operations are lock-free, false otherwise[.](#5.sentence-1)
[*Note [3](#note-3)*:
The return value of the is_lock_free member function
is consistent with the value of is_always_lock_free for the same type[.](#5.sentence-2)
— *end note*]
[🔗](#lib:atomic_store)
`void store(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store(T desired, memory_order order = memory_order::seq_cst) noexcept;
`
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4625)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#6.sentence-1)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4630)
*Preconditions*: order is memory_order::relaxed, memory_order::release, or memory_order::seq_cst[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4637)
*Effects*: Atomically replaces the value pointed to by this with the value of desired[.](#8.sentence-1)
Memory is affected according to the value of order[.](#8.sentence-2)
[🔗](#lib:operator=,atomic)
`T operator=(T desired) volatile noexcept;
constexpr T operator=(T desired) noexcept;
`
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4654)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#9.sentence-1)
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4659)
*Effects*: Equivalent to store(desired)[.](#10.sentence-1)
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4663)
*Returns*: desired[.](#11.sentence-1)
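Informally, the assignment expression yields desired (the value just stored), not the previous contents of the object; a short sketch:
`#include <atomic>

std::atomic<int> counter{10};

int reset() {
  // Equivalent to counter.store(7); the whole expression evaluates to 7,
  // regardless of what counter held before.
  return (counter = 7);
}
`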
[🔗](#lib:atomic_load)
`T load(memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr T load(memory_order order = memory_order::seq_cst) const noexcept;
`
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4680)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#12.sentence-1)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4685)
*Preconditions*: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#13.sentence-1)
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4692)
*Effects*: Memory is affected according to the value of order[.](#14.sentence-1)
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4696)
*Returns*: Atomically returns the value pointed to by this[.](#15.sentence-1)
[🔗](#lib:operator_type,atomic)
`operator T() const volatile noexcept;
constexpr operator T() const noexcept;
`
[16](#16)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4711)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#16.sentence-1)
[17](#17)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4716)
*Effects*: Equivalent to: return load();
[🔗](#lib:atomic_exchange)
`T exchange(T desired, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T exchange(T desired, memory_order order = memory_order::seq_cst) noexcept;
`
[18](#18)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4734)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#18.sentence-1)
[19](#19)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4739)
*Effects*: Atomically replaces the value pointed to by this with desired[.](#19.sentence-1)
Memory is affected according to the value of order[.](#19.sentence-2)
These operations are atomic read-modify-write operations ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#19.sentence-3)
[20](#20)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4746)
*Returns*: Atomically returns the value pointed to by this immediately before the effects[.](#20.sentence-1)
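Informally, because exchange returns the value held immediately before the replacement, it can let exactly one thread claim a one-shot resource; a minimal sketch:
`#include <atomic>

std::atomic<bool> claimed{false};

// Returns true for exactly one caller: the replacement and the read of the
// prior value happen as a single atomic read-modify-write operation.
bool try_claim() {
  return !claimed.exchange(true);
}
`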
[🔗](#lib:atomic_compare_exchange_weak)
`bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order success, memory_order failure) noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order success, memory_order failure) noexcept;
bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_weak(T& expected, T desired,
memory_order order = memory_order::seq_cst) noexcept;
bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr bool compare_exchange_strong(T& expected, T desired,
memory_order order = memory_order::seq_cst) noexcept;
`
[21](#21)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4783)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#21.sentence-1)
[22](#22)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4788)
*Preconditions*: failure is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#22.sentence-1)
[23](#23)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4795)
*Effects*: Retrieves the value in expected[.](#23.sentence-1)
It then atomically
compares the value representation of the value pointed to by this for equality with that previously retrieved from expected,
and if true, replaces the value pointed to
by this with that in desired[.](#23.sentence-2)
If and only if the comparison is true, memory is affected according to the
value of success, and if the comparison is false, memory is affected according
to the value of failure[.](#23.sentence-3)
When only one memory_order argument is
supplied, the value of success is order, and the value of failure is order except that a value of memory_order::acq_rel shall be replaced by the value memory_order::acquire and a value of memory_order::release shall be replaced by the value memory_order::relaxed[.](#23.sentence-4)
If and only if the comparison is false then, after the atomic operation,
the value in expected is replaced by the value
pointed to by this during the atomic comparison[.](#23.sentence-5)
If the operation returns true, these
operations are atomic read-modify-write
operations ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races")) on the memory
pointed to by this[.](#23.sentence-6)
Otherwise, these operations are atomic load operations on that memory[.](#23.sentence-7)
[24](#24)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4819)
*Returns*: The result of the comparison[.](#24.sentence-1)
[25](#25)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4823)
[*Note [4](#note-4)*:
For example, the effect of compare_exchange_strong on objects without padding bits ([[basic.types.general]](basic.types.general#term.padding.bits "6.9.1General")) is
`if (memcmp(this, &expected, sizeof(*this)) == 0)
  memcpy(this, &desired, sizeof(*this));
else
  memcpy(&expected, this, sizeof(*this));
`
— *end note*]
[*Example [1](#example-1)*:
The expected use of the compare-and-exchange operations is as follows[.](#25.sentence-2)
The
compare-and-exchange operations will update expected when another iteration of
the loop is needed[.](#25.sentence-3)
`expected = current.load();
do {
  desired = function(expected);
} while (!current.compare_exchange_weak(expected, desired));
`
— *end example*]
[*Example [2](#example-2)*:
Because the expected value is updated only on failure,
code releasing the memory containing the expected value on success will work[.](#25.sentence-4)
For example, list head insertion will act atomically and would not introduce a
data race in the following code:
`do {
  p->next = head; // make new list node point to the current head
} while (!head.compare_exchange_weak(p->next, p)); // try to insert
`
— *end example*]
[26](#26)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4858)
Implementations should ensure that weak compare-and-exchange operations do not
consistently return false unless either the atomic object has value
different from expected or there are concurrent modifications to the
atomic object[.](#26.sentence-1)
[27](#27)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4864)
*Remarks*: A weak compare-and-exchange operation may fail spuriously[.](#27.sentence-1)
That is, even when
the contents of memory referred to by expected and this are
equal, it may return false and store back to expected the same memory
contents that were originally there[.](#27.sentence-2)
[*Note [5](#note-5)*:
This
spurious failure enables implementation of compare-and-exchange on a broader class of
machines, e.g., load-locked store-conditional machines[.](#27.sentence-3)
A
consequence of spurious failure is that nearly all uses of weak compare-and-exchange
will be in a loop[.](#27.sentence-4)
When a compare-and-exchange is in a loop, the weak version will yield better performance
on some platforms[.](#27.sentence-5)
When a weak compare-and-exchange would require a loop and a strong one
would not, the strong one is preferable[.](#27.sentence-6)
— *end note*]
[28](#28)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4881)
[*Note [6](#note-6)*:
Under cases where the memcpy and memcmp semantics of the compare-and-exchange
operations apply, the comparisons can fail for values that compare equal with operator== if the value representation has trap bits or alternate
representations of the same value[.](#28.sentence-1)
Notably, on implementations conforming to
ISO/IEC 60559, floating-point -0.0 and +0.0 will not compare equal with memcmp but will compare equal with operator==,
and NaNs with the same payload will compare equal with memcmp but will not
compare equal with operator==[.](#28.sentence-2)
— *end note*]
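Informally, this is one reason a compare-and-exchange loop relies on the failure path refreshing expected from the object rather than on operator==; a sketch of such a loop over atomic<double> (the helper name is illustrative, not a standard facility):
`#include <atomic>

// Adds arg to a with a CAS loop. Even if a stored +0.0 fails to match an
// expected -0.0 (their value representations differ although they compare
// equal with operator==), the failed exchange refreshes expected with the
// stored representation, so the loop still terminates.
double fetch_add_double(std::atomic<double>& a, double arg) {
  double expected = a.load();
  while (!a.compare_exchange_weak(expected, expected + arg)) {
    // expected now holds the value currently stored in a
  }
  return expected;
}
`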
[*Note [7](#note-7)*:
Because compare-and-exchange acts on an object's value representation,
padding bits that never participate in the object's value representation
are ignored[.](#28.sentence-3)
As a consequence, the following code is guaranteed to avoid
spurious failure:
`struct padded {
  char clank = 0x42; // Padding here.
  unsigned biff = 0xC0DEFEFE;
};
atomic<padded> pad = {};
bool zap() {
  padded expected, desired{0, 0};
  return pad.compare_exchange_strong(expected, desired);
}
`
— *end note*]
[*Note [8](#note-8)*:
For a union with bits that participate in the value representation
of some members but not others, compare-and-exchange might always fail[.](#28.sentence-5)
This is because such padding bits have an indeterminate value when they
do not participate in the value representation of the active member[.](#28.sentence-6)
As a consequence, the following code is not guaranteed to ever succeed:
`union pony {
  double celestia = 0.;
  short luna; // padded
};
atomic<pony> princesses = {};
bool party(pony desired) {
  pony expected;
  return princesses.compare_exchange_strong(expected, desired);
}
`
— *end note*]
[🔗](#lib:wait,atomic)
`void wait(T old, memory_order order = memory_order::seq_cst) const volatile noexcept;
constexpr void wait(T old, memory_order order = memory_order::seq_cst) const noexcept;
`
[29](#29)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4942)
*Preconditions*: order is memory_order::relaxed, memory_order::acquire, or memory_order::seq_cst[.](#29.sentence-1)
[30](#30)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4949)
*Effects*: Repeatedly performs the following steps, in order:
- [(30.1)](#30.1)
Evaluates load(order) and
compares its value representation for equality against that of old[.](#30.1.sentence-1)
- [(30.2)](#30.2)
If they compare unequal, returns[.](#30.2.sentence-1)
- [(30.3)](#30.3)
Blocks until it
is unblocked by an atomic notifying operation or is unblocked spuriously[.](#30.3.sentence-1)
[31](#31)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4963)
*Remarks*: This function is an atomic waiting operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))[.](#31.sentence-1)
[🔗](#lib:notify_one,atomic)
`void notify_one() volatile noexcept;
constexpr void notify_one() noexcept;
`
[32](#32)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4978)
*Effects*: Unblocks the execution of at least one atomic waiting operation
that is eligible to be unblocked ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying")) by this call,
if any such atomic waiting operations exist[.](#32.sentence-1)
[33](#33)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4984)
*Remarks*: This function is an atomic notifying operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))[.](#33.sentence-1)
[🔗](#lib:notify_all,atomic)
`void notify_all() volatile noexcept;
constexpr void notify_all() noexcept;
`
[34](#34)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L4999)
*Effects*: Unblocks the execution of all atomic waiting operations
that are eligible to be unblocked ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying")) by this call[.](#34.sentence-1)
[35](#35)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5004)
*Remarks*: This function is an atomic notifying operation ([[atomics.wait]](atomics.wait "32.5.6Waiting and notifying"))[.](#35.sentence-1)
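Informally, wait, notify_one, and notify_all combine into a simple blocking handshake; a minimal sketch (variable names are illustrative):
`#include <atomic>

std::atomic<int> ready{0};
int payload = 0;

void consumer() {
  ready.wait(0);        // blocks while the stored value is still 0
  // The seq_cst load inside wait that observes 1 synchronizes with the
  // store below, so reading payload here is not a data race.
  int value = payload;
  (void)value;          // value == 42
}

void producer() {
  payload = 42;
  ready.store(1);
  ready.notify_one();   // unblocks a waiter that is eligible to be unblocked
}
`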

View File

@@ -0,0 +1,204 @@
[atomics.types.pointer]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#types.pointer)
### 32.5.8 Class template atomic [[atomics.types.generic]](atomics.types.generic#atomics.types.pointer)
#### 32.5.8.5 Partial specialization for pointers [atomics.types.pointer]
`namespace std {
  template<class T> struct atomic<T*> {
    using value_type = T*;
    using difference_type = ptrdiff_t;
    static constexpr bool is_always_lock_free = *implementation-defined*;
    bool is_lock_free() const volatile noexcept;
    bool is_lock_free() const noexcept;
    constexpr atomic() noexcept;
    constexpr atomic(T*) noexcept;
    atomic(const atomic&) = delete;
    atomic& operator=(const atomic&) = delete;
    atomic& operator=(const atomic&) volatile = delete;
    void store(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator=(T*) volatile noexcept;
    constexpr T* operator=(T*) noexcept;
    T* load(memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr T* load(memory_order = memory_order::seq_cst) const noexcept;
    operator T*() const volatile noexcept;
    constexpr operator T*() const noexcept;
    T* exchange(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* exchange(T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order, memory_order) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order, memory_order) noexcept;
    bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_weak(T*&, T*, memory_order = memory_order::seq_cst) noexcept;
    bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr bool compare_exchange_strong(T*&, T*, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_max(T*, memory_order = memory_order::seq_cst) noexcept;
    T* fetch_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr T* fetch_min(T*, memory_order = memory_order::seq_cst) noexcept;
    void store_add(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_add(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    void store_sub(ptrdiff_t, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_sub(ptrdiff_t, memory_order = memory_order::seq_cst) noexcept;
    void store_max(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_max(T*, memory_order = memory_order::seq_cst) noexcept;
    void store_min(T*, memory_order = memory_order::seq_cst) volatile noexcept;
    constexpr void store_min(T*, memory_order = memory_order::seq_cst) noexcept;
    T* operator++(int) volatile noexcept; constexpr T* operator++(int) noexcept;
    T* operator--(int) volatile noexcept; constexpr T* operator--(int) noexcept;
    T* operator++() volatile noexcept; constexpr T* operator++() noexcept;
    T* operator--() volatile noexcept; constexpr T* operator--() noexcept;
    T* operator+=(ptrdiff_t) volatile noexcept; constexpr T* operator+=(ptrdiff_t) noexcept;
    T* operator-=(ptrdiff_t) volatile noexcept; constexpr T* operator-=(ptrdiff_t) noexcept;
    void wait(T*, memory_order = memory_order::seq_cst) const volatile noexcept;
    constexpr void wait(T*, memory_order = memory_order::seq_cst) const noexcept;
    void notify_one() volatile noexcept; constexpr void notify_one() noexcept;
    void notify_all() volatile noexcept; constexpr void notify_all() noexcept;
  };
}
`
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5825)
There is a partial specialization of the atomic class template for pointers[.](#1.sentence-1)
Specializations of this partial specialization are standard-layout structs[.](#1.sentence-2)
They each have a trivial destructor[.](#1.sentence-3)
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5830)
Descriptions are provided below only for members that differ from the primary template[.](#2.sentence-1)
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5833)
The following operations perform pointer arithmetic[.](#3.sentence-1)
The correspondence among key, operator, and computation is specified
in Table [156](#tab:atomic.types.pointer.comp "Table 156: Atomic pointer computations")[.](#3.sentence-2)
Table [156](#tab:atomic.types.pointer.comp) — Atomic pointer computations [[tab:atomic.types.pointer.comp]](./tab:atomic.types.pointer.comp)
| [🔗](#tab:atomic.types.pointer.comp-row-1)<br>***key*** | **Op** | **Computation** | ***key*** | **Op** | **Computation** |
| --- | --- | --- | --- | --- | --- |
| [🔗](#tab:atomic.types.pointer.comp-row-2)<br>add | + | addition | sub | - | subtraction |
| [🔗](#tab:atomic.types.pointer.comp-row-3)<br>max | | maximum | min | | minimum |
[🔗](#lib:atomic_fetch_add)
`T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr T* fetch_key(ptrdiff_t operand, memory_order order = memory_order::seq_cst) noexcept;
`
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5879)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#4.sentence-1)
[5](#5)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5884)
*Mandates*: T is a complete object type[.](#5.sentence-1)
[*Note [1](#note-1)*:
Pointer arithmetic on void* or function pointers is ill-formed[.](#5.sentence-2)
— *end note*]
[6](#6)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5891)
*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to the
value pointed to by this and the given operand[.](#6.sentence-1)
Memory is affected according to the value of order[.](#6.sentence-2)
These operations are atomic read-modify-write operations ([[intro.multithread]](intro.multithread "6.10.2Multi-threaded executions and data races"))[.](#6.sentence-3)
[7](#7)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5899)
*Returns*: Atomically, the value pointed to by this immediately before the effects[.](#7.sentence-1)
[8](#8)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5903)
*Remarks*: The result may be an undefined address,
but the operations otherwise have no undefined behavior[.](#8.sentence-1)
[9](#9)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5908)
For fetch_max and fetch_min, the maximum and minimum
computation is performed as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively, with the object value and the first
parameter as the arguments[.](#9.sentence-1)
[*Note [2](#note-2)*:
If the pointers point to different complete objects (or subobjects thereof),
the < operator does not establish a strict weak ordering
(Table [29](utility.arg.requirements#tab:cpp17.lessthancomparable "Table 29: Cpp17LessThanComparable requirements"), [[expr.rel]](expr.rel "7.6.9Relational operators"))[.](#9.sentence-2)
— *end note*]
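Informally, fetch_add on atomic<T*> performs the pointer arithmetic atomically, so it can hand out distinct array slots to concurrent callers; a minimal sketch that ignores exhaustion of the buffer:
`#include <atomic>

int buffer[1024];
std::atomic<int*> next{buffer};

// Each caller atomically advances the cursor by one element and receives
// the slot it stepped over; staying within the array is the caller's
// responsibility.
int* take_slot() {
  return next.fetch_add(1, std::memory_order::relaxed);
}
`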
[🔗](#lib:atomic_store_add)
`void store_key(see above operand, memory_order order = memory_order::seq_cst) volatile noexcept;
constexpr void store_key(see above operand, memory_order order = memory_order::seq_cst) noexcept;
`
[10](#10)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5939)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#10.sentence-1)
[11](#11)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5944)
*Mandates*: T is a complete object type[.](#11.sentence-1)
[*Note [3](#note-3)*:
Pointer arithmetic on void* or function pointers is ill-formed[.](#11.sentence-2)
— *end note*]
[12](#12)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5951)
*Effects*: Atomically replaces the value pointed to by this with the result of the computation applied to
the value pointed to by this and the given operand[.](#12.sentence-1)
Memory is affected according to the value of order[.](#12.sentence-2)
These operations are atomic modify-write operations ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#12.sentence-3)
[13](#13)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5959)
*Remarks*: The result may be an undefined address,
but the operations otherwise have no undefined behavior[.](#13.sentence-1)
For store_max and store_min,
the maximum and minimum computation is performed
as if by max and min algorithms ([[alg.min.max]](alg.min.max "26.8.9Minimum and maximum")), respectively,
with the value pointed to by this and
the first parameter as the arguments[.](#13.sentence-2)
[*Note [4](#note-4)*:
If the pointers point to different complete objects (or subobjects thereof),
the < operator does not establish
a strict weak ordering (Table [29](utility.arg.requirements#tab:cpp17.lessthancomparable "Table 29: Cpp17LessThanComparable requirements"), [[expr.rel]](expr.rel "7.6.9Relational operators"))[.](#13.sentence-3)
— *end note*]
[🔗](#lib:operator+=,atomic%3cT*%3e)
`T* operator op=(ptrdiff_t operand) volatile noexcept;
constexpr T* operator op=(ptrdiff_t operand) noexcept;
`
[14](#14)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5983)
*Constraints*: For the volatile overload of this function, is_always_lock_free is true[.](#14.sentence-1)
[15](#15)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L5988)
*Effects*: Equivalent to: return fetch_*key*(operand) *op* operand;

103
cppdraft/atomics/wait.md Normal file
View File

@@ -0,0 +1,103 @@
[atomics.wait]
# 32 Concurrency support library [[thread]](./#thread)
## 32.5 Atomic operations [[atomics]](atomics#wait)
### 32.5.6 Waiting and notifying [atomics.wait]
[1](#1)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3144)
[*Atomic waiting operations*](#def:atomic,waiting_operation "32.5.6Waiting and notifying[atomics.wait]") and [*atomic notifying operations*](#def:atomic,notifying_operation "32.5.6Waiting and notifying[atomics.wait]") provide a mechanism to wait for the value of an atomic object to change
more efficiently than can be achieved with polling[.](#1.sentence-1)
An atomic waiting operation may block until it is unblocked
by an atomic notifying operation, according to each function's effects[.](#1.sentence-2)
[*Note [1](#note-1)*:
Programs are not guaranteed to observe transient atomic values,
an issue known as the A-B-A problem,
resulting in continued blocking if a condition is only temporarily met[.](#1.sentence-3)
— *end note*]
[2](#2)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3157)
[*Note [2](#note-2)*:
The following functions are atomic waiting operations:
- [(2.1)](#2.1)
atomic<T>::wait,
- [(2.2)](#2.2)
atomic_flag::wait,
- [(2.3)](#2.3)
atomic_wait and atomic_wait_explicit,
- [(2.4)](#2.4)
atomic_flag_wait and atomic_flag_wait_explicit, and
- [(2.5)](#2.5)
atomic_ref<T>::wait[.](#2.sentence-1)
— *end note*]
[3](#3)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3169)
[*Note [3](#note-3)*:
The following functions are atomic notifying operations:
- [(3.1)](#3.1)
atomic<T>::notify_one and atomic<T>::notify_all,
- [(3.2)](#3.2)
atomic_flag::notify_one and atomic_flag::notify_all,
- [(3.3)](#3.3)
atomic_notify_one and atomic_notify_all,
- [(3.4)](#3.4)
atomic_flag_notify_one and atomic_flag_notify_all, and
- [(3.5)](#3.5)
atomic_ref<T>::notify_one and atomic_ref<T>::notify_all[.](#3.sentence-1)
— *end note*]
[4](#4)
[#](http://github.com/Eelis/draft/tree/9adde4bc1c62ec234483e63ea3b70a59724c745a/source/threads.tex#L3182)
A call to an atomic waiting operation on an atomic object M is [*eligible to be unblocked*](#def:eligible_to_be_unblocked "32.5.6Waiting and notifying[atomics.wait]") by a call to an atomic notifying operation on M if there exist side effects X and Y on M such that:
- [(4.1)](#4.1)
the atomic waiting operation has blocked after observing the result of X,
- [(4.2)](#4.2)
X precedes Y in the modification order of M, and
- [(4.3)](#4.3)
Y happens before the call to the atomic notifying operation[.](#4.sentence-1)