32 Concurrency support library [thread]
32.5 Atomic operations [atomics]
32.5.4 Order and consistency [atomics.order]
namespace std {
  enum class memory_order : unspecified {
    relaxed = 0, acquire = 2, release = 3, acq_rel = 4, seq_cst = 5
  };
}
The enumeration memory_order specifies the detailed regular (non-atomic) memory synchronization order as defined in [intro.multithread] and may provide for operation ordering.
Its enumerated values and their meanings are as follows:
- memory_order::relaxed: no operation orders memory.
- memory_order::release, memory_order::acq_rel, and memory_order::seq_cst: a store operation performs a release operation on the affected memory location.
- memory_order::acquire, memory_order::acq_rel, and memory_order::seq_cst: a load operation performs an acquire operation on the affected memory location.
[Note 1:
Atomic operations specifying memory_order::relaxed are relaxed with respect to memory ordering.
Implementations must still guarantee that any given atomic access to a particular atomic object be indivisible with respect to all other atomic accesses to that object.
— end note]
An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic operation B that performs an acquire operation on M and takes its value from any side effect in the release sequence headed by A.
An atomic operation A on some atomic object M is coherence-ordered before another atomic operation B on M if
- A is a modification, and B reads the value stored by A, or
- A precedes B in the modification order of M, or
- A and B are not the same atomic read-modify-write operation, and there exists an atomic modification X of M such that A reads the value stored by X and X precedes B in the modification order of M, or
- there exists an atomic modification X of M such that A is coherence-ordered before X and X is coherence-ordered before B.
There is a single total order S on all memory_order::seq_cst operations, including fences, that satisfies the following constraints.
First, if A and B are memory_order::seq_cst operations and A strongly happens before B, then A precedes B in S.
Second, for every pair of atomic operations A and B on an object M, where A is coherence-ordered before B, the following four conditions are required to be satisfied by S:
- if A and B are both memory_order::seq_cst operations, then A precedes B in S; and
- if A is a memory_order::seq_cst operation and B happens before a memory_order::seq_cst fence Y, then A precedes Y in S; and
- if a memory_order::seq_cst fence X happens before A and B is a memory_order::seq_cst operation, then X precedes B in S; and
- if a memory_order::seq_cst fence X happens before A and B happens before a memory_order::seq_cst fence Y, then X precedes Y in S.
[Note 2:
This definition ensures that S is consistent with the modification order of any atomic object M.
It also ensures that a memory_order::seq_cst load A of M gets its value either from the last modification of M that precedes A in S or from some non-memory_order::seq_cst modification of M that does not happen before any modification of M that precedes A in S.
— end note]
[Note 3:
We do not require that S be consistent with "happens before" ([intro.races]).
This allows more efficient implementation of memory_order::acquire and memory_order::release on some machine architectures.
It can produce surprising results when these are mixed with memory_order::seq_cst accesses.
— end note]
[Note 4:
memory_order::seq_cst ensures sequential consistency only for a program that is free of data races and uses exclusively memory_order::seq_cst atomic operations.
Any use of weaker ordering will invalidate this guarantee unless extreme care is used.
In many cases, memory_order::seq_cst atomic operations are reorderable with respect to other atomic operations performed by the same thread.
— end note]
Implementations should ensure that no "out-of-thin-air" values are computed that circularly depend on their own computation.
[Note 5:
For example, with x and y initially zero,

  // Thread 1:
  r1 = y.load(memory_order::relaxed);
  x.store(r1, memory_order::relaxed);

  // Thread 2:
  r2 = x.load(memory_order::relaxed);
  y.store(r2, memory_order::relaxed);

this recommendation discourages producing r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42.
Note that without this restriction, such an execution is possible.
— end note]
[Note 6:
The recommendation similarly disallows r1 == r2 == 42 in the following example, with x and y again initially zero:
  // Thread 1:
  r1 = x.load(memory_order::relaxed);
  if (r1 == 42) y.store(42, memory_order::relaxed);

  // Thread 2:
  r2 = y.load(memory_order::relaxed);
  if (r2 == 42) x.store(42, memory_order::relaxed);

— end note]
Atomic read-modify-write operations shall always read the last value (in the modification order) written before the write associated with the read-modify-write operation.
An atomic modify-write operation is an atomic read-modify-write operation with weaker synchronization requirements as specified in [atomics.fences].
[Note 7:
The intent is for atomic modify-write operations to be implemented using mechanisms that are not ordered, in hardware, by the implementation of acquire fences.
No other semantic or hardware property (e.g., that the mechanism is a far atomic operation) is implied.
â end note]
Recommended practice: The implementation should make atomic stores visible to atomic loads, and atomic loads should observe atomic stores, within a reasonable amount of time.