# 6 Basics [[basic]](./#basic)
## 6.10 Program execution [[basic.exec]](basic.exec#intro.multithread)
### 6.10.2 Multi-threaded executions and data races [intro.multithread]
#### [6.10.2.1](#general) General [[intro.multithread.general]](intro.multithread.general)
[1](#general-1)
A [*thread of execution*](#def:thread_of_execution "6.10.2.1General[intro.multithread.general]") (also known as a [*thread*](#def:thread "6.10.2.1General[intro.multithread.general]")) is a single flow of
control within a program, including the initial invocation of a specific
top-level function, and recursively including every function invocation
subsequently executed by the thread[.](#general-1.sentence-1)
[*Note [1](#general-note-1)*:
When one thread creates another,
the initial call to the top-level function of the new thread is executed by the
new thread, not by the creating thread[.](#general-1.sentence-2)
— *end note*]
Every thread in a program can
potentially use every object and function in a program[.](#general-1.sentence-3)[37](#footnote-37 "An object with automatic or thread storage duration ([basic.stc]) is associated with one specific thread, and can be accessed by a different thread only indirectly through a pointer or reference ([basic.compound]).")
Under a hosted
implementation, a C++ program can have more than one thread running
concurrently[.](#general-1.sentence-4)
The execution of each thread proceeds as defined by the remainder
of this document[.](#general-1.sentence-5)
The execution of the entire program consists of an execution
of all of its threads[.](#general-1.sentence-6)
[*Note [2](#general-note-2)*:
Usually the execution can be viewed as an
interleaving of all its threads[.](#general-1.sentence-7)
However, some kinds of atomic operations, for
example, allow executions inconsistent with a simple interleaving, as described
below[.](#general-1.sentence-8)
— *end note*]
Under a freestanding implementation, it is implementation-defined whether a program can
have more than one thread of execution[.](#general-1.sentence-9)
[2](#general-2)
For a signal handler that is not executed as a result of a call to the std::raise function, it is unspecified which thread of execution
contains the signal handler invocation[.](#general-2.sentence-1)
[37)](#footnote-37)
An object
with automatic or thread storage duration ([[basic.stc]](basic.stc "6.8.6Storage duration")) is associated with
one specific thread, and can be accessed by a different thread only indirectly
through a pointer or reference ([[basic.compound]](basic.compound "6.9.4Compound types"))[.](#footnote-37.sentence-1)
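The following is a non-normative C++ sketch (not part of the standard text; the function name worker is illustrative) of a second thread of execution reaching another thread's automatic object only indirectly through a pointer, as described in footnote 37.
```cpp
#include <iostream>
#include <thread>

void worker(int* p) {   // top-level function of the new thread of execution
    *p += 1;            // indirect access to main's automatic object via a pointer
}

int main() {
    int local = 41;                   // automatic storage duration, associated with main's thread
    std::thread t(worker, &local);    // the initial call to worker runs on the new thread
    t.join();                         // completion of t happens before the read below
    std::cout << local << '\n';       // prints 42
}
```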
#### [6.10.2.2](#intro.races) Data races [[intro.races]](intro.races)
[1](#intro.races-1)
The value of an object visible to a thread T at a particular point is the
initial value of the object, a value assigned to the object by T, or a
value assigned to the object by another thread, according to the rules below[.](#intro.races-1.sentence-1)
[*Note [1](#intro.races-note-1)*:
In some cases, there might instead be undefined behavior[.](#intro.races-1.sentence-2)
Much of this
subclause is motivated by the desire to support atomic operations with explicit
and detailed visibility constraints[.](#intro.races-1.sentence-3)
However, it also implicitly supports a
simpler view for more restricted programs[.](#intro.races-1.sentence-4)
— *end note*]
[2](#intro.races-2)
Two expression evaluations [*conflict*](#def:conflict "6.10.2.2Data races[intro.races]") if one of them
- [(2.1)](#intro.races-2.1)
modifies ([[defns.access]](defns.access "3.1access")) a memory location ([[intro.memory]](intro.memory "6.8.1Memory model")) or
- [(2.2)](#intro.races-2.2)
starts or ends the lifetime of an object in a memory location
and the other one
- [(2.3)](#intro.races-2.3)
reads or modifies the same memory location or
- [(2.4)](#intro.races-2.4)
starts or ends the lifetime of an object occupying storage that
overlaps with the memory location[.](#intro.races-2.sentence-1)
[*Note [2](#intro.races-note-2)*:
A modification can still conflict
even if it does not alter the value of any bits[.](#intro.races-2.sentence-2)
— *end note*]
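As a non-normative illustration (not part of the standard text), the two assignments below modify the same memory location from different threads, so the evaluations conflict; because neither happens before the other and neither is atomic, this sketch as written contains a data race ([intro.races] p17) and has undefined behavior.
```cpp
#include <thread>

int counter = 0;   // one memory location shared by both threads

int main() {
    std::thread t1([] { counter = 1; });   // modifies the location
    std::thread t2([] { counter = 2; });   // conflicting modification of the same location
    t1.join();
    t2.join();
    // Deliberately racy: shown only to illustrate the definition of "conflict".
}
```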
[3](#intro.races-3)
The library defines a number of atomic operations ([[atomics]](atomics "32.5Atomic operations")) and
operations on mutexes ([[thread]](thread "32Concurrency support library")) that are specially identified as
synchronization operations[.](#intro.races-3.sentence-1)
These operations play a special role in making
assignments in one thread visible to another[.](#intro.races-3.sentence-2)
A synchronization operation on one
or more memory locations is either an acquire operation, a
release operation, or both an acquire and release operation[.](#intro.races-3.sentence-3)
A synchronization
operation without an associated memory location is a fence and can be either an
acquire fence, a release fence, or both an acquire and release fence[.](#intro.races-3.sentence-4)
In
addition, there are relaxed atomic operations, which are not synchronization
operations, and atomic read-modify-write operations, which have special
characteristics[.](#intro.races-3.sentence-5)
[*Note [3](#intro.races-note-3)*:
For example, a call that acquires a mutex will
perform an acquire operation on the locations comprising the mutex[.](#intro.races-3.sentence-6)
Correspondingly, a call that releases the same mutex will perform a release
operation on those same locations[.](#intro.races-3.sentence-7)
Informally, performing a release operation on A forces prior side effects on other memory locations to become visible
to other threads that later perform a consume or an acquire operation on A[.](#intro.races-3.sentence-8)
“Relaxed” atomic operations are not synchronization operations even
though, like synchronization operations, they cannot contribute to data races[.](#intro.races-3.sentence-9)
— *end note*]
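A minimal, non-normative sketch (not part of the standard text) of an acquire operation pairing with a release operation on a single atomic object:
```cpp
#include <atomic>
#include <thread>

std::atomic<int> a{0};

int main() {
    std::thread writer([] {
        a.store(1, std::memory_order::release);          // release operation on a
    });
    std::thread reader([] {
        while (a.load(std::memory_order::acquire) != 1)  // acquire operation on a
            ;                                            // spin until the store is read
    });
    writer.join();
    reader.join();
}
```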
[4](#intro.races-4)
All modifications to a particular atomic object M occur in some
particular total order, called the [*modification order*](#def:modification_order "6.10.2.2Data races[intro.races]") of M[.](#intro.races-4.sentence-1)
[*Note [4](#intro.races-note-4)*:
There is a separate order for each
atomic object[.](#intro.races-4.sentence-2)
There is no requirement that these can be combined into a single
total order for all objects[.](#intro.races-4.sentence-3)
In general this will be impossible since different
threads can observe modifications to different objects in inconsistent orders[.](#intro.races-4.sentence-4)
— *end note*]
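A non-normative sketch (not part of the standard text) of why no single total order over all objects is implied: with relaxed operations, the two reader threads can observe the modifications of x and y in opposite orders, so the outcome r1 == 1, r2 == 0, r3 == 1, r4 == 0 is permitted.
```cpp
#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2, r3, r4;   // each written by exactly one reader thread

int main() {
    std::thread w1([] { x.store(1, std::memory_order::relaxed); });
    std::thread w2([] { y.store(1, std::memory_order::relaxed); });
    std::thread rd1([] {
        r1 = x.load(std::memory_order::relaxed);   // may see x's store ...
        r2 = y.load(std::memory_order::relaxed);   // ... but not yet y's
    });
    std::thread rd2([] {
        r3 = y.load(std::memory_order::relaxed);   // may see y's store ...
        r4 = x.load(std::memory_order::relaxed);   // ... but not yet x's
    });
    w1.join(); w2.join(); rd1.join(); rd2.join();
}
```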
[5](#intro.races-5)
A [*release sequence*](#def:release_sequence "6.10.2.2Data races[intro.races]") headed
by a release operation A on an atomic object M is a maximal contiguous sub-sequence of side effects in the modification order of M,
where the first operation is A, and
every subsequent operation is an atomic read-modify-write operation[.](#intro.races-5.sentence-1)
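A non-normative sketch (not part of the standard text): the release store heads a release sequence, and the relaxed fetch_add, being an atomic read-modify-write operation, can extend it; an acquire load that takes its value from the fetch_add therefore still synchronizes with the release store ([atomics.order]).
```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> m{0};
int data = 0;

int main() {
    std::thread producer([] {
        data = 42;
        m.store(1, std::memory_order::release);       // release operation A, heads the sequence
    });
    std::thread helper([] {
        m.fetch_add(1, std::memory_order::relaxed);   // read-modify-write; may extend the sequence
    });
    std::thread consumer([] {
        if (m.load(std::memory_order::acquire) == 2)  // value written by the fetch_add
            assert(data == 42);                       // visible: the load synchronizes with A
    });
    producer.join(); helper.join(); consumer.join();
}
```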
[6](#intro.races-6)
Certain library calls [*synchronize with*](#def:synchronize_with "6.10.2.2Data races[intro.races]") other library calls performed by
another thread[.](#intro.races-6.sentence-1)
For example, an atomic store-release synchronizes with a
load-acquire that takes its value from the store ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#intro.races-6.sentence-2)
[*Note [5](#intro.races-note-5)*:
Except in the specified cases, reading a later value does not
necessarily ensure visibility as described below[.](#intro.races-6.sentence-3)
Such a requirement would
sometimes interfere with efficient implementation[.](#intro.races-6.sentence-4)
— *end note*]
[*Note [6](#intro.races-note-6)*:
The
specifications of the synchronization operations define when one reads the value
written by another[.](#intro.races-6.sentence-5)
For atomic objects, the definition is clear[.](#intro.races-6.sentence-6)
All operations
on a given mutex occur in a single total order[.](#intro.races-6.sentence-7)
Each mutex acquisition “reads
the value written” by the last mutex release[.](#intro.races-6.sentence-8)
— *end note*]
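A non-normative mutex sketch (not part of the standard text): each unlock is a release operation and each lock an acquire operation on the locations comprising the mutex, so the reader's lock "reads the value written" by the writer's unlock and synchronizes with it.
```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::mutex mtx;
int shared_value = 0;
bool ready = false;

int main() {
    std::thread writer([] {
        std::lock_guard<std::mutex> g(mtx);               // acquire
        shared_value = 7;
        ready = true;
    });                                                   // release when g is destroyed
    std::thread reader([] {
        for (;;) {
            std::lock_guard<std::mutex> g(mtx);           // acquire
            if (ready) { assert(shared_value == 7); return; }
        }
    });
    writer.join();
    reader.join();
}
```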
[7](#intro.races-7)
An evaluation A [*happens before*](#def:happens_before "6.10.2.2Data races[intro.races]") an evaluation B (or, equivalently, B happens after A)
if either
- [(7.1)](#intro.races-7.1)
A is sequenced before B, or
- [(7.2)](#intro.races-7.2)
A synchronizes with B, or
- [(7.3)](#intro.races-7.3)
A happens before X and X happens before B[.](#intro.races-7.sentence-1)
[*Note [7](#intro.races-note-7)*:
An evaluation does not happen before itself[.](#intro.races-7.sentence-2)
— *end note*]
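A non-normative sketch (not part of the standard text) of the transitivity in p7: the write to payload is sequenced before the release store, which synchronizes with the acquire load that reads true, which in turn is sequenced before the read of payload; hence the write happens before the read and there is no data race.
```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> flag{false};
int payload = 0;   // non-atomic data handed off through the flag

int main() {
    std::thread t1([] {
        payload = 1;                                    // A: sequenced before the store
        flag.store(true, std::memory_order::release);
    });
    std::thread t2([] {
        while (!flag.load(std::memory_order::acquire))  // synchronizes with the store
            ;
        assert(payload == 1);                           // B: A happens before B
    });
    t1.join();
    t2.join();
}
```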
[8](#intro.races-8)
An evaluation A [*strongly happens before*](#def:strongly_happens_before "6.10.2.2Data races[intro.races]") an evaluation D if, either
- [(8.1)](#intro.races-8.1)
A is sequenced before D, or
- [(8.2)](#intro.races-8.2)
A synchronizes with D, and
both A and D are
sequentially consistent atomic operations ([[atomics.order]](atomics.order "32.5.4Order and consistency")), or
- [(8.3)](#intro.races-8.3)
there are evaluations B and C such that A is sequenced before B, B happens before C, and C is sequenced before D, or
- [(8.4)](#intro.races-8.4)
there is an evaluation B such that A strongly happens before B, and B strongly happens before D[.](#intro.races-8.sentence-1)
[*Note [8](#intro.races-note-8)*:
Informally, if A strongly happens before B,
then A appears to be evaluated before B in all contexts[.](#intro.races-8.sentence-2)
— *end note*]
[9](#intro.races-9)
A [*visible side effect*](#def:side_effects,visible "6.10.2.2Data races[intro.races]") A on a scalar object or bit-field M with respect to a value computation B of M satisfies the
conditions:
- [(9.1)](#intro.races-9.1)
A happens before B and
- [(9.2)](#intro.races-9.2)
there is no other side effect X to M such that A happens before X and X happens before B[.](#intro.races-9.sentence-1)
The value of a non-atomic scalar object or bit-field M, as determined by
evaluation B, is the value stored by the visible side effect A[.](#intro.races-9.sentence-2)
[*Note [9](#intro.races-note-9)*:
If there is ambiguity about which side effect to a
non-atomic object or bit-field is visible, then the behavior is either
unspecified or undefined[.](#intro.races-9.sentence-3)
— *end note*]
[*Note [10](#intro.races-note-10)*:
This states that operations on
ordinary objects are not visibly reordered[.](#intro.races-9.sentence-4)
This is not actually detectable
without data races, but is needed to ensure that data races, as defined
below, and with suitable restrictions on the use of atomics, correspond to data
races in a simple interleaved (sequentially consistent) execution[.](#intro.races-9.sentence-5)
— *end note*]
[10](#intro.races-10)
The value of an
atomic object M, as determined by evaluation B, is the value
stored by some unspecified
side effect A that modifies M, where B does not happen
before A[.](#intro.races-10.sentence-1)
[*Note [11](#intro.races-note-11)*:
The set of such side effects is also restricted by the rest of the rules
described here, and in particular, by the coherence requirements below[.](#intro.races-10.sentence-2)
— *end note*]
[11](#intro.races-11)
If an operation A that modifies an atomic object M happens before
an operation B that modifies M, then A is earlier
than B in the modification order of M[.](#intro.races-11.sentence-1)
[*Note [12](#intro.races-note-12)*:
This requirement is known as write-write coherence[.](#intro.races-11.sentence-2)
— *end note*]
[12](#intro.races-12)
If a value computation A of an atomic object M happens before a
value computation B of M, and A takes its value from a side
effect X on M, then the value computed by B is either
the value stored by X or the value stored by a side effect Y on M,
where Y follows X in the modification order of M[.](#intro.races-12.sentence-1)
[*Note [13](#intro.races-note-13)*:
This requirement is known as read-read coherence[.](#intro.races-12.sentence-2)
— *end note*]
[13](#intro.races-13)
If a value computation A of an atomic object M happens before an
operation B that modifies M, then A takes its value from a side
effect X on M, where X precedes B in the
modification order of M[.](#intro.races-13.sentence-1)
[*Note [14](#intro.races-note-14)*:
This requirement is known as
read-write coherence[.](#intro.races-13.sentence-2)
— *end note*]
[14](#intro.races-14)
If a side effect X on an atomic object M happens before a value
computation B of M, then the evaluation B takes its
value from X or from a side effect Y that follows X in the modification order of M[.](#intro.races-14.sentence-1)
[*Note [15](#intro.races-note-15)*:
This requirement is known as write-read coherence[.](#intro.races-14.sentence-2)
— *end note*]
[15](#intro.races-15)
[*Note [16](#intro.races-note-16)*:
The four preceding coherence requirements effectively disallow
compiler reordering of atomic operations to a single object, even if both
operations are relaxed loads[.](#intro.races-15.sentence-1)
This effectively makes the cache coherence
guarantee provided by most hardware available to C++ atomic operations[.](#intro.races-15.sentence-2)
— *end note*]
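A non-normative sketch (not part of the standard text) of read-read coherence: once the first relaxed load has observed the value 1, the second load, which is sequenced after it, cannot observe the initial value 0 again, even though both loads are relaxed.
```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> v{0};

int main() {
    std::thread writer([] { v.store(1, std::memory_order::relaxed); });
    std::thread reader([] {
        int first  = v.load(std::memory_order::relaxed);
        int second = v.load(std::memory_order::relaxed);   // sequenced after the first load
        if (first == 1)
            assert(second == 1);   // cannot move backwards in v's modification order
    });
    writer.join();
    reader.join();
}
```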
[16](#intro.races-16)
[*Note [17](#intro.races-note-17)*:
The value observed by a load of an atomic depends on the “happens
before” relation, which depends on the values observed by loads of atomics[.](#intro.races-16.sentence-1)
The intended reading is that there must exist an
association of atomic loads with modifications they observe that, together with
suitably chosen modification orders and the “happens before” relation derived
as described above, satisfy the resulting constraints as imposed here[.](#intro.races-16.sentence-2)
— *end note*]
[17](#intro.races-17)
Two actions are [*potentially concurrent*](#def:potentially_concurrent "6.10.2.2Data races[intro.races]") if
- [(17.1)](#intro.races-17.1)
they are performed by different threads, or
- [(17.2)](#intro.races-17.2)
they are unsequenced, at least one is performed by a signal handler, and
they are not both performed by the same signal handler invocation[.](#intro.races-17.sentence-1)
The execution of a program contains a [*data race*](#def:data_race "6.10.2.2Data races[intro.races]") if it contains two
potentially concurrent conflicting actions, at least one of which is not atomic,
and neither happens before the other,
except for the special case for signal handlers described below[.](#intro.races-17.sentence-2)
Any such data race results in undefined
behavior[.](#intro.races-17.sentence-3)
[*Note [18](#intro.races-note-18)*:
It can be shown that programs that correctly use mutexes
and memory_order::seq_cst operations to prevent all data races and use no
other synchronization operations behave as if the operations executed by their
constituent threads were simply interleaved, with each value computation of an
object being taken from the last side effect on that object in that
interleaving[.](#intro.races-17.sentence-4)
This is normally referred to as “sequential consistency”[.](#intro.races-17.sentence-5)
However, this applies only to data-race-free programs, and data-race-free
programs cannot observe most program transformations that do not change
single-threaded program semantics[.](#intro.races-17.sentence-6)
In fact, most single-threaded program
transformations remain possible, since any program that behaves
differently as a result has undefined behavior[.](#intro.races-17.sentence-7)
— *end note*]
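A non-normative sketch (not part of the standard text) contrasting a data race with two race-free alternatives; the racy statement is left commented out so that the program as written has defined behavior.
```cpp
#include <atomic>
#include <mutex>
#include <thread>

int counter_plain = 0;                // (a) unsynchronized
std::mutex m;
int counter_locked = 0;               // (b) protected by a mutex
std::atomic<int> counter_atomic{0};   // (c) atomic

int main() {
    auto work = [] {
        // ++counter_plain;           // (a) conflicting, potentially concurrent, not atomic:
        //                            //     a data race, hence undefined behavior
        {
            std::lock_guard<std::mutex> g(m);
            ++counter_locked;         // (b) the unlock/lock pair orders the increments
        }
        counter_atomic.fetch_add(1, std::memory_order::relaxed);   // (c) atomic, never races
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
}
```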
[18](#intro.races-18)
Two accesses to the same non-bit-field object
of type volatile std::sig_atomic_t do not
result in a data race if both occur in the same thread, even if one or more
occurs in a signal handler[.](#intro.races-18.sentence-1)
For each signal handler invocation, evaluations
performed by the thread invoking a signal handler can be divided into two
groups A and B, such that no evaluations in B happen before evaluations in A, and the
evaluations of such volatile std::sig_atomic_t objects take values as though
all evaluations in A happened before the execution of the signal
handler and the execution of the signal handler happened before all evaluations
in B[.](#intro.races-18.sentence-2)
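A non-normative sketch (not part of the standard text) of the special case above: the handler and the interrupted thread both access the same volatile std::sig_atomic_t object without a data race.
```cpp
#include <csignal>

volatile std::sig_atomic_t got_signal = 0;   // the only object the handler touches

void handler(int) {
    got_signal = 1;   // permitted in a signal handler; no data race with the loop below
}

int main() {
    std::signal(SIGINT, handler);
    while (got_signal == 0) {
        // Wait for SIGINT (e.g., Ctrl-C); each iteration accesses a volatile glvalue,
        // so the loop also satisfies the forward progress assumptions of [intro.progress].
    }
}
```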
[19](#intro.races-19)
[*Note [19](#intro.races-note-19)*:
Compiler transformations that introduce assignments to a potentially
shared memory location that would not be modified by the abstract machine are
generally precluded by this document, since such an assignment might overwrite
another assignment by a different thread in cases in which an abstract machine
execution would not have encountered a data race[.](#intro.races-19.sentence-1)
This includes implementations
of data member assignment that overwrite adjacent members in separate memory
locations[.](#intro.races-19.sentence-2)
Reordering of atomic loads in cases in which the atomics in question
might alias is also generally precluded, since this could violate the coherence
rules[.](#intro.races-19.sentence-3)
— *end note*]
[20](#intro.races-20)
[*Note [20](#intro.races-note-20)*:
It is possible that transformations that introduce a speculative read of a potentially
shared memory location do not preserve the semantics of the C++ program as
defined in this document, since they potentially introduce a data race[.](#intro.races-20.sentence-1)
However,
they are typically valid in the context of an optimizing compiler that targets a
specific machine with well-defined semantics for data races[.](#intro.races-20.sentence-2)
They would be
invalid for a hypothetical machine that is not tolerant of races or provides
hardware race detection[.](#intro.races-20.sentence-3)
— *end note*]
#### [6.10.2.3](#intro.progress) Forward progress [[intro.progress]](intro.progress)
[1](#intro.progress-1)
The implementation may assume that any thread will eventually do one of the
following:
- [(1.1)](#intro.progress-1.1)
terminate,
- [(1.2)](#intro.progress-1.2)
invoke the function std::this_thread::yield ([[thread.thread.this]](thread.thread.this "32.4.5Namespace this_­thread")),
- [(1.3)](#intro.progress-1.3)
make a call to a library I/O function,
- [(1.4)](#intro.progress-1.4)
perform an access through a volatile glvalue,
- [(1.5)](#intro.progress-1.5)
perform an atomic or synchronization operation
other than an atomic modify-write operation ([[atomics.order]](atomics.order "32.5.4Order and consistency")), or
- [(1.6)](#intro.progress-1.6)
continue execution of a trivial infinite loop ([[stmt.iter.general]](stmt.iter.general "8.6.1General"))[.](#intro.progress-1.sentence-1)
[*Note [1](#intro.progress-note-1)*:
This is intended to allow compiler transformations
such as removal, merging, and reordering of empty loops,
even when termination cannot be proven[.](#intro.progress-1.sentence-2)
An affordance is made for trivial infinite loops,
which cannot be removed nor reordered[.](#intro.progress-1.sentence-3)
— *end note*]
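A non-normative sketch (not part of the standard text): the first loop performs none of the actions listed in p1 and is not known to terminate, so the implementation may assume that it eventually exits; the second loop is a trivial infinite loop ([stmt.iter.general]) and may not be removed. The function names are illustrative.
```cpp
unsigned collatz_steps(unsigned n) {
    unsigned steps = 0;
    while (n != 1) {                    // no volatile access, I/O, atomics, or yield:
        n = n % 2 ? 3 * n + 1 : n / 2;  // the implementation may assume this loop exits
        ++steps;
    }
    return steps;
}

void idle_forever() {
    while (true) {                      // trivial infinite loop: constant true condition,
    }                                   // empty body; must be preserved
}

int main() {
    return collatz_steps(27) > 0 ? 0 : 1;   // well-defined: the loop above reaches 1
}
```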
[2](#intro.progress-2)
Executions of atomic functions
that are either defined to be lock-free ([[atomics.flag]](atomics.flag "32.5.10Flag type and operations"))
or indicated as lock-free ([[atomics.lockfree]](atomics.lockfree "32.5.5Lock-free property"))
are [*lock-free executions*](#def:lock-free_execution "6.10.2.3Forward progress[intro.progress]")[.](#intro.progress-2.sentence-1)
- [(2.1)](#intro.progress-2.1)
If there is only one thread that is not blocked ([[defns.block]](defns.block "3.6block"))
in a standard library function,
a lock-free execution in that thread shall complete[.](#intro.progress-2.1.sentence-1)
[*Note [2](#intro.progress-note-2)*:
Concurrently executing threads
might prevent progress of a lock-free execution[.](#intro.progress-2.1.sentence-2)
For example,
this situation can occur
with load-locked store-conditional implementations[.](#intro.progress-2.1.sentence-3)
This property is sometimes termed obstruction-free[.](#intro.progress-2.1.sentence-4)
— *end note*]
- [(2.2)](#intro.progress-2.2)
When one or more lock-free executions run concurrently,
at least one should complete[.](#intro.progress-2.2.sentence-1)
[*Note [3](#intro.progress-note-3)*:
It is difficult for some implementations
to provide absolute guarantees to this effect,
since repeated and particularly inopportune interference
from other threads
could prevent forward progress,
e.g.,
by repeatedly stealing a cache line
for unrelated purposes
between load-locked and store-conditional instructions[.](#intro.progress-2.2.sentence-2)
For implementations that follow this recommendation and
ensure that such effects cannot indefinitely delay progress
under expected operating conditions,
such anomalies
can therefore safely be ignored by programmers[.](#intro.progress-2.2.sentence-3)
Outside this document,
this property is sometimes termed lock-free[.](#intro.progress-2.2.sentence-4)
— *end note*]
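A non-normative sketch (not part of the standard text; the function name record is illustrative), assuming std::atomic&lt;int&gt; is lock-free on the target: each compare_exchange_weak attempt is a lock-free execution, an individual attempt can be obstructed by the other thread, but at least one of the concurrently running executions is expected to complete.
```cpp
#include <atomic>
#include <thread>

std::atomic<int> maximum{0};

void record(int value) {
    int current = maximum.load(std::memory_order::relaxed);
    while (current < value &&
           !maximum.compare_exchange_weak(current, value,
                                          std::memory_order::relaxed)) {
        // The failed exchange reloaded `current`; retry with the fresher value.
    }
}

int main() {
    std::thread t1([] { for (int i = 0; i < 1000; ++i) record(i); });
    std::thread t2([] { for (int i = 1000; i > 0; --i) record(i); });
    t1.join();
    t2.join();
}
```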
[3](#intro.progress-3)
During the execution of a thread of execution, each of the following is termed
an [*execution step*](#def:execution_step "6.10.2.3Forward progress[intro.progress]"):
- [(3.1)](#intro.progress-3.1)
termination of the thread of execution,
- [(3.2)](#intro.progress-3.2)
performing an access through a volatile glvalue,
- [(3.3)](#intro.progress-3.3)
completion of a call to a library I/O function, or
- [(3.4)](#intro.progress-3.4)
completion of an atomic or synchronization operation
other than an atomic modify-write operation ([[atomics.order]](atomics.order "32.5.4Order and consistency"))[.](#intro.progress-3.sentence-1)
[4](#intro.progress-4)
An invocation of a standard library function that blocks ([[defns.block]](defns.block "3.6block"))
is considered to continuously execute execution steps while waiting for the
condition that it blocks on to be satisfied[.](#intro.progress-4.sentence-1)
[*Example [1](#intro.progress-example-1)*:
A library I/O function that blocks until the I/O operation is complete can
be considered to continuously check whether the operation is complete[.](#intro.progress-4.sentence-2)
Each
such check consists of one or more execution steps, for example using
observable behavior of the abstract machine[.](#intro.progress-4.sentence-3)
— *end example*]
[5](#intro.progress-5)
[*Note [4](#intro.progress-note-4)*:
Because of this and the preceding requirement regarding what threads of execution
have to perform eventually, it follows that no thread of execution can execute
forever without an execution step occurring[.](#intro.progress-5.sentence-1)
— *end note*]
[6](#intro.progress-6)
A thread of execution [*makes progress*](#def:make_progress,thread "6.10.2.3Forward progress[intro.progress]") when an execution step occurs or a
lock-free execution does not complete because there are other concurrent threads
that are not blocked in a standard library function (see above)[.](#intro.progress-6.sentence-1)
[7](#intro.progress-7)
For a thread of execution providing [*concurrent forward progress guarantees*](#def:concurrent_forward_progress_guarantees "6.10.2.3Forward progress[intro.progress]"),
the implementation ensures that the thread will eventually make progress for as
long as it has not terminated[.](#intro.progress-7.sentence-1)
[*Note [5](#intro.progress-note-5)*:
This applies regardless of whether or not other threads of execution (if any)
have been or are making progress[.](#intro.progress-7.sentence-2)
To eventually fulfill this requirement means that
this will happen in an unspecified but finite amount of time[.](#intro.progress-7.sentence-3)
— *end note*]
[8](#intro.progress-8)
It is implementation-defined whether the
implementation-created thread of execution that executes main ([[basic.start.main]](basic.start.main "6.10.3.1main function")) and the threads of execution created by std::thread ([[thread.thread.class]](thread.thread.class "32.4.3Class thread"))
or std::jthread ([[thread.jthread.class]](thread.jthread.class "32.4.4Class jthread"))
provide concurrent forward progress guarantees[.](#intro.progress-8.sentence-1)
General-purpose implementations should provide these guarantees[.](#intro.progress-8.sentence-2)
[9](#intro.progress-9)
For a thread of execution providing [*parallel forward progress guarantees*](#def:parallel_forward_progress_guarantees "6.10.2.3Forward progress[intro.progress]"),
the implementation is not required to ensure that the thread will eventually make
progress if it has not yet executed any execution step; once this thread has
executed a step, it provides concurrent forward progress guarantees[.](#intro.progress-9.sentence-1)
[10](#intro.progress-10)
[*Note [6](#intro.progress-note-6)*:
This does not specify a requirement for when to start this thread of execution,
which will typically be specified by the entity that creates this thread of
execution[.](#intro.progress-10.sentence-1)
For example, a thread of execution that provides concurrent forward
progress guarantees and executes tasks from a set of tasks in an arbitrary order,
one after the other, satisfies the requirements of parallel forward progress for
these tasks[.](#intro.progress-10.sentence-2)
— *end note*]
[11](#intro.progress-11)
For a thread of execution providing [*weakly parallel forward progress
guarantees*](#def:weakly_parallel_forward_progress_guarantees "6.10.2.3Forward progress[intro.progress]"), the implementation does not ensure that the thread will eventually
make progress[.](#intro.progress-11.sentence-1)
[12](#intro.progress-12)
[*Note [7](#intro.progress-note-7)*:
Threads of execution providing weakly parallel forward progress guarantees cannot
be expected to make progress regardless of whether other threads make progress or
not; however, blocking with forward progress guarantee delegation, as defined below,
can be used to ensure that such threads of execution make progress eventually[.](#intro.progress-12.sentence-1)
— *end note*]
[13](#intro.progress-13)
Concurrent forward progress guarantees are stronger than parallel forward progress
guarantees, which in turn are stronger than weakly parallel forward progress
guarantees[.](#intro.progress-13.sentence-1)
[*Note [8](#intro.progress-note-8)*:
For example, some kinds of synchronization between threads of execution might only
make progress if the respective threads of execution provide parallel forward progress
guarantees, but will fail to make progress under weakly parallel guarantees[.](#intro.progress-13.sentence-2)
— *end note*]
[14](#intro.progress-14)
When a thread of execution P is specified to [*block with forward progress guarantee delegation*](#def:block_(execution),with_forward_progress_guarantee_delegation "6.10.2.3Forward progress[intro.progress]") on the completion of a set S of threads of execution,
then throughout the whole time of P being blocked on S,
the implementation shall ensure that the forward progress guarantees
provided by at least one thread of execution in S is at least as strong as P's forward progress guarantees[.](#intro.progress-14.sentence-1)
[*Note [9](#intro.progress-note-9)*:
It is unspecified which thread or threads of execution in S are chosen
and for which number of execution steps[.](#intro.progress-14.sentence-2)
The strengthening is not permanent and
not necessarily in place for the rest of the lifetime of the affected thread of
execution[.](#intro.progress-14.sentence-3)
As long as P is blocked, the implementation has to eventually
select and potentially strengthen a thread of execution in S[.](#intro.progress-14.sentence-4)
— *end note*]
Once a thread of execution in S terminates, it is removed from S[.](#intro.progress-14.sentence-5)
Once S is empty, P is unblocked[.](#intro.progress-14.sentence-6)
[15](#intro.progress-15)
[*Note [10](#intro.progress-note-10)*:
A thread of execution B thus can temporarily provide an effectively
stronger forward progress guarantee for a certain amount of time, due to a
second thread of execution A being blocked on it with forward
progress guarantee delegation[.](#intro.progress-15.sentence-1)
In turn, if B then blocks with
forward progress guarantee delegation on C, this can also temporarily
provide a stronger forward progress guarantee to C[.](#intro.progress-15.sentence-2)
— *end note*]
[16](#intro.progress-16)
[*Note [11](#intro.progress-note-11)*:
If all threads of execution in S finish executing (e.g., they terminate
and do not use blocking synchronization incorrectly), then P's execution
of the operation that blocks with forward progress guarantee delegation will not
result in P's progress guarantee being effectively weakened[.](#intro.progress-16.sentence-1)
— *end note*]
[17](#intro.progress-17)
[*Note [12](#intro.progress-note-12)*:
This does not remove any constraints regarding blocking synchronization for
threads of execution providing parallel or weakly parallel forward progress
guarantees because the implementation is not required to strengthen a particular
thread of execution whose too-weak progress guarantee is preventing overall progress[.](#intro.progress-17.sentence-1)
— *end note*]
[18](#intro.progress-18)
An implementation should ensure that the last value (in modification order)
assigned by an atomic or synchronization operation will become visible to all
other threads in a finite period of time[.](#intro.progress-18.sentence-1)
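A non-normative sketch (not part of the standard text; the function names are illustrative) of why this recommendation matters in practice: once some thread calls request_stop, the stored value should become visible to the spinning thread within a finite period of time, so the wait loop eventually exits.
```cpp
#include <atomic>
#include <thread>

std::atomic<bool> stop{false};

void request_stop() {
    stop.store(true, std::memory_order::relaxed);   // last value in stop's modification order
}

void spin_until_stopped() {
    while (!stop.load(std::memory_order::relaxed)) {
        // Each iteration performs an atomic operation, and the store above
        // should become visible here in a finite period of time.
    }
}

int main() {
    std::thread waiter(spin_until_stopped);
    request_stop();
    waiter.join();
}
```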