[/
 / Copyright (c) 2009 Helge Bahmann
 / Copyright (c) 2014, 2017, 2018 Andrey Semashev
 /
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 /]

[library Boost.Atomic
    [quickbook 1.4]
    [authors [Bahmann, Helge][Semashev, Andrey]]
    [copyright 2011 Helge Bahmann]
    [copyright 2012 Tim Blechmann]
    [copyright 2013, 2017, 2018 Andrey Semashev]
    [id atomic]
    [dirname atomic]
    [purpose Atomic operations]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        [@http://www.boost.org/LICENSE_1_0.txt])
    ]
]

[section:introduction Introduction]

[section:introduction_presenting Presenting Boost.Atomic]

[*Boost.Atomic] is a library that provides [^atomic]
data types and operations on these data types, as well as memory
ordering constraints required for coordinating multiple threads through
atomic variables. It implements the interface as defined by the C++11
standard, but makes this functionality available for platforms lacking
system/compiler support for this particular C++11 feature.

Users of this library should already be familiar with concurrency
in general, as well as elementary concepts such as "mutual exclusion".

The implementation makes use of processor-specific instructions where
possible (via inline assembler, platform libraries or compiler
intrinsics), and falls back to "emulating" atomic operations through
locking.

[endsect]

[section:introduction_purpose Purpose]

Operations on "ordinary" variables are not guaranteed to be atomic.
This means that with [^int n=0] initially, two threads concurrently
executing

[c++]

    void function()
    {
        n++;
    }

might result in [^n==1] instead of [^n==2]: Each thread will read the
old value into a processor register, increment it and write the result
back. Both threads may therefore write [^1], unaware that the other thread
is doing likewise.

Declaring [^atomic<int> n=0] instead, the same operation on
this variable will always result in [^n==2], as each operation on this
variable is ['atomic]: each operation behaves as if it
were strictly sequentialized with respect to the others.

Atomic variables are useful for two purposes:

* as a means for coordinating multiple threads via custom
  coordination protocols
* as faster alternatives to "locked" access to simple variables

Take a look at the [link atomic.usage_examples examples] section
for common patterns.

[endsect]

[endsect]

[section:thread_coordination Thread coordination using Boost.Atomic]

The most common use of [*Boost.Atomic] is to realize custom
thread synchronization protocols: The goal is to coordinate
accesses of threads to shared variables in order to avoid
"conflicts". The programmer must be aware that
compilers, CPUs and cache hierarchies
may generally reorder memory references at will.
As a consequence a program such as:

[c++]

    int x = 0, y = 0;

    thread1:
        x = 1;
        y = 1;

    thread2:
        if (y == 1) {
            assert(x == 1);
        }

might indeed fail, as there is no guarantee that the read of `x`
by thread2 "sees" the write by thread1.

[*Boost.Atomic] uses a synchronization concept based on the
['happens-before] relation to describe the guarantees under
which situations such as the above one cannot occur.

The remainder of this section will discuss ['happens-before] in
a "hands-on" way instead of giving a fully formalized definition.
The reader is encouraged to additionally have a
look at the discussion of the correctness of a few of the
[link atomic.usage_examples examples] afterwards.

[section:mutex Enforcing ['happens-before] through mutual exclusion]

As an introductory example to understand how arguing using
['happens-before] works, consider two threads synchronizing
using a common mutex:

[c++]

    mutex m;

    thread1:
        m.lock();
        ... /* A */
        m.unlock();

    thread2:
        m.lock();
        ... /* B */
        m.unlock();

The "lockset-based intuition" would be to argue that A and B
cannot be executed concurrently as the code paths require a
common lock to be held.

One can however also arrive at the same conclusion using
['happens-before]: Either thread1 or thread2 will succeed first
at [^m.lock()]. If this is thread1, then as a consequence,
thread2 cannot succeed at [^m.lock()] before thread1 has executed
[^m.unlock()], consequently A ['happens-before] B in this case.
By symmetry, if thread2 succeeds at [^m.lock()] first, we can
conclude B ['happens-before] A.

Since this already exhausts all options, we can conclude that
either A ['happens-before] B or B ['happens-before] A must
always hold. Obviously we cannot state ['which] of the two relationships
holds, but either one is sufficient to conclude that A and B
cannot conflict.

Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
implementation to see how the mutual exclusion concept can be
mapped to [*Boost.Atomic].

[endsect]

[section:release_acquire ['happens-before] through [^release] and [^acquire]]

The most basic pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^acquire] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically
  modifies) an atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with
  [^acquire] semantic and
* ... thread2 subsequently performs an operation B,

... then A ['happens-before] B.

Consider the following example:

[c++]

    atomic<int> a(0);

    thread1:
        ... /* A */
        a.fetch_add(1, memory_order_release);

    thread2:
        int tmp = a.load(memory_order_acquire);
        if (tmp == 1) {
            ... /* B */
        } else {
            ... /* C */
        }

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will execute B and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case, thread2 will execute C, but "A ['happens-before] C"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Therefore, A and B cannot conflict, but A and C ['can] conflict.

[endsect]

[section:fences Fences]

Ordering constraints are generally specified together with an access to
an atomic variable. It is however also possible to issue "fence"
operations in isolation; in this case the fence operates in
conjunction with preceding (for `acquire`, `consume` or `seq_cst`
operations) or succeeding (for `release` or `seq_cst` operations) atomic
operations.

The example from the previous section could also be written in
the following way:

[c++]

    atomic<int> a(0);

    thread1:
        ... /* A */
        atomic_thread_fence(memory_order_release);
        a.fetch_add(1, memory_order_relaxed);

    thread2:
        int tmp = a.load(memory_order_relaxed);
        if (tmp == 1) {
            atomic_thread_fence(memory_order_acquire);
            ... /* B */
        } else {
            ... /* C */
        }

This provides the same ordering guarantees as previously, but
elides a (possibly expensive) memory ordering operation in
case C is executed.

[endsect]

[section:release_consume ['happens-before] through [^release] and [^consume]]

The second pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^consume] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically modifies) an
  atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  this value from the same atomic variable with [^consume] semantic and
* ... thread2 subsequently performs an operation B that is ['computationally
  dependent on the value of the atomic variable],

... then A ['happens-before] B.

Consider the following example:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    thread1:
        data[1] = ...; /* A */
        a.store(1, memory_order_release);

    thread2:
        int index = a.load(memory_order_consume);
        complex_data_structure tmp = data[index]; /* B */

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Here, the ['happens-before] relationship helps ensure that any
accesses (presumably writes) to [^data\[1\]] by thread1 happen
before the accesses (presumably reads) to [^data\[1\]] by thread2:
Lacking this relationship, thread2 might see stale/inconsistent
data.

Note that it is essential that operation B is computationally
dependent on the value read from the atomic variable; therefore the
following program would be erroneous:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    thread1:
        data[1] = ...; /* A */
        a.store(1, memory_order_release);

    thread2:
        int index = a.load(memory_order_consume);
        complex_data_structure tmp;
        if (index == 0)
            tmp = data[0];
        else
            tmp = data[1];

[^consume] is most commonly (and most safely! see
[link atomic.limitations limitations]) used with
pointers; compare for example the
[link boost_atomic.usage_examples.singleton singleton with double-checked locking].

[endsect]

[section:seq_cst Sequential consistency]

The third pattern for coordinating threads via [*Boost.Atomic]
uses [^seq_cst] for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently performs any operation with [^seq_cst],
* ... thread1 subsequently performs an operation B,
* ... thread2 performs an operation C,
* ... thread2 subsequently performs any operation with [^seq_cst],
* ... thread2 subsequently performs an operation D,

... then either "A ['happens-before] D" or "C ['happens-before] B" holds.

In this case it does not matter whether thread1 and thread2 operate
on the same or different atomic variables, or use a "stand-alone"
[^atomic_thread_fence] operation.

[endsect]

[endsect]

[section:interface Programming interfaces]

[section:configuration Configuration and building]

The library contains header-only and compiled parts. The library is
header-only for lock-free cases but requires a separate binary to
implement the lock-based emulation. Users are able to detect whether
linking to the compiled part is required by checking the
[link atomic.interface.feature_macros feature macros].

The following macros affect library behavior:

[table
    [[Macro] [Description]]
    [[`BOOST_ATOMIC_NO_CMPXCHG8B`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg8b` instruction used
      to implement 64-bit atomic operations. This is the case with very old CPUs (pre-Pentium).
      The library does not perform runtime detection of this instruction, so running code
      that uses 64-bit atomics on such CPUs will result in crashes, unless this macro is defined.
      Note that the macro does not affect MSVC, GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_CMPXCHG16B`] [Affects 64-bit x86 MSVC and Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg16b` instruction used
      to implement 128-bit atomic operations. This is the case with some early 64-bit AMD CPUs;
      all 64-bit Intel CPUs and current AMD CPUs support this instruction. The library does not
      perform runtime detection of this instruction, so running code that uses 128-bit
      atomics on such CPUs will result in crashes, unless this macro is defined. Note that
      the macro does not affect GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_MFENCE`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `mfence` instruction used
      to implement thread fences. This instruction was added with the SSE2 instruction set extension,
      which has been available in CPUs since Intel Pentium 4. The library does not perform runtime detection
      of this instruction, so running the library code on older CPUs will result in crashes, unless
      this macro is defined. Note that the macro does not affect MSVC, GCC and compatible compilers
      because the library infers this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_FLOATING_POINT`] [When defined, support for floating point operations is disabled.
      Floating point types will be treated similarly to trivially copyable structs and no capability macros
      will be defined.]]
    [[`BOOST_ATOMIC_FORCE_FALLBACK`] [When defined, all operations are implemented with locks.
      This is mostly used for testing and should not be used in real world projects.]]
    [[`BOOST_ATOMIC_DYN_LINK` and `BOOST_ALL_DYN_LINK`] [Control library linking. If defined,
      the library assumes dynamic linking, otherwise static. The latter macro affects all Boost
      libraries, not just [*Boost.Atomic].]]
    [[`BOOST_ATOMIC_NO_LIB` and `BOOST_ALL_NO_LIB`] [Control library auto-linking on Windows.
      When defined, disables auto-linking. The latter macro affects all Boost libraries,
      not just [*Boost.Atomic].]]
]

Besides macros, it is important to specify the correct compiler options for the target CPU.
With GCC and compatible compilers this affects whether particular atomic operations are
lock-free or not.

The Boost building process is described in the [@http://www.boost.org/doc/libs/release/more/getting_started/ Getting Started guide].
For example, you can build [*Boost.Atomic] with the following command line:

[pre
    bjam --with-atomic variant=release instruction-set=core2 stage
]

[endsect]

[section:interface_memory_order Memory order]

    #include <boost/memory_order.hpp>

The enumeration [^boost::memory_order] defines the following
values to represent memory ordering constraints:

[table
    [[Constant] [Description]]
    [[`memory_order_relaxed`] [No ordering constraint.
      Informally speaking, following operations may be reordered before,
      and preceding operations may be reordered after, the atomic
      operation. This constraint is suitable only when
      either a) further operations do not depend on the outcome
      of the atomic operation or b) ordering is enforced through
      stand-alone `atomic_thread_fence` operations. The operation on
      the atomic value itself is still atomic though.
    ]]
    [[`memory_order_release`] [
      Perform `release` operation. Informally speaking,
      prevents all preceding memory operations from being reordered
      past this point.
    ]]
    [[`memory_order_acquire`] [
      Perform `acquire` operation. Informally speaking,
      prevents succeeding memory operations from being reordered
      before this point.
    ]]
    [[`memory_order_consume`] [
      Perform `consume` operation. More relaxed (and
      on some architectures more efficient) than `memory_order_acquire`
      as it only affects succeeding operations that are
      computationally dependent on the value retrieved from
      an atomic variable.
    ]]
    [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operation]]
    [[`memory_order_seq_cst`] [
      Enforce sequential consistency. Implies `memory_order_acq_rel`, but
      additionally enforces a total order for all operations so qualified.
    ]]
]

For compilers that support C++11 scoped enums, the library also defines scoped synonyms
that are preferred in modern programs:

[table
    [[Pre-C++11 constant] [C++11 equivalent]]
    [[`memory_order_relaxed`] [`memory_order::relaxed`]]
    [[`memory_order_release`] [`memory_order::release`]]
    [[`memory_order_acquire`] [`memory_order::acquire`]]
    [[`memory_order_consume`] [`memory_order::consume`]]
    [[`memory_order_acq_rel`] [`memory_order::acq_rel`]]
    [[`memory_order_seq_cst`] [`memory_order::seq_cst`]]
]

See section [link atomic.thread_coordination ['happens-before]] for an explanation
of the various ordering constraints.

[endsect]

[section:interface_atomic_flag Atomic flags]

    #include <boost/atomic/atomic_flag.hpp>

The `boost::atomic_flag` type provides the most basic set of atomic operations
suitable for implementing mutually exclusive access to thread-shared data. The flag
can have one of two possible states: set and clear. The class implements the
following operations:

[table
    [[Syntax] [Description]]
    [
        [`atomic_flag()`]
        [Initialize to the clear state. See the discussion below.]
    ]
    [
        [`bool test_and_set(memory_order order)`]
        [Sets the atomic flag to the set state; returns `true` if the flag had been set prior to the operation]
    ]
    [
        [`void clear(memory_order order)`]
        [Sets the atomic flag to the clear state]
    ]
]

`order` always has `memory_order_seq_cst` as its default value.

Note that, unlike `std::atomic_flag`, which leaves the default-constructed
object uninitialized, the default constructor of `boost::atomic_flag` initializes
the flag to the clear state. This potentially requires dynamic
initialization during the program startup to perform the object initialization, which
makes it unsafe to create global `boost::atomic_flag` objects that are used before
entering `main()`. Some compilers though (especially those supporting C++11 `constexpr`)
may be smart enough to perform flag initialization statically (which is, in C++11 terms,
a constant initialization).

This difference is deliberate and is made to support C++03 compilers. C++11 defines the
`ATOMIC_FLAG_INIT` macro which can be used to statically initialize `std::atomic_flag`
to the clear state like this:

    std::atomic_flag flag = ATOMIC_FLAG_INIT; // constant initialization

This macro cannot be implemented in C++03 because for that `atomic_flag` would have to be
an aggregate type, which it cannot be because it has to prohibit copying and consequently
define the default constructor. Thus the closest equivalent C++03 code using [*Boost.Atomic]
would be:

    boost::atomic_flag flag; // possibly, dynamic initialization in C++03;
                             // constant initialization in C++11

The same code is also valid in C++11, so it can be used universally. However, for
interface parity with `std::atomic_flag`, if possible, the library also defines the
`BOOST_ATOMIC_FLAG_INIT` macro, which is equivalent to `ATOMIC_FLAG_INIT`:

    boost::atomic_flag flag = BOOST_ATOMIC_FLAG_INIT; // constant initialization

This macro is only defined on C++11 compilers. When it is not available,
the library defines `BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`.

[endsect]

[section:interface_atomic_object Atomic objects]

    #include <boost/atomic/atomic.hpp>

[^boost::atomic<['T]>] provides methods for atomically accessing
variables of a suitable type [^['T]]. The type is suitable if
it is /trivially copyable/ (3.9/9 \[basic.types\]). The following are
examples of types satisfying this requirement:

* a scalar type (e.g. integer, boolean, enum or pointer type)
* a [^class] or [^struct] that has no non-trivial copy or move
  constructors or assignment operators, has a trivial destructor,
  and that is comparable via [^memcmp].

Note that classes with virtual functions or virtual base classes
do not satisfy these requirements. Also be warned
that structures with "padding" between data members may compare
non-equal via [^memcmp] even though all members are equal. This may also be
the case with some floating point types, which include padding bits themselves.

[section:interface_atomic_generic [^boost::atomic<['T]>] template class]

All atomic objects support the following operations and properties:

[table
    [[Syntax] [Description]]
    [
        [`atomic()`]
        [Initialize to an unspecified value]
    ]
    [
        [`atomic(T initial_value)`]
        [Initialize to [^initial_value]]
    ]
    [
        [`bool is_lock_free()`]
        [Checks if the atomic object is lock-free; the returned value is consistent with the `is_always_lock_free` static constant, see below]
    ]
    [
        [`T load(memory_order order)`]
        [Return current value]
    ]
    [
        [`void store(T value, memory_order order)`]
        [Write new value to atomic variable]
    ]
    [
        [`T exchange(T new_value, memory_order order)`]
        [Exchange current value with `new_value`, returning the previous value]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back to `expected`. May fail spuriously, so must generally be
         retried in a loop.]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back to `expected`. May fail spuriously, so must generally be
         retried in a loop.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back to `expected`.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`; change it to `desired` if it matches.
         Returns `true` if an exchange has been performed, and always writes the
         previous value back to `expected`.]
    ]
    [
        [`static bool is_always_lock_free`]
        [This static boolean constant indicates if any atomic object of this type is lock-free]
    ]
]

`order` always has `memory_order_seq_cst` as its default value.

The `compare_exchange_weak`/`compare_exchange_strong` variants
taking four parameters differ from the three-parameter variants
in that they allow a different memory ordering constraint to
be specified in case the operation fails.

In addition to these explicit operations, each
[^atomic<['T]>] object also supports
implicit [^store] and [^load] through the use of "assignment"
and "conversion to [^T]" operators. Avoid using these operators,
as they do not allow specifying a memory ordering
constraint, which then always defaults to `memory_order_seq_cst`.

[endsect]

[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]

In addition to the operations listed in the previous section,
[^boost::atomic<['I]>] for integral
types [^['I]], except `bool`, supports the following operations,
which correspond to [^std::atomic<['I]>]:

[table
    [[Syntax] [Description]]
    [
        [`I fetch_add(I v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`I fetch_sub(I v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
    [
        [`I fetch_and(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning previous value]
    ]
    [
        [`I fetch_or(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning previous value]
    ]
    [
        [`I fetch_xor(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning previous value]
    ]
]

Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:

[table
    [[Syntax] [Description]]
    [
        [`I fetch_negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning previous value]
    ]
    [
        [`I fetch_complement(memory_order order)`]
        [Set the variable to the one\'s complement of the current value, returning previous value]
    ]
    [
        [`I negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning the result]
    ]
    [
        [`I add(I v, memory_order order)`]
        [Add `v` to variable, returning the result]
    ]
    [
        [`I sub(I v, memory_order order)`]
        [Subtract `v` from variable, returning the result]
    ]
    [
        [`I bitwise_and(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning the result]
    ]
    [
        [`I bitwise_or(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning the result]
    ]
    [
        [`I bitwise_xor(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning the result]
    ]
    [
        [`I bitwise_complement(memory_order order)`]
        [Set the variable to the one\'s complement of the current value, returning the result]
    ]
    [
        [`void opaque_negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning nothing]
    ]
    [
        [`void opaque_add(I v, memory_order order)`]
        [Add `v` to variable, returning nothing]
    ]
    [
        [`void opaque_sub(I v, memory_order order)`]
        [Subtract `v` from variable, returning nothing]
    ]
    [
        [`void opaque_and(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning nothing]
    ]
    [
        [`void opaque_or(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning nothing]
    ]
    [
        [`void opaque_xor(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning nothing]
    ]
    [
        [`void opaque_complement(memory_order order)`]
        [Set the variable to the one\'s complement of the current value, returning nothing]
    ]
    [
        [`bool negate_and_test(memory_order order)`]
        [Change the sign of the value stored in the variable, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool add_and_test(I v, memory_order order)`]
        [Add `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool sub_and_test(I v, memory_order order)`]
        [Subtract `v` from variable, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool and_and_test(I v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool or_and_test(I v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool xor_and_test(I v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool complement_and_test(memory_order order)`]
        [Set the variable to the one\'s complement of the current value, returning `true` if the result is non-zero and `false` otherwise]
    ]
    [
        [`bool bit_test_and_set(unsigned int n, memory_order order)`]
        [Set bit number `n` in the variable to 1, returning `true` if the bit was previously set to 1 and `false` otherwise]
    ]
    [
        [`bool bit_test_and_reset(unsigned int n, memory_order order)`]
        [Set bit number `n` in the variable to 0, returning `true` if the bit was previously set to 1 and `false` otherwise]
    ]
    [
        [`bool bit_test_and_complement(unsigned int n, memory_order order)`]
        [Change bit number `n` in the variable to the opposite value, returning `true` if the bit was previously set to 1 and `false` otherwise]
    ]
]

[note In Boost.Atomic 1.66 the [^['op]_and_test] operations returned the opposite value (i.e. `true` if the result is zero). This was changed
to the current behavior in 1.67 for consistency with other operations in Boost.Atomic, as well as with conventions taken in the C++ standard library.
Boost.Atomic 1.66 was the only release shipped with the old behavior. Users upgrading from Boost 1.66 to a later release can define the
`BOOST_ATOMIC_HIGHLIGHT_OP_AND_TEST` macro when building their code to generate deprecation warnings on the [^['op]_and_test] function calls
(the functions are not actually deprecated though; this is just a way to highlight their use).]

`order` always has `memory_order_seq_cst` as its default value.

The [^opaque_['op]] and [^['op]_and_test] variants of the operations
may result in more efficient code on some architectures because
the original value of the atomic variable is not preserved. In the
[^bit_test_and_['op]] operations, the bit number `n` starts from 0, which
means the least significant bit, and must not exceed
[^std::numeric_limits<['I]>::digits - 1].

In addition to these explicit operations, each
[^boost::atomic<['I]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
Avoid using these operators, as they do not allow specifying a memory ordering
constraint, which then always defaults to `memory_order_seq_cst`.

[endsect]

[section:interface_atomic_floating_point [^boost::atomic<['floating-point]>] template class]

[note The support for floating point types is optional and can be disabled by defining `BOOST_ATOMIC_NO_FLOATING_POINT`.]

In addition to the operations applicable to all atomic objects,
[^boost::atomic<['F]>] for floating point
types [^['F]] supports the following operations,
which correspond to [^std::atomic<['F]>]:

[table
    [[Syntax] [Description]]
    [
        [`F fetch_add(F v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`F fetch_sub(F v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
]

Additionally, as a [*Boost.Atomic] extension, the following operations are also provided:

[table
    [[Syntax] [Description]]
    [
        [`F fetch_negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning previous value]
    ]
    [
        [`F negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning the result]
    ]
    [
        [`F add(F v, memory_order order)`]
        [Add `v` to variable, returning the result]
    ]
    [
        [`F sub(F v, memory_order order)`]
        [Subtract `v` from variable, returning the result]
    ]
    [
        [`void opaque_negate(memory_order order)`]
        [Change the sign of the value stored in the variable, returning nothing]
    ]
    [
        [`void opaque_add(F v, memory_order order)`]
        [Add `v` to variable, returning nothing]
    ]
    [
        [`void opaque_sub(F v, memory_order order)`]
        [Subtract `v` from variable, returning nothing]
    ]
]

`order` always has `memory_order_seq_cst` as its default value.

The [^opaque_['op]] variants of the operations
may result in more efficient code on some architectures because
the original value of the atomic variable is not preserved.

In addition to these explicit operations, each
[^boost::atomic<['F]>] object also supports operators `+=` and `-=`.
Avoid using these operators, as they do not allow specifying a memory
ordering constraint; the ordering always defaults to `memory_order_seq_cst`.

When using atomic operations with floating point types, bear in mind that [*Boost.Atomic]
always performs bitwise comparison of the stored values. This means that operations like
`compare_exchange*` may fail if the stored value and comparand have different binary
representations, even if they would normally compare equal. This is typically the case when
either of the numbers is [@https://en.wikipedia.org/wiki/Denormal_number denormalized]. It
also means that the behavior with regard to special floating point values like NaN and
signed zero differs from normal C++.

Another source of problems is padding bits that are added to some floating point types for alignment.
One widespread example of that is the Intel x87 extended double format, which is typically stored as 80 bits
of value padded with 16 or 48 unused bits. These padding bits are often uninitialized and contain garbage,
which makes two equal numbers have different binary representations. The library attempts to account for
the known cases of this kind, but in general it is possible that some platforms are not covered. Note that the C++
standard makes no guarantees about reliability of `compare_exchange*` operations in the face of padding or
trap bits.

[endsect]

[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]

In addition to the operations applicable to all atomic objects,
[^boost::atomic<['P]>] for pointer
types [^['P]] (other than pointers to [^void], function or member pointers) supports
the following operations, which correspond to [^std::atomic<['P]>]:

[table
  [[Syntax] [Description]]
  [
    [`P fetch_add(ptrdiff_t v, memory_order order)`]
    [Add `v` to variable, returning previous value]
  ]
  [
    [`P fetch_sub(ptrdiff_t v, memory_order order)`]
    [Subtract `v` from variable, returning previous value]
  ]
]

Similarly to integers, the following [*Boost.Atomic] extensions are also provided:

[table
  [[Syntax] [Description]]
  [
    [`P add(ptrdiff_t v, memory_order order)`]
    [Add `v` to variable, returning the result]
  ]
  [
    [`P sub(ptrdiff_t v, memory_order order)`]
    [Subtract `v` from variable, returning the result]
  ]
  [
    [`void opaque_add(ptrdiff_t v, memory_order order)`]
    [Add `v` to variable, returning nothing]
  ]
  [
    [`void opaque_sub(ptrdiff_t v, memory_order order)`]
    [Subtract `v` from variable, returning nothing]
  ]
  [
    [`bool add_and_test(ptrdiff_t v, memory_order order)`]
    [Add `v` to variable, returning `true` if the result is non-null and `false` otherwise]
  ]
  [
    [`bool sub_and_test(ptrdiff_t v, memory_order order)`]
    [Subtract `v` from variable, returning `true` if the result is non-null and `false` otherwise]
  ]
]

In all these operations, `order` defaults to `memory_order_seq_cst`.

In addition to these explicit operations, each
[^boost::atomic<['P]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=` and `-=`. Avoid using these operators,
as they do not allow specifying a memory ordering constraint;
the ordering always defaults to `memory_order_seq_cst`.

[endsect]

[section:interface_atomic_convenience_typedefs [^boost::atomic<['T]>] convenience typedefs]

For convenience, several shorthand typedefs of [^boost::atomic<['T]>] are provided:

[c++]

    typedef atomic< char > atomic_char;
    typedef atomic< unsigned char > atomic_uchar;
    typedef atomic< signed char > atomic_schar;
    typedef atomic< unsigned short > atomic_ushort;
    typedef atomic< short > atomic_short;
    typedef atomic< unsigned int > atomic_uint;
    typedef atomic< int > atomic_int;
    typedef atomic< unsigned long > atomic_ulong;
    typedef atomic< long > atomic_long;
    typedef atomic< unsigned long long > atomic_ullong;
    typedef atomic< long long > atomic_llong;

    typedef atomic< void* > atomic_address;
    typedef atomic< bool > atomic_bool;
    typedef atomic< wchar_t > atomic_wchar_t;
    typedef atomic< char16_t > atomic_char16_t;
    typedef atomic< char32_t > atomic_char32_t;

    typedef atomic< uint8_t > atomic_uint8_t;
    typedef atomic< int8_t > atomic_int8_t;
    typedef atomic< uint16_t > atomic_uint16_t;
    typedef atomic< int16_t > atomic_int16_t;
    typedef atomic< uint32_t > atomic_uint32_t;
    typedef atomic< int32_t > atomic_int32_t;
    typedef atomic< uint64_t > atomic_uint64_t;
    typedef atomic< int64_t > atomic_int64_t;

    typedef atomic< int_least8_t > atomic_int_least8_t;
    typedef atomic< uint_least8_t > atomic_uint_least8_t;
    typedef atomic< int_least16_t > atomic_int_least16_t;
    typedef atomic< uint_least16_t > atomic_uint_least16_t;
    typedef atomic< int_least32_t > atomic_int_least32_t;
    typedef atomic< uint_least32_t > atomic_uint_least32_t;
    typedef atomic< int_least64_t > atomic_int_least64_t;
    typedef atomic< uint_least64_t > atomic_uint_least64_t;
    typedef atomic< int_fast8_t > atomic_int_fast8_t;
    typedef atomic< uint_fast8_t > atomic_uint_fast8_t;
    typedef atomic< int_fast16_t > atomic_int_fast16_t;
    typedef atomic< uint_fast16_t > atomic_uint_fast16_t;
    typedef atomic< int_fast32_t > atomic_int_fast32_t;
    typedef atomic< uint_fast32_t > atomic_uint_fast32_t;
    typedef atomic< int_fast64_t > atomic_int_fast64_t;
    typedef atomic< uint_fast64_t > atomic_uint_fast64_t;
    typedef atomic< intmax_t > atomic_intmax_t;
    typedef atomic< uintmax_t > atomic_uintmax_t;

    typedef atomic< std::size_t > atomic_size_t;
    typedef atomic< std::ptrdiff_t > atomic_ptrdiff_t;

    typedef atomic< intptr_t > atomic_intptr_t;
    typedef atomic< uintptr_t > atomic_uintptr_t;

The typedefs are provided only if the corresponding type is available.

[endsect]

[endsect]

[section:interface_fences Fences]

    #include <boost/atomic/fences.hpp>

[table
  [[Syntax] [Description]]
  [
    [`void atomic_thread_fence(memory_order order)`]
    [Issue fence for coordination with other threads.]
  ]
  [
    [`void atomic_signal_fence(memory_order order)`]
    [Issue fence for coordination with signal handler (only in same thread).]
  ]
]

[endsect]

[section:feature_macros Feature testing macros]

    #include <boost/atomic/capabilities.hpp>

[*Boost.Atomic] defines a number of macros to allow compile-time
detection of whether an atomic data type is implemented using
"true" atomic operations, or whether an internal "lock" is
used to provide atomicity. The following macros will be
defined to `0` if operations on the data type always
require a lock, to `1` if operations on the data type may
sometimes require a lock, and to `2` if they are always lock-free:

[table
  [[Macro] [Description]]
  [
    [`BOOST_ATOMIC_FLAG_LOCK_FREE`]
    [Indicate whether `atomic_flag` is lock-free]
  ]
  [
    [`BOOST_ATOMIC_BOOL_LOCK_FREE`]
    [Indicate whether `atomic<bool>` is lock-free]
  ]
  [
    [`BOOST_ATOMIC_CHAR_LOCK_FREE`]
    [Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
  ]
  [
    [`BOOST_ATOMIC_CHAR16_T_LOCK_FREE`]
    [Indicate whether `atomic<char16_t>` is lock-free]
  ]
  [
    [`BOOST_ATOMIC_CHAR32_T_LOCK_FREE`]
    [Indicate whether `atomic<char32_t>` is lock-free]
  ]
  [
    [`BOOST_ATOMIC_WCHAR_T_LOCK_FREE`]
    [Indicate whether `atomic<wchar_t>` is lock-free]
  ]
  [
    [`BOOST_ATOMIC_SHORT_LOCK_FREE`]
    [Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
  ]
  [
    [`BOOST_ATOMIC_INT_LOCK_FREE`]
    [Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
  ]
  [
    [`BOOST_ATOMIC_LONG_LOCK_FREE`]
    [Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
  ]
  [
    [`BOOST_ATOMIC_LLONG_LOCK_FREE`]
    [Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
  ]
  [
    [`BOOST_ATOMIC_ADDRESS_LOCK_FREE` or `BOOST_ATOMIC_POINTER_LOCK_FREE`]
    [Indicate whether `atomic<T *>` is lock-free]
  ]
  [
    [`BOOST_ATOMIC_THREAD_FENCE`]
    [Indicate whether the `atomic_thread_fence` function is lock-free]
  ]
  [
    [`BOOST_ATOMIC_SIGNAL_FENCE`]
    [Indicate whether the `atomic_signal_fence` function is lock-free]
  ]
]

In addition to these standard macros, [*Boost.Atomic] also defines a number of extension macros,
which can also be useful. Like the standard ones, these macros are defined to the values `0`, `1` and `2`
to indicate whether the corresponding operations are lock-free or not.

[table
  [[Macro] [Description]]
  [
    [`BOOST_ATOMIC_INT8_LOCK_FREE`]
    [Indicate whether `atomic<int8_type>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_INT16_LOCK_FREE`]
    [Indicate whether `atomic<int16_type>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_INT32_LOCK_FREE`]
    [Indicate whether `atomic<int32_type>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_INT64_LOCK_FREE`]
    [Indicate whether `atomic<int64_type>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_INT128_LOCK_FREE`]
    [Indicate whether `atomic<int128_type>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`]
    [Defined after including `atomic_flag.hpp` if the implementation
    does not support the `BOOST_ATOMIC_FLAG_INIT` macro for static
    initialization of `atomic_flag`. This macro is typically defined
    for pre-C++11 compilers.]
  ]
]

In the table above, `intN_type` is an integer type that occupies exactly `N` contiguous bits of storage and is suitably aligned for atomic operations.

For floating-point types the following macros are similarly defined:

[table
  [[Macro] [Description]]
  [
    [`BOOST_ATOMIC_FLOAT_LOCK_FREE`]
    [Indicate whether `atomic<float>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_DOUBLE_LOCK_FREE`]
    [Indicate whether `atomic<double>` is lock-free.]
  ]
  [
    [`BOOST_ATOMIC_LONG_DOUBLE_LOCK_FREE`]
    [Indicate whether `atomic<long double>` is lock-free.]
  ]
]

These macros are not defined when support for floating point types is disabled by the user.

[endsect]

[endsect]

[section:usage_examples Usage examples]

[include examples.qbk]

[endsect]

[/
[section:platform_support Implementing support for additional platforms]

[include platform.qbk]

[endsect]
]

[/ [xinclude autodoc.xml] ]

[section:limitations Limitations]

While [*Boost.Atomic] strives to implement the atomic operations
from C++11 and later as faithfully as possible, there are a few
limitations that cannot be lifted without compiler support:
* [*Aggregate initialization syntax is not supported]: Since [*Boost.Atomic]
  sometimes uses a storage type that is different from the value type,
  the `atomic<>` template needs an initialization constructor that
  performs the necessary conversion. This makes `atomic<>` a non-aggregate
  type and prohibits aggregate initialization syntax (`atomic<int> a = {10}`).
  [*Boost.Atomic] does support direct and unified initialization syntax though.
  [*Advice]: Always use direct initialization (`atomic<int> a(10)`) or unified
  initialization (`atomic<int> a{10}`) syntax.
* [*Initializing constructor is not `constexpr` for some types]: For value types
  other than integral types and `bool`, the `atomic<>` initializing constructor needs
  to perform runtime conversion to the storage type. This limitation may be
  lifted for more categories of types in the future.
* [*Default constructor is not trivial in C++03]: Because the initializing
  constructor has to be defined in `atomic<>`, the default constructor
  must also be defined. In C++03 the constructor cannot be defined as defaulted
  and therefore it is not trivial. In C++11 the constructor is defaulted (and trivial,
  if the default constructor of the value type is). In any case, the default
  constructor of `atomic<>` performs default initialization of the atomic value,
  as required in C++11. [*Advice]: In C++03, do not use [*Boost.Atomic] in contexts
  where a trivial default constructor is important (e.g. as a global variable which
  is required to be statically initialized).
* [*C++03 compilers may transform computation dependency to control dependency]:
  Crucially, `memory_order_consume` only affects computationally-dependent
  operations, but in general there is nothing preventing a compiler
  from transforming a computation dependency into a control dependency.
  A fully compliant C++11 compiler would be forbidden from such a transformation,
  but in practice most if not all compilers have chosen to promote
  `memory_order_consume` to `memory_order_acquire` instead
  (see [@https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59448 this] gcc bug
  for example). In the current implementation [*Boost.Atomic] follows that trend,
  but this may change in the future.
  [*Advice]: In general, avoid `memory_order_consume` and use `memory_order_acquire`
  instead. Use `memory_order_consume` only in conjunction with
  pointer values, and only if you can ensure that the compiler cannot
  speculate and transform these into control dependencies.
* [*Fence operations may enforce "too strong" compiler ordering]:
  Semantically, `memory_order_acquire`/`memory_order_consume`
  and `memory_order_release` need to restrain reordering of
  memory operations only in one direction. Since in C++03 there is no
  way to express this constraint to the compiler, these act
  as "full compiler barriers" in the C++03 implementation. In corner
  cases this may result in slightly less efficient code than a C++11 compiler
  could generate. [*Boost.Atomic] will use compiler intrinsics, if possible,
  to express the proper ordering constraints.
* [*Atomic operations may enforce "too strong" memory ordering in debug mode]:
  On some compilers, disabling optimizations makes it impossible to provide
  memory ordering constraints as compile-time constants to the compiler intrinsics.
  This causes the compiler to silently ignore the provided constraints and choose
  the "strongest" memory order (`memory_order_seq_cst`) to generate code. Not only
  does this reduce performance, it may also hide bugs in the user's code (e.g. if the user
  used a wrong memory order constraint, which caused a data race).
  [*Advice]: Always test your code with optimizations enabled.
* [*No interprocess fallback]: using `atomic<T>` in shared memory only works
  correctly if `atomic<T>::is_lock_free() == true`.
* [*Signed integers must use [@https://en.wikipedia.org/wiki/Two%27s_complement two's complement]
  representation]: [*Boost.Atomic] makes this requirement in order to implement
  conversions between signed and unsigned integers internally. C++11 requires all
  atomic arithmetic operations on integers to be well defined according to two's complement
  arithmetic, which means that [*Boost.Atomic] has to operate on unsigned integers internally
  to avoid the undefined behavior that results from signed integer overflow. Platforms
  with other signed integer representations are not supported.

[endsect]

[section:porting Porting]

[section:unit_tests Unit tests]

[*Boost.Atomic] provides a unit test suite to verify that the
implementation behaves as expected:

* [*fallback_api.cpp] verifies that the fallback-to-locking aspect
  of [*Boost.Atomic] compiles and has correct value semantics.
* [*native_api.cpp] verifies that all atomic operations have correct
  value semantics (e.g. "fetch_add" really adds the desired value,
  returning the previous). It is a rough "smoke-test" to help weed
  out the most obvious mistakes (for example width overflow,
  signed/unsigned extension, ...).
* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCKFREE] macros
  are set properly according to the expectations for a given
  platform, and that they match up with the [*is_always_lock_free] and
  [*is_lock_free] members of the [*atomic] object instances.
1246 a shared variable, verifying that the operations behave atomic
1247 as appropriate. By nature, this test is necessarily stochastic, and
1248 the test self-calibrates to yield 99% confidence that a
1249 positive result indicates absence of an error. This test is
1250 very useful on uni-processor systems with preemption already.
1251* [*ordering.cpp] lets two threads race against each other accessing
1252 multiple shared variables, verifying that the operations
1253 exhibit the expected ordering behavior. By nature, this test is
1254 necessarily stochastic, and the test attempts to self-calibrate to
1255 yield 99% confidence that a positive result indicates absence
1256 of an error. This only works on true multi-processor (or multi-core)
1257 systems. It does not yield any result on uni-processor systems
1258 or emulators (due to there being no observable reordering even
1259 the order=relaxed case) and will report that fact.

[endsect]

[section:tested_compilers Tested compilers]

[*Boost.Atomic] has been tested on and is known to work on
the following compilers/platforms:

* gcc 4.x: i386, x86_64, ppc32, ppc64, sparcv9, armv6, alpha
* Visual Studio Express 2008/Windows XP, x86, x64, ARM

[endsect]

[section:acknowledgements Acknowledgements]

* Adam Wulkiewicz created the logo used on the [@https://github.com/boostorg/atomic GitHub project page]. The logo was taken from his [@https://github.com/awulkiew/boost-logos collection] of Boost logos.

[endsect]

[endsect]