author     Johan Lorensson <lateralusx.github@gmail.com>  2017-10-19 17:43:03 +0300
committer  Zoltan Varga <vargaz@gmail.com>                2017-10-19 17:43:03 +0300
commit     0f9bbb90fbb81ebc6892e012b3bf89ea26a25110 (patch)
tree       7b37a462347b5f2945e270c2813843f9b402b0b8 /support
parent     1e29ed0d7e5bc917d3953760ba9ecb924f1abec0 (diff)
[runtime] Rename atomic functions from the win32 style naming to mono_atomic_<op>_<type>, with a consistent signature on all platforms, including Windows implementation. (#5767)
* [runtime] Rename atomic functions from the win32 style naming to mono_atomic_<op>_<type>, with a consistent signature on all platforms. This fixes a large number of warnings on windows.
* Add Windows implementation of mono_atomic_*.
* Windows implementation of mono_atomic_* inline functions.
* Fixed naming typos in mono_atomic_xchg_i32Add and mono_atomic_xchg_i32Add64.
* Fixed some additional signed/unsigned/volatile warnings when using mono_atomic_*.
* Fixed some additional smaller warnings.
* Fixed the Interlocked* to mono_atomic_* name change in signal.c.
* Additional name adjustment of atomics.
Aligning more closely with the C11/C++11 standard naming:
mono_atomic_xchg_add_i32|i64 -> mono_atomic_fetch_add_i32|i64
Changed from mono_atomic_add to mono_atomic_fetch_add in cases where the return
value is not used.
Also includes a small mingw build fix on Windows.
* Aligned loads with C++11 implementation using explicit compiler barrier.
On x86/x64 Windows, reading a properly aligned 8-, 16-, or 32-bit volatile
variable (with the /volatile:ms extension) has acquire semantics. On x64
this also covers properly aligned 64-bit volatile variables.
The C++11 implementation does, however, include an explicit _ReadWriteBarrier
in its sequentially consistent load to instruct the compiler not to reorder
loads/stores. Since the compiler supports two modes of volatile behavior
(ms/iso), this additional barrier is probably there for consistency,
independent of the behavior of the volatile keyword.
NOTE: the x86/x64 CPU architecture makes strong ordering guarantees for
load/store memory operations, so issuing a CPU memory barrier for loads
should not be needed (and is not done in the C++11 atomics implementation).
This commit also adds a couple of optimizations: 32-bit loads (and, on x64,
64-bit loads) now use the same plain-load pattern as 8- and 16-bit variables,
so no Interlocked* calls are needed to load them.
Diffstat (limited to 'support')
-rw-r--r--  support/signal.c  10
1 file changed, 5 insertions, 5 deletions
diff --git a/support/signal.c b/support/signal.c
index 993e2e21661..426e268f43e 100644
--- a/support/signal.c
+++ b/support/signal.c
@@ -115,12 +115,12 @@ int Mono_Posix_FromRealTimeSignum (int offset, int *r)
 // We can still use atomic.h because that's all inline functions--
 // unless WAPI_NO_ATOMIC_ASM is defined, in which case atomic.h calls linked functions.
 #ifndef WAPI_NO_ATOMIC_ASM
-	#define mph_int_get(p) InterlockedExchangeAdd ((p), 0)
-	#define mph_int_inc(p) InterlockedIncrement ((p))
-	#define mph_int_dec_test(p) (InterlockedDecrement ((p)) == 0)
-	#define mph_int_set(p,n) InterlockedExchange ((p), (n))
+	#define mph_int_get(p) mono_atomic_fetch_add_i32 ((p), 0)
+	#define mph_int_inc(p) mono_atomic_inc_i32 ((p))
+	#define mph_int_dec_test(p) (mono_atomic_dec_i32 ((p)) == 0)
+	#define mph_int_set(p,n) mono_atomic_xchg_i32 ((p), (n))
 	// Pointer, original, new
-	#define mph_int_test_and_set(p,o,n) (o == InterlockedCompareExchange ((p), (n), (o)))
+	#define mph_int_test_and_set(p,o,n) (o == mono_atomic_cas_i32 ((p), (n), (o)))
 #elif GLIB_CHECK_VERSION(2,4,0)
 	#define mph_int_get(p) g_atomic_int_get ((p))
 	#define mph_int_inc(p) do {g_atomic_int_inc ((p));} while (0)