This is answering just the updated part of your Question.
First of all, the example you have presented is not Java code. Therefore we cannot apply JMM reasoning to it. (Just so that we are clear about this.)
If you want to understand how Java code behaves, forget about memory barriers. The Java Memory Model tells you everything that you need to do in order for memory reads and writes to have guaranteed behavior, and everything you need to know in order to reason about (correct) behavior. So:
- Write your Java code
- Analyze the code to ensure that there are proper happens-before chains in all cases where one thread needs to read a value written by another thread.
- Leave the problem of compiling your (correct) Java code to machine instructions to the compiler.
Looking at the sequences of pseudo-instructions in your example, they don't make much sense. I don't think that a real Java compiler would (internally) use barriers like that when compiling real Java code. Rather, I think there would be a StoreLoad memory barrier after each volatile write and before each volatile read.
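(For the curious: the JSR-133 compiler cookbook describes a conservative barrier placement roughly along the following lines. The field names and the use(...) call are just for illustration, a real JIT is free to do something smarter, and the actual instructions emitted are platform specific.)

    int data;             // plain field
    volatile int flag;    // volatile field

    // writer thread
    data = 42;
    // [StoreStore]  -- earlier plain stores cannot be reordered after the volatile store
    flag = 1;             // volatile store
    // [StoreLoad]   -- conservatively emitted after a volatile store, so it is
    //                  ordered before any later volatile load

    // reader thread
    int f = flag;         // volatile load
    // [LoadLoad] + [LoadStore]  -- later loads/stores cannot be moved above the volatile load
    if (f == 1) {
        use(data);        // hypothetical helper; sees 42 whenever f == 1
    }

But again: this is compiler-internal detail, not something you reason with when writing Java.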
Let's consider some real Java code snippets:
public int a;
public volatile int b;

// thread "one"
{
    a = 1;
    b = 2;
}

// thread "two"
{
    if (b == 2) {
        System.out.println(a);
    }
}
Now, assuming that the volatile read of b in thread "two" sees the value written by thread "one" (i.e. the if condition is true), there will be a happens-before chain like this:
- a = 1 happens-before b = 2 (program order within thread "one")
- b = 2 happens-before b == 2 (the volatile write synchronizes-with the subsequent volatile read of b)
- b == 2 happens-before System.out.println(a) (program order within thread "two")
Since happens-before is transitive, a = 1 happens-before the read of a in the println call. So, unless there is some other code involved, thread "two" is guaranteed to print "1".
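If you want to try this out, here is a minimal self-contained version of the same example. The class name, thread scaffolding, and the change to static fields are my additions; the fields and the two code blocks are as above:

    public class HBExample {

        static int a;
        static volatile int b;

        public static void main(String[] args) throws InterruptedException {
            Thread one = new Thread(() -> {
                a = 1;
                b = 2;
            });
            Thread two = new Thread(() -> {
                if (b == 2) {
                    // If the read of b sees 2, the happens-before chain
                    // guarantees that this prints 1.
                    System.out.println(a);
                }
            });
            one.start();
            two.start();
            one.join();
            two.join();
        }
    }

Note that thread "two" may run before thread "one" and print nothing at all. The JMM guarantee is only that if the read of b sees 2, then the subsequent read of a sees 1.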
Note:
- It is not necessary to consider the memory barriers that the compiler uses when compiling the code.
- The barriers are implementation specific and internal to the compiler.
- If you look at the native code you won't necessarily see memory barriers per se. You will see native instructions whose semantics provide the required ordering; the "barrier" is, in effect, hidden in the code that is emitted.