Guest wrote:Here are the results of a small simulation experiment. For each of the four setting pairs I generated one pair of measurement outcomes. They happened to be as follows:
a, b: +1, +1
a, b': +1, -1
a', b: -1, +1
a', b': +1, -1
I find correlations +1, -1, -1, -1 and CHSH = 4, which is much larger than 2 sqrt 2.
Please explain. Clearly, this pattern could just as well continue for much larger numbers of measurements. It is always possible to get CHSH = 4. It is always possible to get CHSH = -4. With many, many observations one can get values in between, spread out as evenly as you like.
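The Guest's point, that a single outcome pair per setting makes each "correlation" a bare product of two +/-1 numbers, is easy to check in a few lines. This is a sketch with hypothetical outcomes and the standard sign convention S = E1 + E2 + E3 - E4, not the Guest's exact data:

```python
# Hypothetical outcomes (not the Guest's data), one (A, B) pair per
# setting combination, chosen so the standard combination hits 4.
outcomes = {
    ("a", "b"):   (+1, +1),
    ("a", "b'"):  (+1, +1),
    ("a'", "b"):  (+1, +1),
    ("a'", "b'"): (+1, -1),
}
E = {pair: A * B for pair, (A, B) in outcomes.items()}  # each E is +/-1
S = E[("a", "b")] + E[("a", "b'")] + E[("a'", "b")] - E[("a'", "b'")]
print(S)  # 4
```

With only one product per term, each term is +/-1 independently of the others, so nothing stops the signed sum from reaching +/-4.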
jreed wrote:Guest wrote:Here are the results of a small simulation experiment. For each of the four setting pairs I generated one pair of measurement outcomes. They happened to be as follows:
a, b: +1, +1
a, b': +1, -1
a', b: -1, +1
a', b': +1, -1
I find correlations +1, -1, -1,-1 and CHSH = 4 which is much larger than 2 sqrt 2.
Please explain. Clearly, this pattern could just as well continue for much larger numbers of measurements. It is always possible to get CHSH = 4. It is always possible to get CHSH = -4. With many, many observations one can get values in between, spread out as evenly as you like.
What are you using for the CHSH inequality? Here's an easy explanation of the inequality. Suppose I have two sets of measurements, A, B and A', B'. Then the CHSH inequality is:
-2 <= <M> <= 2, where M = AB + AB' + A'B - A'B'.
Since the measurements must all be +1 or -1, the value of M is always equal to plus or minus 2. This can be easily seen by rewriting M as:
M = A(B + B') + A'(B - B'),
since one of (B + B') and (B - B') must be 0 and the other plus or minus 2.
When averaging over a large number of experiments, we obtain <M> greater than or equal to -2 and less than or equal to 2. That's all there is to it.
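The rewriting above can be checked exhaustively (a quick sketch, not part of the original post):

```python
from itertools import product

# Exhaustive check of the rewriting M = A(B + B') + A'(B - B'):
# for every assignment of +/-1 values to A, A', B, B', M is exactly +/-2,
# because one of (B + B') and (B - B') is 0 and the other is +/-2.
for A, Ap, B, Bp in product([+1, -1], repeat=4):
    M = A*B + A*Bp + Ap*B - Ap*Bp
    assert M == A*(B + Bp) + Ap*(B - Bp)  # the algebraic identity
    assert abs(M) == 2
print("M = +/-2 in all 16 cases")
```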
Joy Christian wrote:jreed wrote:What are you using for the CHSH inequality? Here's an easy explanation of the inequality. Suppose I have two sets of measurements, A, B and A', B'. Then the CHSH inequality is:
-2 <= <M> <= 2, where M = AB + AB' + A'B - A'B'.
Since the measurements must all be +1 or -1, the value of M is always equal to plus or minus 2. This can be easily seen by rewriting M as:
M = A(B + B') + A'(B - B').
When averaging over a large number of experiments, we obtain <M> greater than or equal to -2 and less than or equal to 2. That's all there is to it.
The above argument is fundamentally flawed, as Fred has explained above. The elementary mistake in it is the same mistake that Gill et al. keep making, and it is the same mistake that all Bell devotees keep making. It has been repeatedly explained on these pages, especially by "minkwe." See viewtopic.php?f=6&t=181#p4912.
I have explained the mistake in the above argument in my own way in this paper. See, especially, Eqs. (12) to (26). In essence the seemingly innocent step from the first expression above to the second by "rewriting M" is a cheat. Physically it is nonsense, as I have explained in my paper. All experiments to date have rejected it.
More specifically, the correct expression of M is actually the following:
M = <AB> + <AB'> + <A'B> - <A'B'>.
The cheat in the above argument is in surreptitiously replacing this sum of four separate averages with the following single average:
M = <AB + AB' + A'B - A'B'>.
Without this replacement the last step does not follow. But this innocent-looking replacement is not justified for the physical experiments, as explained in my paper.
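The distinction being debated can be made concrete with a small sketch (my own illustration, using random +/-1 outcomes rather than any poster's model): when every run yields all four products, the sum of four averages equals the single average exactly, by linearity of the mean, and each run contributes exactly +/-2; when each term instead comes from its own disjoint runs, no per-run bound applies, and with one run per term the sum can reach 4.

```python
import random
random.seed(2)

N = 1000
# Each run yields values A, A', B, B', drawn as random +/-1 here.
runs = [[random.choice([1, -1]) for _ in range(4)] for _ in range(N)]

# Same runs for all four terms: the pooled single average...
single = sum(A*B + A*Bp + Ap*B - Ap*Bp for A, Ap, B, Bp in runs) / N
# ...equals the sum of four separately computed averages, by linearity.
separate_same = (sum(A*B for A, Ap, B, Bp in runs) / N
                 + sum(A*Bp for A, Ap, B, Bp in runs) / N
                 + sum(Ap*B for A, Ap, B, Bp in runs) / N
                 - sum(Ap*Bp for A, Ap, B, Bp in runs) / N)
assert abs(single - separate_same) < 1e-12
assert abs(single) <= 2  # each run contributes exactly +/-2

# Disjoint runs, one per term: each term is an independent +/-1,
# so the signed sum can reach 4 (cf. the Guest's post above).
extreme = (+1 * +1) + (+1 * +1) + (+1 * +1) - (+1 * -1)
print(single, separate_same, extreme)  # extreme == 4
```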
jreed wrote:In your so-called correct expression M will always tend to zero. Recall that in EPR experiments, Alice and Bob are free to choose the angles of their detectors. This means that the averages <AB>, <AB'>, <A'B> and <A'B'> will all go to zero since they are uncorrelated if Alice and Bob's choices are random and uncorrelated.
jreed wrote:I just finished some simulations to check this out. I used your algorithm with 10,000 trials and wrote two programs. One program calculated CHSH using:
<AB> + <AB'> + <A'B> - <A'B'>
and the other used:
<AB + AB' + A'B - A'B'>.
I was careful to initialize the random variables prior to the calculation of each term in the first program, and only initialized them once in the second program. The detector angles were not changed randomly. The spin vector is the only random variable.
The results indicate that the values of CHSH are identical up to random noise, close to 1.4 for each program. Since zeros were not removed, the CHSH values don't violate the inequality |CHSH| <= 2. Either way seems to work, so your problem in the posting above is of no concern.
Joy Christian wrote:jreed wrote:I just finished some simulations to check this out. I used your algorithm with 10,000 trials and wrote two programs. One program calculated CHSH using:
<AB> + <AB'> + <A'B> - <A'B'>
and the other used:
<AB + AB' + A'B - A'B'>.
I was careful to initialize the random variables prior to the calculation of each term in the first program, and only initialized them once in the second program. The detector angles were not changed randomly. The spin vector is the only random variable.
The results indicate that the values of CHSH are identical up to random noise, close to 1.4 for each program. Since zeros were not removed, the CHSH values don't violate the inequality |CHSH| <= 2. Either way seems to work, so your problem in the posting above is of no concern.
Garbage in, garbage out.
jreed wrote:"Garbage in, garbage out". Is that the best you can do as a response? Then I must assume that you agree with my statement that it doesn't make any difference which way CHSH is computed. I'm happy to see you understand this.
//Adaptation of Albert Jan Wonnink's original code
//http://challengingbell.blogspot.com/2015/03/numerical-validation-of-vanishing-of.html
function getRandomLambda()
{
    if (rand() > 0.5) {return 1;} else {return -1;}
}
function getRandomUnitVector() //uniform random unit vector:
//http://mathworld.wolfram.com/SpherePointPicking.html
{
    v = randGaussStd()*e1 + randGaussStd()*e2 + randGaussStd()*e3;
    return normalize(v);
}
batch test()
{
    set_window_title("Test of Joy Christian's CHSH derivation");
    N = 20000;                  //number of iterations (trials)
    I = e1^e2^e3;               //the unit pseudoscalar (trivector)
    s = 0;
    //detector settings a, b, a', b':
    a1 = 1.00*e1 + 0.01*e2 + 0.01*e3;
    b1 = 0.707*e1 + 0.707*e2 + 0.01*e3;
    a2 = 0.01*e1 + 1.00*e2 + 0.01*e3;
    b2 = 0.707*e1 - 0.707*e2 + 0.01*e3;
    for (nn = 0; nn < N; nn = nn + 1)   //perform the experiment N times
    {
        lambda = getRandomLambda();     //lambda is a fair coin,
                                        //resulting in +1 or -1
        mu = lambda * I;                //calculate the lambda dependent mu
        A1 = -mu.a1;                    //measurement bivectors
        A2 = -mu.a2;
        B1 = mu.b1;
        B2 = mu.b2;
        q = 0;
        //lambda-dependent ordering of the geometric products:
        if (lambda == 1) {q = (A1 B1) + (A1 B2) + (A2 B1) - (A2 B2);}
        else             {q = (B1 A1) + (B2 A1) + (B1 A2) - (B2 A2);}
        s = s + q;
    }
    mean_F_A_B = s / N;
    print(mean_F_A_B, "f");
    prompt();
}
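As a cross-check, here is a minimal Python/numpy sketch of the same computation; it is my own reduction, not code from the thread. It uses the 3-D geometric-algebra identity (I u)(I v) = -uv for vectors u, v (I being the central pseudoscalar with I^2 = -1), which reduces each ordered product in the loop above to the scalar a.b plus a lambda-signed wedge term, written here as its dual vector via a cross product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Detector settings copied from the GAViewer script (not normalized there either).
a1 = np.array([1.00, 0.01, 0.01])
b1 = np.array([0.707, 0.707, 0.01])
a2 = np.array([0.01, 1.00, 0.01])
b2 = np.array([0.707, -0.707, 0.01])

def gp(u, v, lam):
    """Scalar part u.v and lambda-signed bivector part of the ordered
    product, the bivector stored as its dual vector (a cross product)."""
    return np.dot(u, v), lam * np.cross(u, v)

N = 20000
s_scalar = 0.0
s_bivec = np.zeros(3)
for _ in range(N):
    lam = 1 if rng.random() > 0.5 else -1   # the fair coin lambda
    for u, v, c in ((a1, b1, 1.0), (a1, b2, 1.0), (a2, b1, 1.0), (a2, b2, -1.0)):
        sc, bv = gp(u, v, lam)
        s_scalar += c * sc
        s_bivec += c * bv

print(s_scalar / N)   # scalar part, ~2.8283 with these settings
print(s_bivec / N)    # bivector part, averages toward zero
```

Under this reduction the scalar part is the same every trial, a1.b1 + a1.b2 + a2.b1 - a2.b2 ~ 2.8283, and only the wedge term flips sign with lambda, which is why the bivector part of mean_F_A_B vanishes as N grows.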
jreed wrote:"Garbage in, garbage out". Is that the best you can do as a response?
Joy Christian wrote:PS: See also my response to "minkwe" in the previous thread where I point out the difference between commuting raw scores and non-commuting standard scores in the present context: viewtopic.php?f=6&t=196&start=40#p5426.
FrediFizzx wrote:I think you and Heine have misread what Joy meant. I believe what Joy meant is that in,
<AB> + <AB'> + <A'B> - <A'B'>
the expectation terms are to be taken as independent terms. So you can't factor like you do in the overall average. IOW, probably better expressed as something like,
<A1B1> + <A2B2> + <A3B3> - <A4B4>,
with a result of mean_F_A_B = 2.828200 + -0.000053*e2^e3 + -0.000022*e3^e1; that is, the average is being taken across the sum of expectation terms and not individually. But I don't think that was really the issue here.
Ok, let's get back on topic now. And with,
<A1B1> + <A2B2> + <A3B3> - <A4B4>
it is possible to get 4. But with,
<AB> + <AB'> + <A'B> - <A'B'>
it is only possible to get 2 sqrt 2 as the maximum value. Of course there is a subtle cheat involved to be able to get that. What is it? Let's see if you were paying attention when Michel explained it.
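For reference, 2 sqrt 2 = 2.8284... is the value the familiar quantum singlet correlation E(a,b) = -cos(a-b) reaches at the standard CHSH angles; a quick check (my own sketch, not from the thread):

```python
import math

def E(a, b):
    # singlet correlation as a function of the two analyzer angles
    return -math.cos(a - b)

# Standard CHSH angle choices (in radians).
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # 2.8284271..., i.e. 2*sqrt(2)
```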
jreed wrote:There is no A3, B3, A4, or B4. In these experiments there are two sets of angles, {a,b} and {a',b'} which the detectors are set to in each experiment and can be switched back and forth.
Return to Sci.Physics.Foundations