The Global Consciousness Project is a long-term effort to
learn about a subtle phenomenon or effect. We are looking
where many sensible people will say there is nothing to see;
in other words, we are working at or near the boundaries of
what we know. The first question, then, is whether there is
an effect at all, and since it is at best a subtle one, we
are in the realm of the "low signal-to-noise ratio." Often
written simply as low S/N, this is familiar territory for
many sciences, so there are tools, mainly statistical but
also philosophical, for dealing with it.
The GCP effects are generally very small compared with the
noise. It is a classic low-S/N situation, and it has
implications. The ratio is so small that we cannot reliably
detect the effects of even major events if we look at only
one case; we need a dozen or two events of a similar kind to
get reliable statistical estimates. One implication is that
individual personal influences on the system will be too
small to detect, although they may be contributing to the
overall "global consciousness."
Similarly, the effect of "local" influences is difficult or
impossible to detect or measure, though this is an idea
that comes naturally to many people. The GCP instrument is
designed for a different purpose.
The primary measure we use computes a composite across the
whole network: the average correlation among all pairs of
eggs. This means that a "local" influence (one in close
proximity to a device) does not do what intuition suggests
in terms of our measure. If there is an influence on an egg
in Japan and another on one in Alaska, those influences will
not affect the correlation unless they are similar,
synchronized influences. Of course, that is exactly what we
have in mind when we speak of global consciousness
operationally. We say that global consciousness happens when
large numbers of people are engaged by an event and have the
same response; they share thoughts and emotions. This shared
response may reach a level of coherence capable of
supporting an information (or consciousness) field that
becomes the source of the effects we measure.
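To make the composite concrete, here is a minimal sketch of
an average pairwise-correlation statistic, assuming each egg
contributes a time series of normalized scores. The function
name, the toy data, and the shared-component model are
illustrative assumptions, not the GCP's actual pipeline.

    import numpy as np
    from itertools import combinations

    def mean_pairwise_correlation(scores):
        # scores: array of shape (n_eggs, n_samples), one row per egg.
        # Average the Pearson correlation over all pairs of eggs.
        corr = np.corrcoef(scores)
        pairs = combinations(range(scores.shape[0]), 2)
        return float(np.mean([corr[i, j] for i, j in pairs]))

    rng = np.random.default_rng(1)

    # Independent "local" disturbances leave pair correlations near zero.
    independent = rng.normal(size=(10, 3600))
    print(mean_pairwise_correlation(independent))           # ~0.0

    # A weak component shared by every egg raises them together.
    shared = 0.1 * rng.normal(size=3600)
    print(mean_pairwise_correlation(independent + shared))  # small positive

The second call makes the point made above: only a
synchronized, network-wide component moves the pairwise
correlations; strong but separate local disturbances do not.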
Returning to the S/N ratio, we can see that the "ifs" are
numerous. More to the point, we know from more than a decade
of research that the average effect size is about a third of
a standard deviation. This means that we have to put many
individual events (hypothesis tests) together in a kind of
meta-analysis to achieve reliable statistics. We need at
least a dozen strong cases, or several dozen on average, to
draw sensible conclusions about whether there is any "there
there," as Gertrude Stein so colorfully put it.
Here is a caveat which appears after each analysis (to cool
down the excitement over an apparently off-the-scale result,
or to assuage the disappointment over a visualized
flatline).
It is important to keep in mind that we have only a tiny
statistical effect, so that it is always hard to distinguish
signal from noise. This means that every "success" might be
largely driven by chance, and every "null" might include a
real signal overwhelmed by noise. In the long run, a real
effect can be identified only by patiently accumulating
replications of similar analyses.
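A small simulation makes the caveat vivid. Assuming a true
per-event effect of 0.3 standard deviations, individual
events routinely look like chance misses or dramatic hits,
while the accumulating composite climbs steadily. This is an
illustrative sketch, not GCP data.

    import numpy as np

    rng = np.random.default_rng(7)
    true_effect = 0.3    # assumed real per-event effect (SD units)

    # One Z-score per "event": a real signal buried in unit noise.
    z_scores = true_effect + rng.normal(size=200)

    # Only a minority of events are significant on their own.
    hits = int(np.sum(z_scores > 1.645))
    print(f"individually significant: {hits} of 200")

    # The running Stouffer composite reveals the effect anyway.
    composite = np.cumsum(z_scores) / np.sqrt(np.arange(1, 201))
    for n in (10, 50, 200):
        print(f"after {n:3d} events: composite Z = {composite[n - 1]:.2f}")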