Some of the analyses for the Global Consciousness Project are quite
complex, and a number of exploratory studies have been done. We present here
further explanatory text to
supplement the brief descriptions in the Y2K results pages and to
document some of the exploratory analyses.
Dean Radin's first analytical examination of the Y2K data is summarized in two
figures,
one that shows the median absolute raw deviations
for blocks of data centered on midnight in each time zone, and one
based on these values converted to Z-scores and ultimately to an
odds ratio, which is
plotted against time. These analyses have been replaced, but remain
of interest as one of the early steps toward an effective approach.
The detailed description
of the steps in Dean's analysis is both informative and interesting, and
provides some insight into the search for an incisive strategy.
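As a rough sketch of the pipeline just described (the array layout, block length, and nominal trial mean of 100 are assumptions here; Dean's exact parameters are not reproduced on this page), the following computes a per-second median absolute raw deviation around a local midnight and converts a Z-score into odds against chance:

```python
import numpy as np
from scipy import stats

def midnight_epoch_curve(trials, midnight_idx, half_window=300, expected_mean=100):
    """Median absolute raw deviation of egg trials around local midnight.

    trials: 2-D array (seconds x eggs) of per-second trial values (nominal mean 100).
    midnight_idx: row index of the second corresponding to local midnight.
    Returns one value per second in the block.
    """
    block = trials[midnight_idx - half_window : midnight_idx + half_window + 1]
    # Raw deviation of each trial from its theoretical mean, then the median
    # of the absolute deviations across eggs for each second.
    return np.median(np.abs(block - expected_mean), axis=1)

def z_to_odds(z, two_tailed=True):
    """Convert a Z-score into odds against chance (1 / p)."""
    p = 2 * stats.norm.sf(abs(z)) if two_tailed else stats.norm.sf(z)
    return 1.0 / p
```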
Low vs High Population Time Zones
One of the analyses suggested by the previous new year's data was a separation
of data according to the population of the time zones.
The following is Radin's detailed description of an extension of the previous
analysis (Jan 9, 2000), looking at the effect of
population density,
and, by implication, the amount of attention and celebration that would occur
in different time zones.
Here's my latest analysis, exactly as I did before, only now split by high population
(HP) vs. low population (LP) time zones. The hypothesis is that all eggs would respond
to the stroke-of-midnight moment of coherence, but there would be different "amounts" of
coherence created as each timezone passed midnight, given that the world's population is
not distributed uniformly.
I've defined LP zones as -12, -11, -10, -9, -2, -1, +4, +6, +7, +11 based on
examination of the world timezone map (www.worldtimezone.com) as compared to the world
population in different countries, which I estimated through examination of the
US Census web site (that site has an extensive international population database).
Figure 1 shows the average median absolute deviation curves for the HP and LP,
and Figure 2 shows the one-tailed odds against chance for the z score of the
difference between the two curves. I've used a one-tailed test because I assume that
the HP curve would drop below the LP curve at the stroke of midnight, reflecting a greater
negentropic change for the HP time zones. The graph shows that the largest drop,
and highest odds against chance, occurs 9 seconds before midnight.
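An illustrative sketch of this comparison follows; the curve format, the averaging across zones, and the standardization of the difference are assumptions, since the actual per-egg weighting and error estimates are not given here:

```python
import numpy as np
from scipy import stats

# LP offsets as listed above; any other offset is treated as HP.
LP_OFFSETS = {-12, -11, -10, -9, -2, -1, 4, 6, 7, 11}

def hp_lp_difference_odds(curves_by_offset):
    """One-tailed odds against chance for the HP-minus-LP difference curve.

    curves_by_offset: dict mapping a zone's UTC offset to its per-second
    median-absolute-deviation curve (a hypothetical structure).
    """
    lp = np.mean([c for off, c in curves_by_offset.items() if off in LP_OFFSETS], axis=0)
    hp = np.mean([c for off, c in curves_by_offset.items() if off not in LP_OFFSETS], axis=0)
    diff = hp - lp
    # Standardize the per-second difference by its own variability; the
    # prediction is that HP drops below LP at midnight, so test the low tail.
    z = (diff - diff.mean()) / diff.std(ddof=1)
    return 1.0 / stats.norm.cdf(z)   # elementwise one-tailed odds against chance
```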
Ed May has brought to my attention that the two eggs in India are in time zones
that run on the half-hour with respect to GMT. I have not adjusted this
analysis for those eggs.
Corrected Analysis
Since the preliminary analysis on 2 January 2000, we have
identified a conceptual error that made the analysis centric to the GMT
(UTC) time zone. Although the result showed a
striking spike at midnight, it was not properly representative of Dean's
original prediction. A corrected analysis addressing the
intended question was completed on 23 January.
This analysis has been thoroughly cross-checked, with the
cooperative oversight of an independent
observer, Ed May, and includes comparisons with the results of the exact
same analysis applied to data from 1, 2, and 15
days after the Y2K rollover.
Dean describes the new analysis, and discusses the
impact of the exploratory mode, including the problem of multiple
analyses, in the email accompanying the figures.
Subject: re-tested Y2K analysis
Date: Sun, 23 Jan 2000 01:43:39 -0800
Attached are two pictures of my latest Y2K analysis. Performed from
scratch, with freshly downloaded data from Y2K. You'll see results for
Y2K along with identical analyses for Y2K+1 day, +2 days and +15 days.
The obvious graphical results are confirmed by permutation analysis, in
which I randomized the per-second time sequence 1,000 times. The new
analysis, using all eggs across all time zones, is very significant within
a few seconds of midnight. Odds are above 80,000 to 1, 2-tail, as you see
in the odds graph.
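A minimal sketch of such a permutation check, assuming a one-dimensional per-second curve and a small window around midnight (the window size and the use of the minimum are illustrative choices, not the exact procedure used here):

```python
import numpy as np

def permutation_odds(curve, midnight_idx, radius=5, n_perm=1000, rng=None):
    """How often does a shuffled per-second sequence place a value as extreme
    as the observed one within `radius` seconds of midnight?

    curve: 1-D numpy array of the per-second statistic.
    """
    rng = np.random.default_rng() if rng is None else rng
    window = slice(midnight_idx - radius, midnight_idx + radius + 1)
    observed = curve[window].min()           # the "spike" here is a drop
    hits = 0
    for _ in range(n_perm):
        if rng.permutation(curve)[window].min() <= observed:
            hits += 1
    p = (hits + 1) / (n_perm + 1)            # add-one keeps p away from zero
    return 1.0 / p                           # odds against chance
```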
Before I tell you how I calculated these graphs (they are similar to
before, using a sliding window), the discussion Ed and I have been having
about this has sparked an interesting issue about how to interpret
multiple analyses in exploratory mode. If I try, say, 10 different
analyses looking for ways to optimize a spike at Y2K, and the final
analysis shows odds of 80K to 1 at the moment of interest (i.e., a spike
that peaks a few seconds from midnight), then a Bonferroni correction will
still result in a healthy significance. E.g., 80,000 / 10 = odds of
8,000 to 1. But if I ran 10,000 analyses to find this spike, then after a
Bonferroni correction that result would be null.
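In other words, since odds against chance are roughly the reciprocal of the p-value, a Bonferroni correction for N analyses amounts to dividing the odds by N, as in this small sketch:

```python
def bonferroni_odds(odds, n_tests):
    """Bonferroni-correct an odds-against-chance figure for n_tests analyses.

    Strictly the correction multiplies the p-value by n_tests; since
    odds ~ 1/p for small p, that is essentially dividing the odds by n_tests.
    """
    p = 1.0 / odds
    return 1.0 / min(1.0, p * n_tests)

# 80,000:1 with 10 analyses -> ~8,000:1; with 10,000 analyses -> 8:1, effectively null.
print(bonferroni_odds(80_000, 10), bonferroni_odds(80_000, 10_000))
```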
Ok then. In this case I ran 10 different variations of my previous
analysis to find this spike. I basically followed some hunches, and I
quickly found the results you see in the graph. The observed spike
magnitude, combined with a spike time as close to midnight as observed,
is over 4 sigma from chance, according to both permutation analysis and a
comparison against the point means and standard errors from the epoch
curve. You don't get anything like this using identical analyses applied to
data from days +1, +2 and +15 from Y2K.
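The comparison against the epoch curve amounts to a simple standardization; a hypothetical helper (the epoch means and standard errors themselves are not shown here):

```python
def sigma_from_epoch(observed, point_mean, point_sem):
    """Express an observed per-second value as sigmas from the epoch curve,
    using the per-second mean and standard error at the same second."""
    return (observed - point_mean) / point_sem
```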
So, is the new analysis meaningful, or not? I think it is, because
regardless of the complexity of the method used to achieve these graphs,
it just shouldn't be so easy to get such a good result by chance. But
maybe Ed is right, and I'm just a good analyzer? If that's true, then
virtually all psi experiments may simply reflect how good the analyst is.
Come to think of it, if psi-type information flow is in fact like I suspect it
is, then this sort of anomalous analysis is analogous to clairvoyance, in
which rather than randomly poking about an infinite analysis space, I can
somehow jump into that abstract realm and select the right path to take.
Oh, this is becoming too complex for my brain this late at night, so I'm
going to sleep now!