Friday, October 16, 2009

The Nature of Probability

"To throw the dice is to face that which is given by the gods, by powers higher than human. It is to face reality at its most mysterious, like standing unflinching before the thunderstorm."

"Yet the human spirit is restless and nature forever compliant, willing to answer as yet undreamed questions, capable of opening up vast new vistas, revealing still undisclosed parts of her being."

(This one's for the newbie, because I think you are in danger of 'getting it'. ;)

"Probability is a way of expressing knowledge or belief that an event will occur or has occurred...

The word probability does not have a consistent direct definition. In fact, there are two broad categories of probability interpretations, whose adherents possess different (and sometimes conflicting) views about the fundamental nature of probability:

Frequentists talk about probabilities only when dealing with experiments that are random and well-defined. The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. Frequentists consider probability to be the relative frequency "in the long run" of outcomes.[1]

Bayesians, however, assign probabilities to any statement whatsoever, even when no random process is involved. Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence." (q)

I would submit that there is a third category of probability interpretation. Like Bayesian probability, it deals with the subjective aspect of knowledge, and it updates as the relative 'force' of a particular piece of knowledge or observation changes in the cognitive space.

But this third category of probability interpretation is entirely concerned with predicting state selection. (By the time I'm done with this post I'll have a catchy name for it.) Not traditional state selection though, which is still presumed to reflect a collapse of the wave function in an objective reality. This third category of probability (subjective state selection probability?) is not concerned with using subjective knowledge to measure the potential for a particular decision or action, but rather with using subjective elements of conscious experience to predict future elements of that same cognitive space. To emphasize how such a definition of probability would be superior to other definitions in predicting outcomes, it becomes necessary to compare them in a common context - hence my emphasis on situations involving 'randomness'.

When faced with a coin flip, classical probability will attempt to predict the outcome of the flip (heads or tails), using only information about any potential bias the coin may have. To be honest, I'm not entirely sure what Bayesians would be attempting to do in this situation. Quantify how they should update their beliefs regarding the bias of the coin?
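To make the Bayesian answer to that question concrete: the standard move is to represent belief about the coin's bias as a Beta distribution and update it with each observed flip. This is a minimal illustrative sketch of that conjugate-prior updating, not anything specific to the interpretations discussed above:

```python
# Minimal Bayesian updating of belief about a coin's bias.
# Belief about the bias P(heads) is a Beta(alpha, beta) distribution;
# each observed flip just increments one of the two counts.

def update(alpha, beta, flip):
    """flip: 1 for heads, 0 for tails. Returns the updated (alpha, beta)."""
    return (alpha + flip, beta + (1 - flip))

def mean_bias(alpha, beta):
    """Posterior mean estimate of P(heads)."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1), then observe heads, heads, tails.
a, b = 1, 1
for flip in (1, 1, 0):
    a, b = update(a, b, flip)

print(a, b)             # (3, 2) after two heads and one tail
print(mean_bias(a, b))  # 0.6
```

The point is only that the Bayesian's "probability" here is a degree of belief about the bias, revised flip by flip, rather than a prediction of any single outcome.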

Those of us using cognitive differential probability will attempt to quantify aspects of our cognition with respect to the as-yet-unobserved outcome in an attempt to predict and/or modify the biases governing the selection of the outcome state. Here's where things get different.

Cognitive differential probability (are we liking that name?) recognizes that different amounts of knowledge about the coin, as well as different attachments to the outcome of the coin flip and other elements of anticipation regarding the outcome of the coin flip, constitute biases in the process of selecting the final state of the flipped coin. Early on in the game I talked about the differences in these knowledge structures. The biases we're talking about change with each additional observation, but they also change in response to various elements of pure cognition. (I know - you either believe it's possible, or you don't.)

I think Bayesians recognize the constantly-updating weight of subjective beliefs, but I don't think they are using it to predict outcomes in this way. Nor do I think that they model 'beliefs' as comprehensively as I propose to model cognitive space. So where does this leave us?

Once I can figure out how to model degree of overlap in the various elements of cognitive space/representation, modified bi-directionally across time, with suitable boundary conditions to prevent infinite regression yet still accurately predict outcome selection... Once I can do that for a single observer, I'll be back to haunt you. ;)

Sunday, October 11, 2009

I Have A Dream

"But for about 20 years now, a controversial area of scientific research has sought to determine whether a supernatural power, invoked through prayer and working alongside doctors, can cure illness... The research involves intercessory prayer, or people interceding on someone else's behalf..." - Can Prayer Help Heal?, Wisconsin State Journal, October 11, 2009.

I have a dream... that one day we will accept effects like this without resorting to God as a default cause. We will accept that 'prayers' are simply thoughts that anticipate future outcomes in a variety of ways, and we will understand outcome selection as a collective effort. "It's not about us controlling God"... but it is about us controlling the process of state selection.

I have a dream... that one day no one will dream of thinking that 'prayer' can replace modern medicine. I don't think that God is behind these effects, but I also don't think that you have enough control to trust the outcome of someone else's life to your thoughts.

I have a dream... that one day we will study these effects without thinking that the only purpose of such results is to prove or disprove the existence of God. I have a dream... that one day the skeptics will realize that it is neither necessary nor correct to accept randomness as the fundamental state of the universe. A dislike for religious explanations as the alternative to randomness should not push us into prematurely accepting that perceived randomness is not connected to thought.

I have a dream... that one day you will see what is right in front of you. You will understand how patients who have been told that they are being prayed for will develop different expectations about their outcomes. We will understand how to give patients information in a way that maximizes their ability to push themselves towards the outcome that they desire. And we will understand their right to choose whether or not we intercede in how we weave our thoughts around their problem.

I have a dream... that one day statements like this - "I think God and God alone chooses whether you have a miracle." - will be obsolete, not only because we will understand the effects of 'prayer' as something other than the intervention of a divine being, but also because once such things are understood, they will no longer be considered 'miracles'. One day God will no longer be the default explanation for all 'coincidental' or inconvenient occurrences.

I have a dream... that you will one day have the same epiphany that I had while counting bacteria in a microbiology lab. If my thoughts can determine whether I will find these bacteria alive or dead, then this entire endeavor is pointless and there is a "bigger problem" that I should be working on.

I have a dream... that one day we will find a meaning behind such a system, because I don't like the idea that life/conscious experience can be this capricious without a reason.

I have a dream... that someday one of you will want to talk with me about this idea, and together we will be able to expand this research.

Saturday, October 3, 2009

Measuring the Immeasurable

"Essential science and common sense keep coming back to data, fact and observation."

The fact that I can't lay my hands on this paper immediately now offends my hacker sensibilities. Grrr. (That paper would be Radin, D, & Atwater, F.H., Exploratory Evidence for Correlations Between Entrained Mental Coherence and Random Physical Systems, Journal of Scientific Exploration, Vol 23 (3), 2009, for the google search engines. ;)

And yet because I can't lay my hands on it, I can speculate about what the author actually did. (This is a fun game that all grad students should learn to play. Read only the abstract and flesh out what you think the paper should contain. Then go back and compare your notes to the actual paper. Worship anyone who exceeds your expectations.)

I'm temporarily suppressing my 'goody, goody gumdrops!' reaction to the topic in favor of my more prosaic 'please tell me you're going to discuss competing explanations of the phenomenon' reaction. If I were really, really in a good universe, the data collection and distribution processes would have been designed with various competing ideas about Observer Theories and multiple-observer dynamics in mind. (Yours truly is really looking forward to the day when we can stop 'proving' that there is an effect, and focus on using experimental data to tear apart competing theories.)

So here are the questions and ideas that came to mind when I read this abstract/blogpost.

Who were the subjects?

In an ideal study, they would be trained meditators who had been evaluated and matched on self-assessed descriptions of their meditative experiences, and various psychological parameters such as absorption. This helps ensure that any effects obtained are not the result of a single 'strongest' observer. Ideally such subjects could also be pretested to determine their individual ability to affect an RNG in the desired state of consciousness.

Depending on the nature of the research questions being asked, the groups of subjects would also vary in size, to attempt to correlate magnitude of effect with number of 'coherent' observers. The magnitude of effect achieved by trained meditators might also be compared to that achieved by untrained subjects with similar psychological parameters. (Previous studies suggest a straightforward prediction of 'more coherent observers = greater deviation from pure randomness'.)

What was done to eliminate/minimize an experimenter effect?

Let me try to be as clear as possible while not writing a separate treatise on the topic. At some point it's going to become necessary to establish that any observed deviation from randomness is not due to a time-displaced effect generated by the one observer who views all the critical data in a form most-suited for discriminating between outcome options - the experimenter. This is a minor point if you are trying to prove that there is an effect, but a relatively major one if you are trying to establish the source of the effect. (Click here for a discussion on taking the next step with observer theories. Awesome reference list for those who scroll all the way to the bottom of that page.)

Similar arguments could be made for various other ways of appropriately isolating the results/observations, but this can quickly get out of hand. It'll probably only be useful when the task is no longer to 'prove' the effect, but to model the data according to a specific theory/set of theories.

What does this mean - "An exploratory hypothesis predicted that fluctuations in entrained mental coherence associated with the workshop activities would modulate the random data recorded during the workshops." ? (My emphasis.)

I'm not that familiar with binaural-beat rhythms or the specific effects that they have on consciousness/brain waves/neurochemicals/etc. (Note to self: Read that paper.) I suspect that the author is using 'fluctuations' to refer to the difference between the 'coherent' and the (presumably) noncoherent times during the trial period, rather than to differences within the period of exposure to the binaural-beat rhythms itself. Although now I'm beginning to wonder how precisely the latter could be modelled...

What justifies this time frame? "Coherence was entrained by having groups listen to a prescribed series of binaural-beat rhythms during a 6-day workshop." (My emphasis, again.)

Is it necessary to use 6 days of data because the subjects were unfamiliar with the technique prior to the workshop? Was the same amount of data collected on all six days? If so, are there any differences in the data from 'coherent' periods in days 1 and 2, and the same periods in days 5 and 6?

"Random data were continually collected from these RNGs during 14 workshops." That's 84 days of data for each of 3 RNGs, and another 56 days of data for the control period. (Respectful pause for the enormity of the data set.) Now I wonder how robust any individual set of data would have been...

I'm beginning to suspect that this experiment took advantage of an existing program in order to collect data. Therefore, all the parameters that one might like to control or manipulate were not necessarily available for such control. (I could be wrong. You should, of course, read the actual paper.)

What, exactly, is being correlated, and why? "This was predicted to result in positive correlations between random data streams collected from one workshop to the next."

It sounds like data streams from the 'coherent' periods were correlated to each other, across workshops...

Now, I'm no expert on analyzing enormous sets of random numbers. I couldn't give you the algorithm for determining that one set of numbers is less 'random' than another, although it might be fun to see if I could derive one. But it seems like there is only one thing to compare...
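For what it's worth, the simplest such algorithm is well known: under the null hypothesis of fair, independent bits, the count of 1s in n bits has mean n/2 and standard deviation sqrt(n)/2, so a z-score measures how far a stream strays from chance. This is a sketch of that one standard test, not the analysis actually used in the paper:

```python
# Score a 0/1 stream's deviation from chance expectation.
# Under fair, independent bits: mean count of 1s = n/2, sd = sqrt(n)/2.
import math

def deviation_z(bits):
    """Z-score of the count of 1s against the fair-coin expectation."""
    n = len(bits)
    ones = sum(bits)
    return (ones - n / 2) / (math.sqrt(n) / 2)

# A perfectly balanced stream scores 0; a wildly biased one scores high.
print(deviation_z([0, 1] * 50))  # 0.0
print(deviation_z([1] * 100))    # 10.0
```

Real randomness suites stack many such tests (runs, blocks, spectral), but the basic logic of "degree of randomness" comparisons is this kind of deviation score.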

So, is the degree of randomness being compared? And if so, why isn't the degree of randomness in the set of numbers from the 'coherent' period being compared to the degree of randomness in the set of numbers from the control period? I understand comparing data within device, within group, and within workshop. And I understand comparing 'coherent' periods to control periods. What I don't understand is what could be compared across workshop but within 'coherent' data? Especially if subjects were not matched on any critical parameters.

From the next sentence - "Results showed that during the workshops the overall correlation was positive, as predicted (p = .008); during control periods the same RNGs produced chance results (p = .74). " - it sounds like the correlation was between the time period (coherent/noncoherent) and the degree of randomness. Presumably, a lower degree of randomness was seen in the data during the 'coherent' times than was seen during the noncoherent times. (Now I also want to know the raw magnitude of the difference in degree of randomness.)

So what, if anything, was compared across workshop but within 'coherent' periods? Direction of deviation from random center point? Magnitude of deviation from randomness? Temporal location of deviations from randomness relative to 'coherence' activity?

I are confused. Even the espresso-laced truffles aren't enough to break the mental deadlock on this. If this becomes clearer to me at any point, I'll post an update.

[Aside: If you are a certain author of this paper, that was probably more than you ever wanted to hear from any single reader. Consider it payback for taunting me with the paper but not giving me the actual paper. ;) ]

Update 10/29/09: I finally got my hack on and scored a copy of this article. (It's been a busy few weeks.) I didn't realize the extent to which this study would emphasize the "felt sense when individual thoughts and actions seem to merge into a single group thought or action." I can't speculate as to why the individuals' "subjective shift" towards the "same wavelength" might be an important element in the success of a collective effort to affect 'randomness'. Nor am I entirely sure why this cohesiveness is necessary, or qualitatively different than all participants simply focusing on the same outcome. According to the Global Consciousness Project, the simultaneous focused attention of large groups can produce similar effects. (And the thought of a "group mind" scares me a little bit.)

As anticipated, the subjects were participants in a workshop designed for other purposes.

As for my botched attempt to understand the data analysis... I had an entirely different picture of what the data stream from the RNGs would look like. In actuality, the data stream consisted of 0s and 1s. The authors predicted that these data streams "should have been modulated by mind in approximately the same way." This could mean one of two things - either the actual values produced by each RNG were the same (all 0s), or the change from one data point to the next was equally present (or not) in each data stream. (Ex: With starting values of 0,0,1, the next set of bits are 1,1,0. Each bit changed from its previous value.) At this point my understanding of the data analysis breaks down. Again. I understand the math, but not the rationale for choosing this method. I refer you to the original paper. Again.
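The two readings can be made concrete with a toy example. This sketch uses hypothetical data, just to show the difference between "same values" and "same changes" across parallel 0/1 streams; it is not the paper's actual analysis:

```python
# Two readings of "modulated in approximately the same way" for
# parallel 0/1 streams from several RNGs.

def same_values(streams, t):
    """Reading (a): all streams emit the identical bit at tick t."""
    return len({s[t] for s in streams}) == 1

def changes_in_step(streams, t):
    """Reading (b): every stream flips, or every stream holds, at tick t."""
    flips = {s[t] ^ s[t - 1] for s in streams}
    return len(flips) == 1

# The example above: starting values 0,0,1 followed by 1,1,0 -
# every bit flipped, so the change is shared even though the values differ.
streams = [[0, 1], [0, 1], [1, 0]]
print(same_values(streams, 1))      # False: the values are 1, 1, 0
print(changes_in_step(streams, 1))  # True: each bit flipped
```

Reading (b) is the weaker condition: identical values imply in-step changes, but not the reverse, which is presumably why the distinction matters for the correlation analysis.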

There is one other issue that puzzles me - the belief of the authors that the physical proximity of the RNG machines to the subjects was somehow important to the results. "Data generated by the same RNGs run at distant locations... did not" show the same correlations that were evident in the close-proximity RNGs. In fact, from my perspective, there is no reason to expect such a difference, unless the information, feedback, or expectations were different between the close and the distant RNGs.

Though no attempt was made to discriminate between theories of the effect, the author did acknowledge the potential for experimenter effects. Given the authors' access to the data and expectations about the data, especially relative to those of the subjects (who were largely ignorant of the RNGs and the data that was being collected), I would argue that perhaps a good portion, if not all, of the effect was an experimenter effect. The fact that RNGs were placed at a distance from the test site, as well as in close proximity to the subjects, indicates that one or more of the authors may have had expectations of seeing a difference in the data based on physical distance. At this point, the test is to see if the experimenter can replicate the experiment but produce data that indicates positive correlations in the distant RNGs as well as the close RNGs, by will alone.

Tricky business, but I have a sneaking suspicion that a better understanding of these effects will only come if we are much more precise in identifying who has the most information about the experiment, and his/her motivations and expectations for the experiment/outcome. (Yeah, I'm still looking for that poster I promised you earlier.) For all we know at this point, my observation of this paper had a significant effect on the outcome. We would test that idea by providing the data/results to all interested parties simultaneously, and then destroying them to prevent future observations that might influence the outcome. This is the antithesis of academic publishing, but it's critical for testing/eliminating certain elements of 'retroactive-PK'.

If you haven't quit reading this post yet, then extra points for you, but really - what's wrong with you?! ;)