"The imagination must be allowed to express itself freely, otherwise it will die."
Nice! And challenging to answer.
I can try to do this with an analogy... I see a moment of conscious experience in much the same way we experience a moment in a symphony. There are many instruments that are playing (or not) at any given moment, and with varying degrees of loudness. And most of us don't stop to parse the whole moment into its component parts.
The violins are almost always playing, and it's fairly easy to detect their influence. To me, they are like the 'past', or the influence of memory upon an individual moment of conscious experience. The violas are much fewer in number relative to the violins, and to me they represent the influence of short-term and working memory - information that is not as firmly anchored as that which resides in long-term memory. Their contribution is generally much harder to detect or parse from the whole. The cellos represent the near-future - that is, the future for which critical events have already been set in motion. Occasionally someone writes a short cello solo, or silences the other instruments to the point where the cello is noticeable. The basses represent the far future. They are generally fewer in number, and their presence in the whole moment is usually more difficult to detect without training.
While each of the four string instruments has a different range of resonance, they all share the property of being a stringed instrument. (I.e., It's all time in some sense - change from one moment to the next.) One could also argue that an individual orchestra (brain/consciousness) can be organized (wired) with a different, though mostly overlapping, set of instruments from any other orchestra. But the strings are still the foundation of any orchestra.
While listening to a symphony, it's possible to momentarily lose track of the whole sound as you choose to isolate and pay attention to the sound of one particular instrument. But all the other sounds are still there. You recognize the sound of a particular instrument because of your previous exposure to that instrument. (I played the cello for four years.) It's easier to isolate and hear/see the influence of that instrument with practice.
In this sense, time to me is like a specific series of notes within a whole moment. Some of the notes may come from the cello, but most of the notes I'm conscious of probably come from the violin. This is quite a subjective, self-absorbed way of looking at time, because I'm parsing the composition of my orchestra, rather than assuming that we are all listening to the same orchestra. So how do I visualize the objective timeline of consensus reality - that thing that moves from the Big Bang through the Middle Ages and into the future?
When I'm thinking in 5 dimensions, I usually don't. I see those events/ideas primarily (though not entirely) as moments in my symphony that reflect influences from my subjective past/future. If an idea or event is something that I have been exposed to many times, then the moment reflects a strong influence from the past (violins). But it may reflect my subjective past in a variety of ways. (Think range of notes, and don't forget that there are other instruments playing as well.)
Sounds like an incredibly self-absorbed perspective, doesn't it? I haven't expanded this analogy to the point where I can explain how I see multiple-observer effects yet. But I'm sure some part of my brain will keep plugging away at it. Thanks - this was fun! :)
Friday, October 16, 2009
The Nature of Probability
"To throw the dice is to face that which is given by the gods, by powers higher than human. It is to face reality at its most mysterious, like standing unflinching before the thunderstorm."
"Yet the human spirit is restless and nature forever compliant, willing to answer as yet undreamed questions, capable of opening up vast new vistas, revealing still undisclosed parts of her being."
(This one's for the newbie, because I think you are in danger of 'getting it'. ;)
"Probability is a way of expressing knowledge or belief that an event will occur or has occurred...
The word probability does not have a consistent direct definition. In fact, there are two broad categories of probability interpretations, whose adherents possess different (and sometimes conflicting) views about the fundamental nature of probability:
Frequentists talk about probabilities only when dealing with experiments that are random and well-defined. The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. Frequentists consider probability to be the relative frequency "in the long run" of outcomes.[1]
Bayesians, however, assign probabilities to any statement whatsoever, even when no random process is involved. Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence." (q)
I would submit that there is a third category of probability interpretation. Like Bayesian probability, it deals with the subjective aspect of knowledge, and it updates as the relative 'force' of a particular piece of knowledge or observation changes in the cognitive space.
But this third category of probability interpretation is entirely concerned with predicting state selection. (By the time I'm done with this post I'll have a catchy name for it.) Not traditional state selection though, which is still presumed to reflect a collapse of the wave function in an objective reality. This third category of probability (subjective state selection probability?) is not concerned with using subjective knowledge to measure the potential for a particular decision or action, but rather with using subjective elements of conscious experience to predict future elements of that same cognitive space. To emphasize how such a definition of probability would be superior to other definitions in predicting outcomes, it becomes necessary to compare them in a common context - hence my emphasis on situations involving 'randomness'.
When faced with a coin flip, classical probability will attempt to predict the outcome of the flip (heads or tails), using only information about any potential bias the coin may have. To be honest, I'm not entirely sure what Bayesians would be attempting to do in this situation. Quantify how they should update their beliefs regarding the bias of the coin?
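For concreteness, here is what that Bayesian bookkeeping looks like in its simplest textbook form - a Beta-Bernoulli update on the coin's bias. This is only a sketch of the standard machinery (the prior and the run of flips are invented for illustration), not anything belonging to my third category:

```python
# A minimal sketch of the standard Bayesian answer to that question: treat the
# coin's bias toward heads as an unknown, put a Beta prior on it, and update
# the prior after each observed flip. The prior and the flips are invented.

def update_belief(alpha, beta, flip):
    """Return updated Beta(alpha, beta) parameters after one flip ('H' or 'T')."""
    return (alpha + 1, beta) if flip == 'H' else (alpha, beta + 1)

alpha, beta = 1.0, 1.0              # uniform prior: no opinion about the bias yet
for flip in "HHTHHHTH":             # a hypothetical run of observed flips
    alpha, beta = update_belief(alpha, beta, flip)

mean_bias = alpha / (alpha + beta)  # posterior mean estimate of p(heads)
print(f"posterior Beta({alpha:.0f}, {beta:.0f}), mean bias = {mean_bias:.2f}")
```

Notice what it does and doesn't do: it refines an estimate of the coin's bias from past flips, but it makes no claim to predict or nudge the next individual outcome - which is exactly the gap the rest of this post is poking at.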
Those of us using cognitive differential probability will attempt to quantify aspects of our cognition with respect to the as-yet-unobserved outcome in an attempt to predict and/or modify the biases governing the selection of the outcome state. Here's where things get different.
Cognitive differential probability (are we liking that name?) recognizes that different amounts of knowledge about the coin, as well as different attachments to the outcome of the coin flip and other elements of anticipation regarding the outcome of the coin flip, constitute biases in the process of selecting the final state of the flipped coin. Early on in the game I talked about the differences in these knowledge structures. The biases we're talking about change with each additional observation, but they also change in response to various elements of pure cognition. (I know - you either believe it's possible, or you don't.)
I think Bayesians recognize the constantly-updating weight of subjective beliefs, but I don't think they are using it to predict outcomes in this way. Nor do I think that they model 'beliefs' as comprehensively as I propose to model cognitive space. So where does this leave us?
Once I can figure out how to model degree of overlap in the various elements of cognitive space/representation, modified bi-directionally across time, with suitable boundary conditions to prevent infinite regression yet still accurately predict outcome selection... Once I can do that for a single observer, I'll be back to haunt you. ;)
"Yet the human spirit is restless and nature forever compliant, willing to answer as yet undreamed questions, capable of opening up vast new vistas, revealing still undisclosed parts of her being."
(This one's for the newbie, because I think you are in danger of 'getting it'. ;)
"Probability is a way of expressing knowledge or belief that an event will occur or has occurred...
The word probability does not have a consistent direct definition. In fact, there are two broad categories of probability interpretations, whose adherents possess different (and sometimes conflicting) views about the fundamental nature of probability:
Frequentists talk about probabilities only when dealing with experiments that are random and well-defined. The probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. Frequentists consider probability to be the relative frequency "in the long run" of outcomes.[1]
Bayesians, however, assign probabilities to any statement whatsoever, even when no random process is involved. Probability, for a Bayesian, is a way to represent an individual's degree of belief in a statement, given the evidence." (q)
I would submit that there is a third category of probability interpretation. Like Bayesian probability, it deals with the subjective aspect of knowledge, and it updates as the relative 'force' of a particular piece of knowledge or observation changes in the cognitive space.
But this third category of probability interpretation is entirely concerned with predicting state selection. (By the time I'm done with this post I'll have a catchy name for it.) Not traditional state selection though, which is still presumed to reflect a collapse of the wave function in an objective reality. This third category of probability (subjective state selection probability?) is not concerned with using subjective knowledge to measure the potential for a particular decision or action, but rather with using subjective elements of conscious experience to predict future elements of that same cognitive space. To emphasize how such a definition of probability would be superior to other definitions in predicting outcomes, it becomes necessary to compare them in a common context - hence my emphasis on situations involving 'randomness'.
When faced with a coin flip, classical probability will attempt to predict the outcome of the flip (heads or tails), using only information about any potential bias the coin may have. To be honest, I'm not entirely sure what Bayesians would be attempting to do in this situation. Quantify how they should update their beliefs regarding the bias of the coin?
Those of us using cognitive differential probability will attempt to quantify aspects of our cognition with respect to the as-yet-unobserved outcome in an attempt to predict and/or modify the biases governing the selection of the outcome state. Here's where things get different.
Cognitive differential probability (are we liking that name?) recognizes that different amounts of knowledge about the coin, as well as different attachments to the outcome of the coin flip and other elements of anticipation regarding the outcome of the coin flip, constitute biases in the process of selecting the final state of the flipped coin. Early on in the game I talked about the differences in these knowledge structures. The biases we're talking about change with each additional observation, but they also change in response to various elements of pure cognition. (I know - you either believe it's possible, or you don't.)
I think Bayesian's recognize the constantly-updating weight of subjective beliefs, but I don't think they are using it to predict outcomes in this way. Nor do I think that they model 'beliefs' as comprehensively as I propose to model cognitive space. So where does this leave us?
Once I can figure out how to model degree of overlap in the various elements of cognitive space/representation, modified bi-directionally across time, with suitable boundary conditions to prevent infinite regression yet still accurately predict outcome selection... Once I can do that for a single observer, I'll be back to haunt you. ;)
Sunday, October 11, 2009
I Have A Dream
"But for about 20 years now, a controversial area of scientific research has sought to determine whether a supernatural power, invoked through prayer and working alongside doctors, can cure illness... The research involves intercessory prayer, or people interceding on someone else's behalf..." - Can Prayer Help Heal?, Wisconsin State Journal, October 11, 2009.
I have a dream... that one day we will accept effects like this without resorting to God as a default cause. We will accept that 'prayers' are simply thoughts that anticipate future outcomes in a variety of ways, and we will understand outcome selection as a collective effort. "It's not about us controlling God"... but it is about us controlling the process of state selection.
I have a dream... that one day no one will dream of thinking that 'prayer' can replace modern medicine. I don't think that God is behind these effects, but I also don't think that you have enough control to trust the outcome of someone else's life to your thoughts.
I have a dream... that one day we will study these effects without thinking that the only purpose of such results is to prove or disprove the existence of God. I have a dream... that one day the skeptics will realize that it is neither necessary nor correct to accept randomness as the fundamental state of the universe. A dislike for religious explanations as the alternative to randomness should not push us into prematurely accepting that perceived randomness is not connected to thought.
I have a dream... that one day you will see what is right in front of you. You will understand how patients who have been told that they are being prayed for will develop different expectations about their outcomes. We will understand how to give patients information in a way that maximizes their ability to push themselves towards the outcome that they desire. And we will understand their right to choose whether or not we intercede in how we weave our thoughts around their problem.
I have a dream... that one day statements like this - "I think God and God alone chooses whether you have a miracle." - will be obsolete, not only because we will understand the effects of 'prayer' as something other than the intervention of a divine being, but also because once such things are understood, they will no longer be considered 'miracles'. One day God will no longer be the default explanation for all 'coincidental' or inconvenient occurrences.
I have a dream... that you will one day have the same epiphany that I had while counting bacteria in a microbiology lab. If my thoughts can determine whether I will find these bacteria alive or dead, then this entire endeavor is pointless and there is a "bigger problem" that I should be working on.
I have a dream... that one day we will find a meaning behind such a system, because I don't like the idea that life/conscious experience can be this capricious without a reason.
I have a dream... that someday one of you will want to talk with me about this idea, and together we will be able to expand this research.
Saturday, October 3, 2009
Measuring the Immeasurable
"Essential science and common sense keep coming back to data, fact and observation."
The fact that I can't lay my hands on this paper immediately now offends my hacker sensibilities. Grrr. (That paper would be Radin, D, & Atwater, F.H., Exploratory Evidence for Correlations Between Entrained Mental Coherence and Random Physical Systems, Journal of Scientific Exploration, Vol 23 (3), 2009, for the google search engines. ;)
And yet because I can't lay my hand on it, I can speculate about what the author actually did. (This is a fun game that all grad students should learn to play. Read only the abstract and flesh out what you think the paper should contain. Then go back and compare your notes to the actual paper. Worship anyone who exceeds your expectations.)
I'm temporarily suppressing my 'goody, goody gumdrops!' reaction to the topic in favor of my more prosaic 'please tell me you're going to discuss competing explanations of the phenomenon' reaction. If I were really, really in a good universe, the data collection and distribution processes would have been designed with various competing ideas about Observer Theories and multiple-observer dynamics in mind. (Yours truly is really looking forward to the day when we can stop 'proving' that there is an effect, and focus on using experimental data to tear apart competing theories.)
So here are the questions and ideas that came to mind when I read this abstract/blogpost.
Who were the subjects?
In an ideal study, they would be trained meditators who had been evaluated and matched on self-assessed descriptions of their meditative experiences, and various psychological parameters such as absorption. This helps ensure that any effects obtained are not the result of a single 'strongest' observer. Ideally such subjects could also be pretested to determine their individual ability to affect an RNG in the desired state of consciousness.
Depending on the nature of the research questions being asked, the groups of subjects would also vary in size, to attempt to correlate magnitude of effect with number of 'coherent' observers. The magnitude of effect achieved by trained meditators might also be compared to that achieved by untrained subjects with similar psychological parameters. (Previous studies suggest a straightforward prediction of 'more coherent observers = greater deviation from pure randomness'.)
What was done to eliminate/minimize an experimenter effect?
Let me try to be as clear as possible while not writing a separate treatise on the topic. At some point it's going to become necessary to establish that any observed deviation from randomness is not due to a time-displaced effect generated by the one observer who views all the critical data in a form most-suited for discriminating between outcome options - the experimenter. This is a minor point if you are trying to prove that there is an effect, but a relatively major one if you are trying to establish the source of the effect. (Click here for a discussion on taking the next step with observer theories. Awesome reference list for those who scroll all the way to the bottom of that page.)
Similar arguments could be made for various other ways of appropriately isolating the results/observations, but this can quickly get out of hand. It'll probably only be useful when the task is no longer to 'prove' the effect, but to model the data according to a specific theory/set of theories.
What does this mean - "An exploratory hypothesis predicted that fluctuations in entrained mental coherence associated with the workshop activities would modulate the random data recorded during the workshops." ? (My emphasis.)
I'm not that familiar with binaural-beat rhythms or the specific effects that they have on consciousness/brain waves/neurochemicals/etc. (Note to self: Read that paper.) I suspect that the author is using 'fluctuations' to refer to the difference between the 'coherent' and the (presumably) noncoherent times during the trial period, rather than to differences within the period of exposure to the binaural-beat rhythms itself. Although now I'm beginning to wonder how precisely the latter could be modelled...
What justifies this time frame? "Coherence was entrained by having groups listen to a prescribed series of binaural-beat rhythms during a 6-day workshop." (My emphasis, again.)
Is it necessary to use 6 days of data because the subjects were unfamiliar with the technique prior to the workshop? Was the same amount of data collected on all six days? If so, are there any differences in the data from 'coherent' periods in days 1 and 2, and the same periods in days 5 and 6?
"Random data were continually collected from these RNGs during 14 workshops." That's 84 days of data for each of 3 RNGs, and another 56 days of data for the control period. (Respectful pause for the enormity of the data set.) Now I wonder how robust any individual set of data would have been...
I'm beginning to suspect that this experiment took advantage of an existing program in order to collect data. Therefore, all the parameters that one might like to control or manipulate were not necessarily available for such control. (I could be wrong. You should, of course, read the actual paper.)
What, exactly, is being correlated, and why? "This was predicted to result in positive correlations between random data streams collected from one workshop to the next."
It sounds like data streams from the 'coherent' periods were correlated to each other, across workshops...
Now, I'm no expert on analyzing enormous sets of random numbers. I couldn't give you the algorithm for determining that one set of numbers is less 'random' than another, although it might be fun to see if I could derive one. But it seems like there is only one thing to compare...
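Just to have something concrete in hand while I flail: one crude way to quantify how far a single 0/1 stream sits from pure chance is a z-score on the count of 1s against the fair-coin expectation. A back-of-the-envelope sketch only (the streams below are invented), and almost certainly not the statistic the authors actually used:

```python
import math

def deviation_from_chance(bits):
    """Z-score of the count of 1s in a 0/1 stream against the fair-coin expectation.
    A crude first-pass measure of how 'non-random' a stream looks."""
    n = len(bits)
    ones = sum(bits)
    expected = n / 2
    std_dev = math.sqrt(n) / 2        # binomial standard deviation with p = 0.5
    return (ones - expected) / std_dev

# invented streams: one balanced, one nudged toward 1s
balanced = [i % 2 for i in range(1000)]
nudged = [1] * 540 + [0] * 460
print(deviation_from_chance(balanced))   # ~0: nothing to see
print(deviation_from_chance(nudged))     # ~2.5: a stream that 'leans'
```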
So, is the degree of randomness being compared? And if so, why isn't the degree of randomness in the set of numbers from the 'coherent' period being compared to the degree of randomness in the set of numbers from the control period? I understand comparing data within device, within group, and within workshop. And I understand comparing 'coherent' periods to control periods. What I don't understand is what could be compared across workshop but within 'coherent' data? Especially if subjects were not matched on any critical parameters.
From the next sentence - "Results showed that during the workshops the overall correlation was positive, as predicted (p = .008); during control periods the same RNGs produced chance results (p = .74). " - it sounds like the correlation was between the time period (coherent/noncoherent) and the degree of randomness. Presumably, a lower degree of randomness was seen in the data during the 'coherent' times than was seen during the noncoherent times. (Now I also want to know the raw magnitude of the difference in degree of randomness.)
So what, if anything, was compared across workshop but within 'coherent' periods? Direction of deviation from random center point? Magnitude of deviation from randomness? Temporal location of deviations from randomness relative to 'coherence' activity?
I are confused. Even the espresso-laced truffles aren't enough to break the mental deadlock on this. If this becomes clearer to me at any point, I'll post an update.
[Aside: If you are a certain author of this paper, that was probably more than you ever wanted to hear from any single reader. Consider it payback for taunting me with the paper but not giving me the actual paper. ;) ]
Update 10/29/09 : I finally got my hack on and scored a copy of this article. (It's been a busy few weeks.) I didn't realize the extent to which this study would emphasize the "felt sense when individual thoughts and actions seem to merge into a single group thought or action." I can't speculate as to why the individuals' "subjective shift" towards the "same wavelength" might be an important element in the success of a collective effort to affect 'randomness'. Nor am I entirely sure why this cohesiveness is necessary, or qualitatively different than all participants simply focusing on the same outcome. According to the Global Consciousness Project, the simultaneous focused attention of large groups can produce similar effects. (And the thought of a "group mind" scares me a little bit.)
As anticipated, the subjects were participants in a workshop designed for other purposes.
As for my botched attempt to understand the data analysis... I had an entirely different picture of what the data stream from the RNGs would look like. In actuality, the data stream consisted of 0s and 1s. The authors predicted that these data streams "should have been modulated by mind in approximately the same way." This could mean one of two things - either the actual values produced by each RNG were the same (all 0s), or the change from one data point to the next was equally present (or not) in each data stream. (Ex: With starting values of 0,0,1, the next set of bits are 1,1,0. Each bit changed from its previous value.) At this point my understanding of the data analysis breaks down. Again. I understand the math, but not the rationale for choosing this method. I refer you to the original paper. Again.
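To pin down the two readings above, here is a toy sketch of an agreement score under each one - raw bit values matching across streams versus the bit-to-bit changes matching across streams. The streams are invented and this is emphatically not the paper's analysis; it's only my attempt to make the distinction concrete:

```python
def agreement(a, b):
    """Fraction of positions where two 0/1 streams hold the same value."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def changes(bits):
    """1 where a bit differs from its predecessor, 0 where it repeats."""
    return [int(x != y) for x, y in zip(bits, bits[1:])]

# three invented RNG streams (the real data sets were vastly longer)
rng1 = [0, 0, 1, 1, 0, 1, 0, 0]
rng2 = [0, 0, 1, 0, 0, 1, 1, 0]
rng3 = [1, 1, 0, 0, 1, 0, 1, 1]

# reading 1: the streams hold the same values at the same times
print(agreement(rng1, rng2), agreement(rng1, rng3))

# reading 2: the streams change (or hold steady) in step, regardless of value
print(agreement(changes(rng1), changes(rng2)), agreement(changes(rng1), changes(rng3)))
```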
There is one other issue that puzzles me - the belief of the authors that the physical proximity of the RNG machines to the subjects was somehow important to the results. "Data generated by the same RNGs run at distant locations... did not" show the same correlations that were evident in the close-proximity RNGs. In fact, from my perspective, there is no reason to expect such a difference, unless the information, feedback, or expectations were different between the close and the distant RNGs.
Though no attempt at discrimination was made between theories about the effects, the author did acknowledge the potential for experimenter effects. Given the authors' access to the data and expectations about the data, especially relative to those of the subjects (who were largely ignorant of the RNGs and the data that was being collected), I would argue that perhaps a good portion, if not all, of the effect was an experimenter effect. The fact that RNGs were placed at a distance from the test site, as well as in close proximity to the subjects, indicates that one or more of the authors may have had expectations of seeing a difference in the data based on physical distance. At this point, the test is to see if the experimenter can replicate the experiment but produce data that indicates positive correlations in the distant RNGs as well as the close RNGs, by will alone.
Tricky business, but I have a sneaking suspicion that a better understanding of these effects will only come if we are much more precise in identifying who has the most information about the experiment, and his/her motivations and expectations for the experiment/outcome. (Yeah, I'm still looking for that poster I promised you earlier.) For all we know at this point, my observation of this paper had a significant effect on the outcome. We would test that idea by providing the data/results to all interested parties simultaneously, and then destroying them to prevent future observations that might influence the outcome. This is the antithesis of academic publishing, but it's critical for testing/eliminating certain elements of 'retroactive-PK'.
If you haven't quit reading this post yet, then extra points for you, but really - what's wrong with you?! ;)
Wednesday, June 17, 2009
In Search of Time
"Is time nothing more than change? Or is time more fundamental - is it the mysterious entity that makes change possible, a kind of foundation on which the universe is built? Or is it just the opposite: as much as we like to speak of the 'river of time', could the river be dry, its flow an illusion? (And how can it flow if it is meaningless to speak about the rate at which it flows?)" - Dan Falk, In Search of Time (2008), p. 273.
Reading this book (which I really enjoyed) brought to mind our friend Simon the physicist and the nature of time in his hypothetical universe. So as to not admit such tendencies in myself on the record, we'll also introduce the fact that Simon the physicist likes to drive fast. He also likes to be the first one to respond to the green light after being stopped at an intersection. It's a weird compulsion that he has and we're not sure where it comes from. But he's very good at being the first one to hit the gas once a traffic light turns green. He's so good at this that he has sometimes wondered why he is so much faster than everyone else at responding to the green light...
I won't force Simon the physicist to take responsibility for the thoughts that follow, as they come from a distinctly-nonphysicist perspective. And I will credit Falk for putting together so many engaging ideas in such close proximity in his book, many of which prompted long chains of interesting thoughts. What follows is one of them, and may be total crap, but I sure had fun piecing it together.
Perhaps the biggest 'stop and think' idea I ran into while reading this book was the idea that "light, too, can affect time. Light carries energy, and Einstein had shown that mass and energy are equivalent - so light should also be able to warp space and time." (p. 181) Being conditioned to view our perception of light (and time and space) as functions of neural activity, my thoughts jumped to the path that a photon triggers once it hits the retina. Around this same time I was also reminded (by I remember not what) of the idea that processing speed may play a role in cognitive differences.
If neural signalling is ultimately the result of a transfer of energy, then the initial energy of the stimulus (photons) is quickly diminished or amplified by the unique dynamics of an individual's neural pathways - beginning as soon as the photon hits the first layer of cells in the retina. It follows then (without too much difficulty) that individual differences in neural/neurochemical dynamics affect how quickly something is perceived and/or reacted to. Neural density, differences in neural pathway configuration, and differing concentrations of neurochemicals may all impact the speed at which the original signal (photon) registers in conscious awareness. (This presumes that much of the early processing of the signal is not available to our conscious awareness: a view which is widely accepted and supported.)
So the question becomes... If our rate of perception is variable (even in the slightest degree), then can we not also reasonably say that time (or the rate at which we perceive change) moves at a different rate for each observer? And if time moves at a relative rate for each observer, then how/where can we say that time is an absolute feature of the universe?
But "clearly time appears to exist" (q) saith the physicist. Yes, but does it move at the same speed for everyone? And if it doesn't, then how do you and I reach an agreement that we both saw the same green light and I just smoked your [deleted]? What is present that underlies that observation? Do I functionally exist ahead of you in time if I can process and react to stimuli faster than you?
At this point the web of thoughts became tangled as I tried to connect special relativity to a model of multiple-observer dynamics. Trust me, you don't want me to go there. But do ponder that last question the next time you get smoked at an intersection. :)
And yes, I realize that I'm using a model/description that assumes an arrow of time in order to talk about how time doesn't exist in an absolute sense. It's interesting to think about how the effects of the reverse arrow of time might manifest themselves in that same model, but that's a topic for another post.
Tuesday, March 10, 2009
The Illusion of Knowledge
"You have no responsibility to live up to what other people think you ought to accomplish. I have no responsibility to be like they expect me to be: it's their mistake, not my failing."
I have no illusions about the state of my knowledge of physics. Unfortunately, the same cannot be said of others. Like my friend, who emailed me today with the following question - "Can you explain this to me?"
Umm...
My fault to be sure, for having mentioned quantum physics in conjunction with what I was working on, to the chagrin of actual physicists everywhere. But I love a challenge, and I'm feeling feisty. (Just yesterday I learned how to read a basic space-time diagram of particle interactions. Not to be confused with a Feynman diagram, apparently, though I'm not entirely sure why.) Relatively speaking, I've got a better chance of being able to explain this article than most people, so why not...
Naturally, the preferred method for explaining such things is to refer the questioner to someone who has already explained it better than you could. (Preferably with pictures.) But where's the fun in that?
The first thing to do is to understand Hardy's paradox. In the absence of an article from our usual source - the almighty Wikipedia - we are forced to stray into press releases and blog postings to get our bearings. Hardy's paradox comes from a thought experiment that applies the fundamental tenet of quantum theory - an unobserved particle existing in a superposition of all possible positions - to a particle-antiparticle collision. Hardy reasoned that the attempts to create such a collision (see picture) might result in the particle and antiparticle disturbing each other without actually annihilating each other (as they are required to do by definition) due to their respective half-in half-out quantum states of being. (Curious minds stop to ponder what 'disturb but not annihilate' looks like...)
Hardy's design was previously thought to be untestable, as attempting to measure this 'disturbance' was itself a disturbance. That is, until the advent of interaction-free measurement or weak measurement, which itself violates a basic tenet of quantum physics - that the measurement of quantum systems (systems in a superposition of possible states) fundamentally alters those systems causing them to collapse "back to some kind of normality" (a single state). This kind of 'weak' measurement utilizes a measurement interval which is smaller than the inherent level of uncertainty about the properties of the particle. This means that you don't really know what you've got for any single measurement, but in theory you are able to deduce things from the average of such measurements repeated many times.
Your article reports on a modified test of Hardy's paradox, which used photons instead of particles and antiparticles. (Photons are their own antiparticles.) The claim is that physicists were able to measure the system without really measuring it, and can therefore draw conclusions about the real (quantum) state of reality. Actually, this same experiment has been done twice by different groups/labs using the same techniques. And the weirdness is that they found regions which had fewer than zero particles in them. "Fewer than zero particles being present usually means that you have antiparticles instead." But photons are their own antiparticle, so what's going on? The analogy is made to Hardy's improbable hypothetical outcome of particles and antiparticles that disturb but fail to annihilate one another. But other than a shared sense of weirdness - "It looks impossible. But then I realised it was the only way to see it. It's beautiful." (here) - I'm not sure how the analogy applies.
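My best guess - and without the source articles it is only a guess - is that the "fewer than zero particles" claim comes from a weak value. In the standard formulation, for a system prepared (pre-selected) in one state and later found (post-selected) in another, the weak value of an observable is

```latex
% pre-selected state |psi>, post-selected state |phi>, observable A -- labels assumed
\[
  \langle A \rangle_{\mathrm{w}}
  \;=\;
  \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
\]
```

Unlike an ordinary expectation value, this quantity is not confined to the eigenvalue range of the observable; when the post-selected state is nearly orthogonal to the prepared one, it can come out negative (or even complex). Read the observable as 'number of photons in this arm', and a negative weak value is presumably what gets reported as a region holding fewer than zero particles.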
Mind you, I don't have the source articles, so there's a good chance I'm missing something. But as far as I can tell, the point is basically that "there is a way to carry out experiments on the counter-intuitive predictions of quantum theory without destroying all the interesting results" and that "there are extraordinary things within ordinary quantum mechanics."
(That was actually fun! Bring it on!)
Now if you were asking if I can explain what it means, or how it fits with my idea... (sigh)
Friday, March 6, 2009
The Age of Entanglement
I am but a tool of the Wikiether. (That will be in a science fiction book one day. Watch for it. ;)
It takes a lot for me to fire up the computer on a Friday night, especially when I had other plans.
Don't get excited. This is probably complete and utter crap. But when something clicks (or appears to click), you listen. And then you write it down.
It has occurred to me that eventually observations of entangled behavior would have to be accounted for by the 5-dimensional model. (I'm skipping words as I type. This is not a good sign.) Tonight it occurred to me that the answer might really be simple after all.
What if observed entanglement behavior is nothing more than a reflection of the transfer or replication of the bias for state selection from the representation of one object to the representation of another?
This idea would have to be supported by massive parallels between the neurophysics of knowledge representation and the known observations of entanglement creation and destruction. We established in our last post that the creation of entanglement is a bizarre process, one which is apparently not as cut-and-dried as I previously believed it to be. When I search for information on the destruction of entanglement, I am delighted to find that there is data on this phenomenon. 'Entanglement Sudden Death' or ESD "can arise when two sources of environmental 'noise' act to disrupt an entangled state. Each source would individually induce a more gradual asymptotic decay, but in tandem they can trigger ESD." (here)
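(To make the 'gradual asymptotic decay versus sudden death' distinction concrete, here is a toy sketch - not modeled on the actual photon experiment - using a generic two-qubit state a|00> + b|11| weighted toward the doubly excited term, textbook amplitude-damping noise, and the Wootters concurrence as the entanglement measure. Noise on one qubit alone only kills the entanglement in the limit of complete decay; the same noise on both qubits kills it at a finite noise strength.)

import numpy as np

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix.
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)
    lam = np.sqrt(np.sort(np.abs(lam.real))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def amplitude_damp(rho, gamma, qubits):
    # Decay |1> -> |0> with probability gamma on each listed qubit.
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    for q in qubits:
        ops = [np.kron(K, np.eye(2)) if q == 0 else np.kron(np.eye(2), K)
               for K in (K0, K1)]
        rho = sum(op @ rho @ op.conj().T for op in ops)
    return rho

# Entangled initial state a|00> + b|11>, with more weight on the doubly excited |11>.
a, b = np.sqrt(1 / 3), np.sqrt(2 / 3)
psi = np.array([a, 0.0, 0.0, b])
rho0 = np.outer(psi, psi)

for gamma in (0.0, 0.3, 0.6, 0.75, 0.9):
    one = concurrence(amplitude_damp(rho0, gamma, qubits=(0,)))
    both = concurrence(amplitude_damp(rho0, gamma, qubits=(0, 1)))
    print(f"gamma={gamma:.2f}   one noise source: {one:.3f}   two sources: {both:.3f}")

With a single noise source the concurrence only reaches zero as gamma approaches 1; with both sources acting it hits exactly zero around gamma of roughly 0.71 and stays there - entanglement dying at a finite time, which is the ESD pattern quoted above.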
Several minutes elapse while I pursue the 2007 Almeida et al. source article. (Via.) And therein I meet the concept of decoherence (again). "[Q]uantum decoherence is the mechanism by which quantum systems interact with their environments to exhibit probabilistically additive behavior... [and] gives the appearance of wave function collapse." (W) (Things click. I feel slightly wiser.) So decoherence is how we avoid the need for an actual wave function collapse, yes? More study is required on my part, I know, but for now it is enough to know that decoherence is a 'theoretical concept', and to note the explanation for ESD attributed to Almeida: "The presence of decoherence in communication channels and computing devices, which stems from the unavoidable interaction between these systems and the environment, degrades the entanglement when the particles propagate or the computation evolves. Decoherence leads to local dynamics, associated with single-particle dissipation, diffusion, and decay, as well as to global dynamics, which may provoke the disappearance of entanglement at a finite time."
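(And since decoherence is what did the clicking for me, here is a minimal single-qubit picture of what "gives the appearance of wave function collapse" means - using a generic textbook dephasing channel, which is my stand-in, not the specific noise in these experiments. The off-diagonal terms of the density matrix carry the quantum interference; dephasing fades them away while leaving the populations alone.)

import numpy as np

# A qubit in the equal superposition (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

def dephase(rho, p):
    # Phase-flip (dephasing) channel: rho -> (1-p)*rho + p*(Z rho Z).
    # It scales the off-diagonal (coherence) terms by (1 - 2p) and leaves
    # the diagonal (population) terms untouched.
    Z = np.diag([1.0, -1.0])
    return (1 - p) * rho + p * (Z @ rho @ Z)

for p in (0.0, 0.25, 0.5):
    print(f"p = {p}:")
    print(np.round(dephase(rho, p), 3))

At p = 0.5 what remains is diag(0.5, 0.5): for all practical purposes a 50/50 classical mixture - "probabilistically additive behavior" - even though no measurement was made and no actual collapse ever happened.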
The question remains - Can the observed dynamics of entanglement and decoherence be mapped onto the dynamics of knowledge representation? Especially those dynamics which deal with the creation of associations and overlapping representations? This would require a detailed examination of the neural substrates of associative memory, though perhaps on a level that is not currently possible.
Now that I'm rolling on this line of thought... The stability of entanglement (or not) would be a reflection of the stability of the expectation/bias that the two entangled particles would behave as such. The cumulative state of the information about such an entanglement would be spread across multiple observers, and the displayed behavior between the particles would change in response to the shifting bias for state selection (of a particular observation) as anchored by the relevant set of observers.
What would falsify this idea? (The question you should always ask, even if you don't have a ready answer...) Hell if I know, as I barely have even the framework of this idea, let alone the data to support it. But I bet I'm going to lose sleep thinking about it... Damn.
(I warned you that this wasn't going to be pretty. ;)