Laboratory Report: The Sense of Being Stared at

Assessment Task 1: Laboratory Report: The Sense of Being Stared At
Due date: 24th August, 2012

Details of task: Do we know when we're being watched? Some parapsychologists, e.g., Rupert Sheldrake [see Sheldrake (2003) in your readings], suggest that we do and that this can be tested – at least if we accept the statistical significance level of .05 as evidence, as most scientists do.

The laboratory exercise: Quantitative data collection

Following Sheldrake's method, we'll attempt a replication of a study in which the receiver (someone you recruit) sits in a chair with their back turned to you, the sender. You will then:

(1) toss a coin, having previously decided whether a head means "stare at the back of the receiver's head for 10 seconds" and a tail means "look away from the back of the receiver's head for 10 seconds", or vice versa; and

(2) after saying "OK" to indicate the beginning of a trial, stare at (or look away from) the back of the receiver's head for 10 seconds, for a total of 40 trials. After each trial, ask the receiver to say "yes" or "no", where "yes" means the receiver thinks they were being stared at by you, and "no" means the receiver thinks they were not being stared at.

Note: As you approach 40 trials, you may have to "fix" the last few trials without coin tosses so that you end up with 20 trials of each of the staring and non-staring kind. Recruit at least 4 receivers if you can, preferably two of each gender, but this is not essential.

The receivers' responses (Y or N) should be noted on the data sheet under the R (Receiver) column, next to each Y or N under the S (Sender) column, where Y means the sender was staring at the receiver's head for 10 seconds and N means the sender was staring away (at least 90 degrees away) for 10 seconds. Nobody else should be staring at the back of the receiver's head, by the way, and make sure there are no reflections or other cues! The S, R, and Comments columns should be completed while running the trials, and the other columns later. Here is an example data sheet over 10 trials (instead of 40) to give you an idea of what's expected (the Questionnaire summary under the table is explained later).
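To make the "fixing" rule concrete, here is a minimal sketch (in Python, purely for illustration – the function name and structure are my own, not part of the assignment) of the sender's procedure: toss a coin on each trial, but once either kind of trial reaches 20, force the remainder so the sequence ends balanced.

```python
import random

def make_trial_sequence(n_trials=40):
    """Simulate the sender's procedure: a coin toss decides each trial
    ('Y' = stare, 'N' = look away), but the last few trials are 'fixed'
    so the sequence ends with exactly n_trials/2 of each kind."""
    half = n_trials // 2
    sequence = []
    for _ in range(n_trials):
        if sequence.count("Y") == half:    # staring trials used up
            sequence.append("N")
        elif sequence.count("N") == half:  # non-staring trials used up
            sequence.append("Y")
        else:                              # a genuine coin toss
            sequence.append(random.choice("YN"))
    return sequence

seq = make_trial_sequence()
print(len(seq), seq.count("Y"), seq.count("N"))  # 40 20 20
```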

 

S (Sender)  R (Receiver)  HIT (correct)  NON HIT (error)  Comments
Y           Y             X
Y           N                            X
N           N             X
N           Y                            X
N           N             X
Y           Y             X                               Slow response
N           N             X
Y           N                            X                Changed mind
Y           Y             X
N           Y                            X
Total 10                  6              4

Summary of Questionnaire Answers – 1: Male  2: Yes  3: No  4: Logical

A HIT consists of a Y under both the S and R columns (suggesting that the sender may have been sending and the receiver receiving successfully on that trial), or an N under each column (suggesting correct detection of not being stared at). A NON HIT (error) occurs when there is a mismatch between the Y and N in the S and R columns. These example data (6 hits and 4 errors) are well within what we'd expect by chance over 10 trials, and we'd accept the null hypothesis if these were the data we had; but if this trend continued over 40 trials it could be a different story, and if over 400 trials 60% were hits the story would be different again. You'll need to refer to your notes on statistics to understand the relationship between "n" (number of trials) and the probability of various outcomes. You will receive some specific notes on analysis of your data later.

Qualitative data collection

Because there is some evidence relating certain paranormal sensitivities to human individual differences, we need to be able to correlate HIT (and error) rates with such differences. A short questionnaire will be given to each receiver (see personal details questionnaire below), but no names are needed, so privacy is not an issue. A summary of the receiver's answers should be given in the spaces at the bottom of the data sheet.
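The HIT/NON-HIT scoring rule can be sketched in a few lines of Python (the function name is my own invention, shown only to illustrate the rule): a trial is a HIT whenever the S and R entries match.

```python
def score_sheet(sender, receiver):
    """Score a data sheet: a trial is a HIT when the Sender (S) and
    Receiver (R) entries match ('Y'/'Y' or 'N'/'N'); any mismatch is
    a NON HIT (error). Returns (hits, errors)."""
    hits = sum(s == r for s, r in zip(sender, receiver))
    return hits, len(sender) - hits

# The 10 example trials from the table above:
S = list("YYNNNYNYYN")
R = list("YNNYNYNNYY")
print(score_sheet(S, R))  # (6, 4) – six hits, four errors
```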

Please return to George by Friday 3rd August, 2012. Scanning the sheets and emailing them is the best option.

Analyses of data and report writing

Shortly after I have received your 4 data sheets (questionnaire results at the bottom), all students will receive (on Moodle) a compilation of them, along with instructions for how to prepare the report.

Discussion of results

Before knowing the class results, all students should consider what issues might be included in a discussion of them. Here are some topics that might be relevant and worth reading about.

1. Were there opportunities for deliberate faking, or attempts to mislead, in this study? If so, can you think of ways to address or deal with them?

2. Is a statistical analysis that compares observed data with what chance would predict adequate, or is more needed to show a real effect?

3. Are there better ways of testing whether we can tell we're being watched? For example, do we need to find ways that reduce the artificiality of "laboratory" tests, or "ESP" on demand? Are there ways to test the "being watched" phenomenon in an environment more conducive to its appearance?

4. Are "weak effects" requiring very high numbers of observations for their appearance worth taking seriously?

5. Excluding fraud, what arguments would strong skeptics be likely to use to refute any findings that suggest we know when we're being watched?

This list is neither "compulsory" nor exhaustive. It is intended to give you an idea of the kinds of issues that are relevant in a discussion, in addition to what we actually find in the analysis.

I would recommend looking at the articles on Rupert Sheldrake's web site: http://www.sheldrake.org/papers/ They will provide an excellent source of relevant information, and you will find articles by other authors that are of interest too!

DATA SHEET No. ( ) – to be completed by students and returned by email.

Trial  S (Sender)  R (Receiver)  HITS  NON-HITS (errors)  Comments
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
Total

Summary of Q'nnaire  Ans. 1: ___  2: ___  3: ___  4: ___

Personal details questionnaire

Receiver No. _______ (no names please)

Please circle one answer for each question.

1. Your gender?

Male   Female

2. Do you believe that some people have psychic (paranormal) abilities?

Yes   No

3. Do you believe that you have psychic abilities?

Yes   No

4. Would you describe yourself as mainly artistic (interested in art, sculpture, music, or design, etc.) or mainly logical (interested in mathematics, computers, chess, reasoning, or science, etc.)?

Artistic   Logical

 

Data analysis: Sense of being stared at report

There are at least two ways in which your data could be analysed. You must choose between the two analyses, and put your analysis of choice in the Results section of your assignment in APA format. The first uses a single-sample t-test.

Single-sample t-test

A single-sample t-test divides an actually observed (experimentally obtained) number by the number that should be observed according to chance. This division yields a t ratio. If the resulting ratio is 1, there is no significant difference between what you observed and what chance predicted. For example, if the observed number is 20, and the expected number by chance is also 20, then t = 20/20 = 1. Your observed number has to be bigger than the number expected by chance to be significant.

Next, let's substitute the phrase "difference between two numbers" for "number" in the previous paragraph. For example, a difference of 2 is obtained if you subtract 8 from 10; that is, 10 – 8 = 2. The number 2 is the difference between 10 and 8, but it's still a number. Let's rewrite the first paragraph with "difference" where before we had "number".

A single-sample t-test divides an observed difference by the difference we should observe according to chance (see example below). So now we have two differences to deal with – the one we observed and the one we expected by chance. If the difference we observed is larger than the one we expected, our t ratio will be bigger than 1. It might be big enough to become significant. To be big enough, it must have a probability of 0.05 or less of happening by chance.

Now, let's assume we had 131 subjects in our sense of being stared at experiment, and each completed 40 trials. Since the probability of guessing correctly on any trial is 0.5, we would expect 0.5 (half) of the 40 trials to be hits. That is, we'd expect 0.5 x 40 = 20 hits.

Now suppose the mean number of hits over the 131 subjects was 21.41. This is the mean of the 131 subjects' hit scores, one score for each subject. In this case, the difference between the obtained mean of 21.41 and the expected mean is 21.41 – 20 = 1.41. This is our obtained difference.

To calculate the t ratio we must now divide this difference by the difference expected by chance. That is:

t = difference obtained / difference expected by chance.

How do we work out the difference expected by chance? The answer is that we take the standard deviation of the 131 scores (hits) and divide it by the square root of N, where N = the number of subjects, which is 131 in this case. (The standard deviation is a kind of average amount by which the 131 scores varied from their grand mean of 21.41.)

Let's assume that the standard deviation was calculated as 4.71, so the difference we would have expected by chance is 4.71 / √131

= 4.71 / 11.45 = 0.411

Now we can work out our t ratio, which is 1.41 / 0.411 = 3.43.

Now we need to work out the probability of getting a t ratio as big as 3.43. Just about any statistics book will tell you that you need a t value of about 2.0 to reach the 0.05 level of significance in a two-tailed test (which simply means you allowed for an observed mean number of hits to be above or below the expected mean of 20 hits). Since our hypothetical obtained value of t = 3.43 is larger than the critical value of t = 2.00, we are entitled to conclude that a mean number of hits of 21.41 was not a chance event (p < .05).

Notice how the size of N is important. Suppose we had the same grand mean (mean of all the subjects' scores [explained above]) and standard deviation, but only 20 subjects. The t value then becomes:

t = (21.41 – 20) / (4.71 / √20) = 1.41 / (4.71 / 4.47) = 1.41 / 1.05 = 1.34

Since we needed a t of at least 2 to reach significance at the 0.05 level, the result is no longer significant. The smaller N has made the difference.

Using the binomial test to calculate a Z value

The formula:

Z = (X – Np) / √(Npq)

where

X = number of hits
N = number of trials
p = probability correct on any trial
q = 1 – p
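The single-sample t calculation above can be reproduced in a few lines of Python (a sketch of the arithmetic only; the function name is my own):

```python
import math

def single_sample_t(obtained_mean, expected_mean, sd, n):
    """t = obtained difference / difference expected by chance,
    where the chance difference is the standard error sd / sqrt(n)."""
    return (obtained_mean - expected_mean) / (sd / math.sqrt(n))

# The worked example: grand mean 21.41, expected mean 20, SD 4.71.
t_131 = single_sample_t(21.41, 20, 4.71, 131)  # 131 subjects
t_20 = single_sample_t(21.41, 20, 4.71, 20)    # only 20 subjects
print(round(t_131, 2))  # 3.43 – exceeds the critical value of ~2.0
print(round(t_20, 2))   # 1.34 – no longer significant
```

Notice that only N changes between the two calls, yet it alone decides whether the same 1.41-hit difference counts as significant.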

can be used to test whether the number of hits obtained is significantly different from the number of hits expected. The formula is similar to a single-sample t in that the numerator, X – Np, is the difference between the obtained number of hits and the number of hits expected by chance. The denominator, √(Npq), is the difference expected according to chance.

Now, with 131 subjects doing 40 trials each, there would be 5240 trials altogether. If we assume a 0.5 probability of guessing correctly on any trial, we'd expect 5240 x 0.5 = 2620 hits. The number of hits we obtained with our set of 131 participants was 2805. Is this difference significant?


Using the formula to calculate Z, we have:

Z = (2805 – 2620) / √(5240 x 0.5 x 0.5) = 185 / √1310 = 185 / 36.19 = 5.1

You only need a Z score of 1.96 for significance at the 0.05 level (two-tailed), so a Z score of 5.1 is significant!

Which method do you think is "better"? Remember that N has a profound effect on how significant a finding is, and many critics of parapsychology argue that a tiny effect (in our case a mean of 21.41 hits when 20 is expected by chance) is artificially made to look significant by running many hundreds or thousands of trials. They argue that these tests are inappropriate on their own.

In addition, the use of 5240 trials assumes some kind of equality or independence across trials. However, the fact that 131 different subjects had 40 trials each could violate such an assumption. Note that the example Irwin gives of this test (see p. 53) uses data from one subject. This kind of "non-independence" of data is a reason why many critics reject meta-analyses, which combine data across many experiments as well as many subjects.

Other data

We will also do some correlation tests to see whether those who scored the highest number of hits also tended to be female, believers, those seeing themselves as having paranormal ability, or those having artistic tendencies. It is up to you whether or not you want to include these correlations in your report. You can choose to use some of the correlations and ignore others.

Word limit: 1500 words
Value: 30%

Presentation requirements:

Follow the format for writing a report – i.e., abstract, introduction, method, results, discussion and references. See "How to write a report" PPT, and "Guide to writing a report" in Unit Materials.
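The binomial Z calculation can likewise be checked with a short Python sketch (again, the function name is illustrative only):

```python
import math

def binomial_z(hits, n_trials, p=0.5):
    """Z = (X - Np) / sqrt(Npq): compares the obtained number of hits
    with the number expected by chance over n_trials trials."""
    q = 1 - p
    return (hits - n_trials * p) / math.sqrt(n_trials * p * q)

# The worked example: 131 subjects x 40 trials = 5240 trials, 2805 hits.
z = binomial_z(2805, 5240)
print(round(z, 1))  # 5.1 – well beyond the 1.96 needed at the 0.05 level
```

Remember, though, the caveat in the text: treating all 5240 trials as one pool assumes independence across trials, which 131 subjects contributing 40 trials each may violate.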
Say what was done by the class as a whole (to the best of your knowledge) and discuss the results in terms of other researchers' findings and problems that may disqualify the apparent significance of the finding (e.g., the opportunity for fraud, inaccurate scoring, sensory leakage, inappropriate statistical tests, etc.). Take care to reference any claims you make using APA conventions. Provide a reference list in APA format at the end of your report.

Prescribed text(s) and readings

The textbook you are required to read for this subject is:

Irwin, H., & Watt, C. A. (2008). An introduction to parapsychology (5th ed.). London: McFarland.
