Correcting for Multiple Comparisons in Studies of Dream Content: A Statistical Addition to the Hall/Van de Castle Coding System

G. William Domhoff & Adam Schneider

University of California, Santa Cruz

NOTE: If you use this paper in research, please use the following citation, as this on-line version is simply a reprint of the original article:
Domhoff, G. W., & Schneider, A. (2015). Correcting for multiple comparisons in studies of dream content: A statistical addition to the Hall/Van de Castle coding system. Dreaming, 25, 59-69.


This article addresses the issue of potential false positives when multiple tests are carried out in comparing 2 samples with the content indicators used in the Hall and Van de Castle (1966) coding system for dream content. Using an algorithm based on the Benjamini-Hochberg (Benjamini & Hochberg, 1995) correction for the False Discovery Rate, it first compares findings for men and women in a large normative sample; all 12 of the statistically significant differences at the .05 level remain, along with 10 of 11 below .01. The article then compares results from an individual's dream series with the norms for women; 10 of the 12 significant p values at the .05 level remain, along with 4 of 5 below .01. The article concludes that earlier findings with the Hall-Van de Castle system can be viewed as established findings, and recommends that the correction formula be used with large samples.

Multiple comparisons of the same pair of samples can lead to major statistical problems in several scientific disciplines, including psychology, ecology, epidemiology, and others. The issue arises in studies of dream content, and perhaps especially with regard to the Hall and Van de Castle (1966) coding system for dream reports, because the many studies that have made use of it in a wide range of countries over the past 50 years sometimes include 18 or more tests of the same samples of dream reports (e.g., Karagianni et al., 2013; Mazandarani, Aguilar-Vafaie, & Domhoff, 2013; Németh & Bányai, 2011; Oberst, Charles, & Chamarro, 2005; Prasad, 1982; Strauch, 2005; Strauch & Meier, 1996; Yamanaka, Morita, & Matsumoto, 1982).

This article therefore examines the issue of multiple testing in using the Hall and Van de Castle (hereafter HVdC) coding system. Using an algorithm based on the Benjamini-Hochberg (Benjamini & Hochberg, 1995) formula for controlling the False Discovery Rate, it examines the results from two large-scale past studies that used the HVdC content indicators. It concludes that multiple tests do not present problems for past HVdC findings, which is consistent with results from several past replication studies. First, however, it will be useful to give an overview of the methods and statistics the system employs to deal with the numerous problems that have bedeviled the quantitative study of dream content since such studies began.

An overview of the Hall-Van de Castle coding system

The HVdC coding system includes 10 general categories and numerous subcategories that cover virtually every element in dream reports, ranging from characters, social interactions, activities, emotions, and physical surroundings to elements from the past, food and eating elements, and descriptive elements (modifiers, temporality, and negativity). A factor analysis carried out on the codings of 100 REM dream reports using several different empirical scales discovered that most content-analysis systems boil down to one or more of five basic dimensions, all of which are encompassed by one or more HVdC categories: (a) degree of vividness and distortion, (b) degree of anxiety and hostility, (c) degree of initiative and striving, (d) level of activity, and (e) amount of sexuality (Hauri, 1975). In addition, the HVdC categories can be combined in ways that replicate more complex scales that have uncertain reliability or validity (Domhoff, 2003, pp. 74-79).

The HVdC coding system rests on the categorical level of measurement, which is also called "nominal" or "binary" data; this approach avoids the serious reliability problems that plague many ordinal-level (rating) scales for dream content (Domhoff, 1996, pp. 31-34; 2003, pp. 57-60; Hall, 1969; Van de Castle, 1969). It has high reliability using the method of perfect agreement, which is a standard method for all types of content-analysis studies in the social sciences (Smith, 2000). This high level of reliability is for the most part attributable to the clarity of the rules for classifying elements, but also to the availability of dream reports coded by Hall and Van de Castle that can be used to train new coders (Schneider & Domhoff, 1995, 1999).

Studies using the HVdC system report results as readily understood percentages and ratios, such as the "Animal Percent" (the percentage of all dream characters that are animals) and the aggressions per character (A/C) ratio, to deal with a critical issue: the need to correct for differences in the length of dream reports. The success of this adjustment is demonstrated in a study that used dream reports ranging from 50 to 500 words (Domhoff, 2003, pp. 79-83). Because of the distortions and mistakes that can be created by (a) skewed distributions and (b) nonrandom samples, both of which are very frequent in dream studies, the HVdC system uses the formula for the significance of differences between proportions to determine p values. This deceptively simple statistic is in fact a type of mean based on a distribution of values that are either zero or one. Thus, "the same kind of inferential issues" are involved with proportions as with means in general (Cohen, 1977, p. 179). Nothing would be gained by determining the mean number of characters or emotions or aggressions per dream, even if the calculation of means did not have problems due to the variable lengths of dream reports.
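The test for the significance of a difference between two independent proportions can be sketched in a few lines. The function below is an illustrative implementation of the standard pooled z test, not the authors' DreamSAT code; the function name and the example counts in the usage note are hypothetical.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z test for the difference between two independent proportions,
    using the pooled proportion to estimate the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed p value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, comparing 60 occurrences out of 100 codings in one sample against 40 out of 100 in another yields z of about 2.83 and a two-tailed p value below .01.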

The use of proportions has added virtues. First, proportions actually provide the same results as a 2 × 2 chi-square analysis with data expressed in percentages (Domhoff, 2003, pp. 63-65; Reynolds, 1984). Proportions also lead seamlessly to the use of an effect size measure called Cohen's h (Cohen, 1988), which is similar in its general logic to Cohen's better-known d statistic for determining effect sizes based on means. It provides an effect size that is equivalent to phi and lambda, the two statistics used to determine effect sizes with chi-square (Ferguson, 1981; Reynolds, 1984). Furthermore, the magnitude of the difference between two proportions is equal to the Pearson r for dichotomous variables, so nothing is gained by using a correlational approach instead of proportions (Rosenthal & Rubin, 1982).
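Cohen's h is straightforward to compute: each proportion is arcsine-transformed before the difference is taken, which stabilizes the variance across the 0-1 range. A minimal sketch (the function name is ours, not from the HVdC software):

```python
import math

def cohens_h(p1, p2):
    """Cohen's h effect size for two proportions: the difference
    between the proportions on the arcsine-transformed scale."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))
```

By Cohen's (1988) conventions, an h of about .2 is a small effect, .5 a medium effect, and .8 a large effect; for example, cohens_h(0.6, 0.4) comes out to roughly .40, a small-to-medium effect.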

To complement the proportions statistic, the findings using HVdC content indicators also can be analyzed using computationally intensive approximate randomization strategies as an alternative for determining p values. This approach is useful because it provides a way to check on results with the proportions statistic; also, approximate randomization is appropriate for longitudinal studies of individual cases because there is no assumption of independence (Franklin, Allison, & Gorman, 1997). The randomization program on dreamresearch.net determines exact p values by pooling the data from both samples and then creating 1,000 or more new pairs of random subsamples. The p value is the proportion of times that the difference between a pair of randomly drawn subsamples is equal to or greater than the difference between the two original samples. When male and female codings for all the HVdC content indicators are compared, approximate randomization provides the same p values as the proportions test, except in the case of the Animal Percent, which has small frequencies for both men (6.0%) and women (4.0%) and a small difference between the two frequencies. This finding demonstrates that the formula for the significance of differences between two proportions provides accurate p values for the great bulk of the frequency distribution, but it also underscores the generally accepted point that the formula is not perfect when there are small differences between two indicators that are at either extreme of the frequency distribution, and perhaps especially with small sample sizes (Domhoff, 2003, pp. 85-87; Domhoff & Schneider, 2008).
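The randomization procedure described above can be sketched as follows. This is an illustrative reimplementation of the stated design (pool the data, reshuffle into subsamples of the original sizes, count extreme differences), not the actual program on dreamresearch.net; codings are represented here as 0/1 indicators for whether a dream report contains a given element.

```python
import random

def randomization_p(sample_a, sample_b, iterations=1000, seed=0):
    """Approximate randomization test for the difference between two
    proportions: pool both samples, repeatedly reshuffle them into two
    groups of the original sizes, and return the share of shuffles in
    which the absolute difference in proportions is at least as large
    as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    hits = 0
    for _ in range(iterations):
        rng.shuffle(pooled)
        new_a, new_b = pooled[:n_a], pooled[n_a:]
        diff = abs(sum(new_a) / len(new_a) - sum(new_b) / len(new_b))
        if diff >= observed:
            hits += 1
    return hits / iterations
```

Because no distributional assumptions are made, this approach remains valid for the skewed, nonrandom samples that are common in dream research.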

The HVdC system includes normative findings for men and women based on samples of 500 dream reports for each gender. They were calculated by coding five reports that were randomly drawn from dream diaries containing from 12 to 18 dream reports; 100 men and 100 women provided these diaries over the course of a semester at Case Western Reserve University and Baldwin Wallace College in Cleveland between 1947 and 1950 (Hall & Van de Castle, 1966, p. 158). Several subsequent studies of subsamples of varying sizes determined that the normative findings are replicated exactly with 250 dream reports, fairly well with 125 dream reports, and perhaps adequately with 100 dream reports; however, the normative findings for only a few indicators can be replicated with 75 dream reports or less (Domhoff, 1996, pp. 64-67; 2003, pp. 92-94, 113-114; Domhoff & Schneider, 2008, pp. 1262-1264). A large number of dream reports is needed in any quantitative study of dream content, including HVdC studies, for two reasons: first, many dream elements (such as friendly and aggressive interactions) appear in less than half of dream reports; and second, effect sizes are generally small to medium in most dream content studies.

The HVdC normative findings were subsequently replicated for men and women at the University of Richmond in 1981, women at the University of California, Berkeley, in 1985, and women at Salem College in the late 1980s (Domhoff, 1996; Dudley & Fungaroli, 1987; Dudley & Swank, 1990; Hall, Domhoff, Blick, & Weesner, 1982; Tonay, 1990/1991). Similar findings emerged from studies in several other countries, and also in the United States with smaller sample sizes; these results are summarized in Domhoff (1996, Chapters 4 and 6). Of course, small discrepancies do appear; for example, in two different samples collected from college students in Germany, the percentage of male and female characters differed from the American-based norms in one study but was closer to those norms in the other (Paul & Schredl, 2012, p. 121, Table 2, and p. 122; Schredl, Ciric, Bishop, Golitz, & Buschtons, 2003, p. 240, Table 2).

As rigorous as HVdC analyses have been since the statistical procedures were finalized in the mid-1990s, the system did not address the increasingly salient issue of multiple-test corrections. This problem is examined and remedied in the remainder of this article.

The issue of multiple testing

Multiple tests of the same pair of samples can greatly increase the probability of finding at least one statistically significant difference by chance. For example, if 10 comparisons are made, there is a 40% probability that at least one statistically significant difference (i.e., one where p < .05) will occur just by chance, which can also be described as a 40% probability of a false positive, generally called a Type I error in statistics. With 20 tests, which is a plausible number when using the HVdC system, the probability of at least one false positive at the .05 level rises to 64%.

There are two general types of adjustments that are used to deal with the fact that the chances of false positives increase greatly when large numbers of comparisons are made. The first type uses the entire list of p values, whether all of them reach the level of statistical significance or not. In statistical terms, this type of correction is attempting to control the Familywise Error Rate, which refers to a set of tests that are related to each other by the fact that they have been used in analyzing the same dataset. The second type of adjustment procedure controls for false positives by focusing on the comparisons that yielded statistically significant differences. In statistical terms, it is concerned with determining the False Discovery Rate.

The Holm-Bonferroni (Holm, 1979) correction is the most frequently used method for controlling the Familywise Error Rate. Because it uses all the p values computed in a given study in determining the denominator, this correction errs on the side of avoiding any false positives (Type I errors). In statistical terms, this leads to a large loss in the power to detect real differences (an increase in Type II errors). This loss of power is criticized by many statisticians because it may lead researchers to discard potentially important new discoveries (e.g., Cohen, 1988, 1990; Ellis, 2010). More generally, some epidemiologists, ecologists, and medical researchers question, and even reject, the idea of controlling for false-positive errors on the grounds that it stifles the further exploration of unexpected findings, particularly in fields that are primarily at a descriptive stage in the theory-building process (e.g., Moran, 2003; Perneger, 1998; Rothman, 1990). Instead, they advocate follow-up and replication studies of potentially valuable findings.

The Benjamini-Hochberg (Benjamini & Hochberg, 1995) method addresses many of the criticisms of controlling for the Familywise Error Rate by instead controlling for the False Discovery Rate among the statistically significant p values. It thereby has greater statistical power than the Holm-Bonferroni (Holm, 1979) correction. As a result, it is more likely to correctly reject the null hypothesis when it is indeed false, and it works well even when some of the tests are correlated, especially when they are positively correlated (Benjamini & Yekutieli, 2001; Genovese & Wasserman, 2002). It is the method of choice in fields such as astronomy, ecology, and molecular biology, which sometimes make hundreds or thousands of statistical comparisons in a single study (García, 2004). Although replication studies are seen as the best way to deal with false positives by many statistical experts within psychology (e.g., Cohen, 1994; Cumming, 2014; Schmidt, 1996), and have been central to studies of dream content using HVdC categories (Domhoff, 1996, 2003), it is nonetheless useful to have a multiple-test correction formula as a starting point in sorting out new findings. In this regard, the Benjamini-Hochberg (Benjamini & Hochberg, 1995) correction makes the most statistical sense because it focuses on significant p values. It also makes intuitive sense for dream research because the field is still primarily in an exploratory and descriptive (taxonomic) stage. Put another way, dream researchers rarely generate focused hypotheses that test a highly developed and mathematically rigorous theory.

Therefore, a program for calculating Benjamini-Hochberg (Benjamini & Hochberg, 1995) corrections was created by the authors and utilized in the analyses in the next two sections. The results of the program include an adjusted p value for each indicator. Although statisticians do not consider the determination of each adjusted p value to be necessary, the inclusion of "before-and-after" p values in a table makes it possible for readers who doubt the usefulness of any correction formula to decide for themselves whether or not they want to take the multiple-test adjustment into account. The tables also include p values for content indicators that were not significant before the adjustment was made. These values are useful because researchers who are familiar with the HVdC norms can see whether the patterns of significance and nonsignificance found in future studies are similar to those found in the past.
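The step-up adjustment itself is simple to implement. The sketch below computes Benjamini-Hochberg adjusted p values in the usual way (each sorted p value is scaled by m/rank, and a running minimum is enforced from the largest p value downward); it is an illustration of the published algorithm, not the authors' own program.

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg step-up adjusted p values.  Each raw p value
    p_(i) (sorted ascending, m tests in all) becomes
    min over j >= i of (m * p_(j) / j), capped at 1, and the adjusted
    values are returned in the original order of the input."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p value down, keeping the running minimum.
    for rank in range(m - 1, -1, -1):
        idx = order[rank]
        value = p_values[idx] * m / (rank + 1)
        running_min = min(running_min, value)
        adjusted[idx] = min(running_min, 1.0)
    return adjusted
```

For instance, raw p values of .005, .02, and .2 become adjusted values of .015, .03, and .2; an adjusted value below .05 means the difference survives the correction at the .05 level.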

Correcting for multiple comparisons between two groups

The impact of the multiple-comparison correction on dream content studies using a large number of HVdC indicators is first examined by means of a reanalysis of the original Hall and Van de Castle (1966) normative findings for men and women. The DreamSAT spreadsheet (Schneider & Domhoff, 1995) was used to calculate effect size (Cohen's h) and p values. DreamSAT includes 28 Hall/Van de Castle content indicators, but six were omitted in the current analysis for the following reasons: three indicators (A/C, F/C, and S/C index) do not have p values associated with them; two (Self-Negativity Percent and At Least One Striving) are "metaindicators" that draw from more than one coding category; and one (Dead & Imaginary Percent) is seldom used because of the rarity of the coding elements involved.

As can be seen in detail in Table 1, the results of previous analyses are almost entirely preserved. More specifically, all 12 of the statistically significant differences at the .05 level remain. Of 11 differences that were significant at the .01 level, only one was "downgraded" to the p < .05 level. This change involved the rarely used Torso-Anatomy Percent, which is determined by dividing the number of torso body parts (torso, anatomy, and genitals) by the total number of body parts that are coded. Although seldom used, this indicator can be useful in cases in which inspection of the dream reports or the situation of the dreamers suggests there may be atypical concern with body parts or body imagery; for example, it detected a concern with the body in mastectomy patients in a pre- and post-surgery research design (Giordano et al., 2012). Table 1 displays the specific results.

Table 1. Hall/Van de Castle Male Norms and Female Norms Compared, with 22 p Values Before and After Adjustment Using the Benjamini-Hochberg Step-Up Algorithm

Indicator                            h (vs. males)    p        adjusted p

Male/Female Percent                  -.41             .000**   .000**
Familiarity Percent                  +.27             .000**   .000**
Friends Percent                      +.12             .003**   .006**
Family Percent                       +.22             .000**   .000**
Animal Percent                       -.08             .051     .087

Social Interaction Percents
Aggression/Friendliness Percent      -.16             .010*    .018*
Befriender Percent                   -.03             .778     .815
Aggressor Percent                    -.12             .201     .316
Physical Aggression Percent          -.34             .000**   .000**

Indoor Setting Percent               +.25             .000**   .000**
Familiar Setting Percent             +.34             .000**   .000**

Self-Concept Percents
Bodily Misfortunes Percent           +.09             .338     .396
Negative Emotions Percent            -.01             .891     .891
Dreamer-Involved Success Percent     -.14             .309     .396
Torso/Anatomy Percent                -.24             .005**   .010*

Dreams with at Least One:
Good Fortune                         -.04             .555     .610

*p < .05    **p < .01

Correcting multiple comparisons between an individual and a group

In addition to group comparisons, the HVdC coding system can be used to determine whether and how the results of an analysis of a lengthy individual dream journal (called a "dream series" by quantitative dream researchers) differ from a normative sample. Individual dream journals kept over long time periods by individuals for their own reasons, without intent of sharing them with anyone, are a form of nonreactive or unobtrusive measure that can yield important results, especially when several dream journals lead to similar results despite the different purposes for which they were kept (Allport, 1942; Baldwin, 1942; Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). In that sense, they have parallels with random sampling in that the use of multiple dream series may tend to cancel out irrelevant variables. Many useful findings with individual dream journals have been replicated several times, including the consistency in what people dream about over months, years, and decades, along with the continuity between the conceptions expressed in dreams and waking thought in relation to important people and avocational interests in the dreamer's life (Bulkeley, 2014; Domhoff, 1996, Chapters 7 and 8; Domhoff, 2003, Chapter 5; Zadra & Domhoff, 2011).

The use of individual dream journals does raise the statistical issue of autocorrelation (the possible lack of independence among a series of responses from a single individual), which can lead to spurious results because many statistical tests are based on the assumption that each response is independent of the previous response. However, the length of time that usually elapses between dream reports, the findings from empirical studies of dreams collected during a single night in a sleep laboratory, and statistical studies of codings from several individual dream series using Wald and Wolfowitz's (1940) nonparametric runs test for randomness demonstrate that autocorrelation has not been a problem in past studies of dream series using several HvDC categories whose elements appear frequently in dream reports (Domhoff & Schneider, 2015).
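For readers who want to check for autocorrelation themselves, the Wald-Wolfowitz runs test is easy to apply to a binary coding sequence (e.g., 1 if a dream report contains at least one aggression, 0 otherwise). The sketch below uses the standard large-sample normal approximation; it is an illustration of the test, not the code used in Domhoff and Schneider (2015).

```python
import math

def runs_test(sequence):
    """Wald-Wolfowitz runs test for randomness on a binary sequence.
    Returns the z score of the observed number of runs against its
    expectation under independence (large-sample approximation).
    Large positive z means too many runs (alternation); large
    negative z means too few runs (clustering)."""
    n1 = sum(1 for x in sequence if x)
    n2 = len(sequence) - n1
    n = n1 + n2
    # A run ends wherever adjacent values differ.
    runs = 1 + sum(1 for a, b in zip(sequence, sequence[1:]) if bool(a) != bool(b))
    expected = 2 * n1 * n2 / n + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n)) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)
```

A z score near zero is consistent with independence, whereas a strongly negative z (clustered runs) would suggest that reports close together in time resemble each other more than chance allows.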

The analysis in this section uses the results from a random sample of 250 dream reports drawn from a dream series consisting of 3,116 dreams that were written down over a 27-year period. The selected dream reports were coded for characters, social interactions, misfortune/good fortune, success/failure, and emotions using the HVdC system. Content analysis revealed consistency over time as well as continuity with several waking concerns, which was determined by interviews with the dreamer and four of her friends to ask about inferences based on a blind analysis of the dream reports (Domhoff, 2003, Chapter 5). In all, 19 content indicators were calculated and compared with the HVdC female norms. Twelve of the 19 indicators were statistically significant at the .05 level; five of the 12 were also significant at the .01 level. After applying the Benjamini-Hochberg correction, 10 of the 12 previously significant p values remained below .05, with four of five remaining below .01. The two measures that crossed over from significant (p < .05) to nonsignificant (p ≥ .05) had small effect sizes (h = .11 and h = .16). The complete results are shown in Table 2.

Table 2. Series of 250 Dream Reports Compared With the Hall/Van de Castle Female Norms, with 19 p Values Before and After Adjustment Using the Benjamini-Hochberg Step-Up Algorithm

Indicator                            h (vs. female norms)    p        adjusted p

Male/Female Percent                  +.11                    .033*    .057(a)
Familiarity Percent                  -.45                    .000**   .000**
Friends Percent                      -.49                    .000**   .000**
Family Percent                       -.05                    .238     .302
Animal Percent                       +.11                    .012*    .025*

Social Interaction Percents
Aggression/Friendliness Percent      -.16                    .010*    .018*
Befriender Percent                   -.03                    .778     .815
Aggressor Percent                    -.12                    .201     .316
Physical Aggression Percent          -.34                    .000**   .000**

Self-Concept Percents
Bodily Misfortunes Percent           -.13                    .210     .285
Negative Emotions Percent            +.16                    .042*    .066(a)
Dreamer-Involved Success Percent     +.34                    .127     .186

Dreams with at Least One:
Good Fortune                         -.19                    .012*    .025*

*p < .05    **p < .01

a Was significant (p < .05) in the original analysis, but became nonsignificant after the multiple comparison correction was applied.

b p < .01 in the original analysis, but only p < .05 after the multiple comparison correction was applied.

Discussion and conclusion

This article shows that past findings with the HVdC norms are not compromised by the use of multiple tests. Thus, the pattern of gender similarities and differences (e.g., a higher physical aggression percent for men, and a higher percentage of family members and familiar characters in the women's reports) can be treated as substantive findings, especially in the light of replication studies and the similarity of these differences to differences found in waking thought and behavior between men and women (Domhoff, 2005, 2009). Moreover, one-tailed tests can be used in determining statistical significance in future studies that make gender comparisons because the direction of the differences can be predicted on the basis of past studies.

For future researchers concerned with preserving statistically significant differences found in studies using multiple comparisons, the most obvious general strategy is to make as few comparisons as possible. However, given the exploratory nature of many studies of dream reports, this is often impractical or undesirable. More important, large sample sizes are needed because they reduce p values. Of course, large effect sizes lower p values as well, but they are a function of the actual size of the differences among samples and variables, and they are not under the control of investigators. There is also a seemingly trivial issue that can affect the results of a correction: rounding error. It is important to record the original p values with as many decimal places as possible, because a p value of .005 that is rounded to .01 can change the outcome of the adjustment. Four (or more) decimal places are therefore recommended.
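A hypothetical illustration of the rounding issue: with 10 tests and q = .05, the Benjamini-Hochberg threshold for the smallest p value is .05 × 1/10 = .005, so a p value recorded as .005 survives the correction while the same value rounded up to .01 does not. The numbers below are invented for the demonstration.

```python
def bh_significant(p_values, q=0.05):
    """Indices rejected by the Benjamini-Hochberg step-up rule at
    level q: find the largest rank k with p_(k) <= (k/m) * q and
    reject everything at or below that rank."""
    m = len(p_values)
    ranked = sorted(enumerate(p_values), key=lambda pair: pair[1])
    cutoff = 0
    for rank, (_, p) in enumerate(ranked, start=1):
        if p <= rank / m * q:
            cutoff = rank
    return sorted(index for index, _ in ranked[:cutoff])

precise = [0.005, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.85, 0.90, 0.95]
rounded = [0.01] + precise[1:]  # identical, except the first p value is rounded up

# The precisely recorded p value is rejected; the rounded version is not.
```

The same set of comparisons thus yields one significant finding or none, depending solely on how the smallest p value was recorded.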

Although the replications of past findings with the HVdC coding system suggest that new findings with large sample sizes are very likely to be real differences, the regular use of the correction formula might provide even greater confidence in the solidity of the results. The program created by the authors to check for possibly spurious rejections of the null hypothesis attributable to multiple testing is therefore available on request as an adjunct to the DreamSAT spreadsheet on dreamresearch.net (Schneider & Domhoff, 1995). With smaller sample sizes, however, the use of the correction formula is problematic because it might eliminate valid statistically significant findings already established by several replication studies. In other words, if the goal of science is to generate new and better ideas based on new findings, then correction formulas need to be used with caution, and all potentially interesting new results should be replicated with larger samples.


Our thanks to Richard L. Zweigenhaft for his editorial suggestions on the original draft of the manuscript.


References

Allport, G. (1942). The use of personal documents in psychological science. New York, NY: Social Science Research Council.

Baldwin, A. (1942). Personal structure analysis: A statistical method for investigating the single personality. Journal of Abnormal and Social Psychology, 37, 163-183. http://dx.doi.org/10.1037/h0061697

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57, 289-300.

Benjamini, Y., & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29, 1165-1188.

Bulkeley, K. (2014). Digital dream analysis: A revised method. Consciousness and Cognition, 29, 159-170. http://dx.doi.org/10.1016/j.concog.2014.08.015

Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York, NY: Academic Press.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Mahwah, NJ: Erlbaum.

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312. http://dx.doi.org/10.1037/0003-066X.45.12.1304

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003. http://dx.doi.org/10.1037/0003-066X.49.12.997

Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25, 7-29. http://dx.doi.org/10.1177/0956797613504966

Domhoff, G. W. (1996). Finding meaning in dreams: A quantitative approach. New York, NY: Plenum Press. http://dx.doi.org/10.1007/978-1-4899-0298-6

Domhoff, G. W. (2003). The scientific study of dreams: Neural networks, cognitive development, and content analysis. Washington, DC: American Psychological Association. http://dx.doi.org/10.1037/10463-000

Domhoff, G. W. (2005). The dreams of men and women: Patterns of gender similarity and difference. The quantitative study of dream content. Retrieved from http://dreamresearch.net/Library/domhoff_2005c.html

Domhoff, G. W. (2009). Gender differences between men and women. In S. Krippner & D. Joffe-Ellis (Eds.), Perchance to dream: The frontiers of dream psychology (pp. 153-163). New York, NY: Nova Science Publishers.

Domhoff, G. W., & Schneider, A. (2008). Similarities and differences in dream content at the cross-cultural, gender, and individual levels. Consciousness and Cognition, 17, 1257-1265. http://dx.doi.org/10.1016/j.concog.2008.08.005

Domhoff, G. W., & Schneider, A. (2015). Assessing autocorrelation in studies using the Hall and Van de Castle coding system to study individual dream series. Dreaming, 25, 70-79. http://dx.doi.org/10.1037/a0038791

Dudley, L., & Fungaroli, J. (1987). The dreams of students in a women's college: Are they different? ASD Newsletter, 4, 6-7.

Dudley, L., & Swank, M. (1990). A comparison of the dreams of college women in 1950 and 1990. ASD Newsletter, 7, 3.

Ellis, P. D. (2010). The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge, England: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511761676

Ferguson, G. A. (1981). Statistical analysis in psychology and education. New York, NY: McGraw-Hill.

Franklin, R. D., Allison, D. B., & Gorman, B. S. (1997). Design and analysis of single-case research. Mahwah, NJ: Erlbaum.

García, L. (2004). Escaping the Bonferroni iron claw in ecological studies. Oikos, 105, 657-663. http://dx.doi.org/10.1111/j.0030-1299.2004.13046.x

Genovese, C., & Wasserman, L. (2002). Operating characteristics and extensions of the false discovery rate procedure. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64, 499-517. http://dx.doi.org/10.1111/1467-9868.00347

Giordano, A., Francese, V., Peila, E., Tribolo, A., Airoldi, M., Torta, R., ... Cicolin, A. (2012). Dream content changes in women after mastectomy: An initial study of body imagery after body-disfiguring surgery. Dreaming, 22, 115-123. http://dx.doi.org/10.1037/a0026692

Hall, C. (1969). Content analysis of dreams: Categories, units, and norms. In G. Gerbner (Ed.), The analysis of communication content (pp. 147-158). New York, NY: Wiley.

Hall, C. S., Domhoff, G. W., Blick, K. A., & Weesner, K. E. (1982). The dreams of college men and women in 1950 and 1980: A comparison of dream contents and sex differences. Sleep, 5, 188-194.

Hall, C., & Van de Castle, R. (1966). The content analysis of dreams. New York, NY: Appleton-Century-Crofts.

Hauri, P. (1975). Categorization of sleep mental activity for psychophysiological studies. In G. Lairy & P. Salzarulo (Eds.), The experimental study of sleep: Methodological problems (pp. 271-281). New York, NY: Elsevier Scientific Publishing.

Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6, 65-70.

Karagianni, M., Papadopoulou, A., Kallini, A., Dadatsi, A., Abatzoglou, G., & Zilikis, N. (2013). Dream content of Greek children and adolescents. Dreaming, 23, 91-96. http://dx.doi.org/10.1037/a0032238

Mazandarani, A. A., Aguilar-Vafaie, M. E., & Domhoff, G. W. (2013). Content analysis of Iranian college students' dreams: Comparison with American data. Dreaming, 23, 163-174. http://dx.doi.org/10.1037/a0032352

Moran, M. D. (2003). Arguments for rejecting the sequential Bonferroni in ecological studies. Oikos, 100, 403-405. http://dx.doi.org/10.1034/j.1600-0706.2003.12010.x

Németh, G., & Bányai, E. (2011). The relationship between dream contents and quality of life. Mentálhigiéné és Pszichoszomatika, 12, 299-326.

Oberst, U., Charles, C., & Chamarro, A. (2005). Influence of gender and age in aggressive dream content in Spanish children and adolescents. Dreaming, 15, 170-177. http://dx.doi.org/10.1037/1053-0797.15.3.170

Paul, F., & Schredl, M. (2012). Male-female ratio in waking-life contacts and dream characters. International Journal of Dream Research, 5, 119-124.

Perneger, T. V. (1998). What's wrong with Bonferroni adjustments. British Medical Journal, 316, 1236-1238. http://dx.doi.org/10.1136/bmj.316.7139.1236

Prasad, B. (1982). Content analysis of dreams of Indian and American college students: A cultural comparison. Journal of Indian Psychology, 4, 54-64.

Reynolds, H. (1984). Analysis of nominal data. Newbury Park, CA: Sage.

Rosenthal, R., & Rubin, D. B. (1982). A simple general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 166-169. http://dx.doi.org/10.1037/0022-0663.74.2.166

Rothman, K. J. (1990). No adjustments are needed for multiple comparisons. Epidemiology, 1, 43-46. http://dx.doi.org/10.1097/00001648-199001000-00010

Schmidt, F. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115-129. http://dx.doi.org/10.1037/1082-989X.1.2.115

Schneider, A., & Domhoff, G. W. (1995). The quantitative study of dreams. Retrieved from http://www.dreamresearch.net/

Schneider, A., & Domhoff, G. W. (1999). DreamBank. Retrieved from http://www.dreambank.net/

Schredl, M., Ciric, P., Bishop, A., Golitz, E., & Buschtons, D. (2003). Content analysis of German students' dreams: Comparison to American findings. Dreaming, 13, 237-243. http://dx.doi.org/10.1023/B:DREM.0000003145.26849.37

Smith, C. (2000). Content analysis and narrative analysis. In H. Reis & C. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 313-335). New York, NY: Cambridge University Press.

Strauch, I. (2005). REM dreaming in the transition from late childhood to adolescence: A longitudinal study. Dreaming, 15, 155-169. http://dx.doi.org/10.1037/1053-0797.15.3.155

Strauch, I., & Meier, B. (1996). In search of dreams: Results of experimental dream research. Albany, NY: SUNY Press.

Tonay, V. (1990/1991). California women and their dreams: A historical and sub-cultural comparison of dream content. Imagination, Cognition and Personality, 10, 85-97. http://dx.doi.org/10.2190/M29J-QQTB-NMYD-QP1F

Van de Castle, R. (1969). Problems in applying methodology of content analysis. In M. Kramer (Ed.), Dream psychology and the new biology of dreaming (pp. 185-197). Springfield, IL: Charles C. Thomas.

Wald, A., & Wolfowitz, J. (1940). On a test whether two samples are from the same population. Annals of Mathematical Statistics, 11, 147-162. http://dx.doi.org/10.1214/aoms/1177731909

Webb, E., Campbell, D., Schwartz, R., Sechrest, L., & Grove, J. (1981). Nonreactive measures in the social sciences (2nd ed.). Chicago, IL: Rand McNally.

Yamanaka, T., Morita, Y., & Matsumoto, J. (1982). Analysis of the dream contents in Japanese college students by REMP-awakening technique. Folia Psychiatrica et Neurologica Japonica, 36, 33-52.

Zadra, A., & Domhoff, G. W. (2011). The content of dreams: Methods and findings. In M. Kryger, T. Roth, & W. Dement (Eds.), Principles and practices of sleep medicine (5th ed., pp. 585-594). Philadelphia, PA: Elsevier Saunders. http://dx.doi.org/10.1016/B978-1-4160-6645-3.00050-5
