Preventing and Treating Intertrigo in the Large Skin Folds of Adults: A Literature Overview

Intertrigo is an inflammatory dermatosis of the skin folds of the body, for which a large variety of topical medications may be recommended. A systematic literature review was performed to find scientific evidence for preventing and treating intertrigo within the nursing domain. Seven electronic databases were searched with a simple broad-scope search strategy. The aim was to identify all publications that concerned intertrigo itself and other conditions related to intertriginous regions. This search produced 451 references. A final set of 24 studies was retained and analyzed for content and methodologic quality. Most studies concerned treatments with antifungals or disinfectants in heterogeneous research samples, with only small subsamples of people with intertrigo. Six studies were randomized controlled trials. In general, the methodologic quality of the studies was poor. The analyzed studies provided no scientific evidence for any type of nursing prevention or treatment strategy. There is a great need for well-designed clinical studies on intertrigo.

Intertrigo is an inflammatory dermatosis involving body folds (Arndt & Bowers, 2002). Intertrigo is found principally in the inframammary, axillary, and inguinal folds, but it may also affect other areas such as the folds of the eyelids, neck creases, antecubital fossae, and umbilical, perineal, and interdigital areas (Arndt & Bowers, 2002). Although the exact pathogenesis of intertrigo is unknown (Burkhart, Mulholland, & Burnham, 1981), it is generally believed (McMahon, 1994) to be primarily caused by skin-on-skin friction: opposing skin surfaces rub against each other, causing erosions that become inflamed (Itin, 1989; Selden, 2001). The condition is considered to be aggravated by heat, moisture, maceration, and lack of air circulation (Itin, 1989; Selden, 2001). As a result of skin constantly rubbing on skin, heat and moisture lead to maceration and inflammation, and often to secondary bacterial or fungal infection (Arndt & Bowers, 2002; Burkhart et al., 1981). Intertrigo is characterized initially by mild erythema; red plaques oppose each other on each side of the skin fold, almost in a mirror image. This may then progress to more intense inflammation with erosions, oozing, exudation, and crusting (Arndt & Bowers, 2002). Patients may complain of itching, burning, and pain in the affected areas (Itin, 1989).

The exact prevalence of intertrigo is unknown, because many dermatologic diseases can affect the skin folds, which makes the diagnosis of intertrigo difficult (Guitart & Woodley, 1994; Itin, 1989); the picture is further blurred by unclear definitions and overlap in medical taxonomies among intertrigo, dermatomycoses, and bacterial skin infections. However, McMahon (1991) found submammary intertrigo (defined as a submammary skin problem of any description) to be present in 5.8% of all female inpatients. It is generally believed that intertrigo is more common in patients with poor hygiene, obese patients, and patients with diabetes (Arndt & Bowers, 2002; Selden, 2001).

Nurses are considered to be the most appropriate health care professionals to prevent, detect, and treat intertrigo (McMahon & Buckeldee, 1992). The prevention and treatment of intertrigo is one of the first subjects addressed in many nursing training programs (van Beelen, 2001).

The medical and nursing literature contains several articles with advice for preventing and treating intertrigo. It is remarkable, however, that the majority of the recommendations lack an empirical basis. A MeSH search (August 2002) with “intertrigo” in PubMed, limited to “randomized controlled trial,” produced only two references (Cullen, 1977; Hedley, Tooley, & Williams, 1990). This makes it difficult to choose the best evidence-based approach for preventing and treating intertrigo.

In general there is agreement in the literature that the best prevention for intertrigo is to minimize causal factors, such as skin-on-skin friction and heat and moisture in and around skin folds, and to keep high-risk areas clean and dry. Most authors also agree that treatment of intertrigo should focus primarily on reducing causal factors and secondarily on restoring a normal skin environment and treating possible bacterial or fungal infections in intertriginous areas. Surgical treatment of intertrigo, such as reduction mammaplasty (Brown & Young, 1993; Chadbourne et al., 2001), may also be indicated.

Although there is agreement on the general principles, a wide variety of methods and products is recommended to achieve treatment goals, and in most cases no research evidence is available to support the recommendations. A non-exhaustive but illustrative list of these advised applications includes: English cotton (van Beelen, 2001), wet tea bags (Selden, 2001), talcum powder with aspirin (Marghescu, 1970), Castellani’s paint (Bohm, 1996), Domeboro soaks (Aly, Forney, & Bayles, 2001), hydrocolloid dressings (McMahon, 1994), Burow’s solution (Fragola & Watson, 1981), diluted vinegar (Davis, 1980), hamamelis, a lotion prepared from the “witch hazel” tree (Schindera, 1999), bedrest (Moulin, Gaboriaux, & Joseph, 1983) (the authors do not explain why they advise bedrest for intertrigo, nor give theoretical or scientific grounds for why bedrest would work; as stated earlier, this is typical of most of the cited interventions), and disrobing (Arndt & Bowers, 2002) (the authors advise having the patient disrobe for at least 30 minutes twice a day and exposing the involved folds to a hair dryer, fan, or electric bulb to promote drying). Furthermore, they advise wearing light, nonconstricting, and absorbent clothing, while avoiding wool, nylon, and synthetic fibers. Biotextiles (cotton or polyester gauze in which antiseptic molecules, such as zeolite or triclosan, are fixed) are also recommended (Benigni, Cazaubon, & Djian, 2000).

This variety is similar to that found by McMahon and Buckeldee (1992), who interviewed 115 ward nurses on their knowledge and current practice regarding intertrigo. The nurses suggested 12 different creams and a variety of other substances, such as rose oil and yoghurt. The authors also found that nurses hold contradictory opinions with regard to causal and determining factors as well as diagnostic and treatment procedures: for instance, 16.5% of the respondents favored the use of talcum powder, while 15.7% stated it should be avoided; with regard to diagnostic investigations, 39% of the nurses would not initiate any test at all, while others would take microbiologic swabs or perform some kind of fungal investigation (McMahon & Buckeldee, 1992).

There is also disagreement about whether particular products are beneficial or should be avoided; for example, applying absorbing powders is encouraged by some (Guitart & Woodley, 1994; Kugelman, 1969; Marghescu, 1970; Selden, 2001) and discouraged by others (van Beelen, 2001). Sometimes an antiseptic such as clioquinol is advocated (Arndt & Bowers, 2002; Wanic, 1967), but others (Gloor, 1988) advise against it. Different views are also reported in the literature with regard to which topical antibacterial and antifungal products should be used, and whether or not they should be combined with steroids.

In order to create evidence-based nursing guidelines and/or to support existing nursing guidelines with evidence from research, a systematic review was made of the literature on the efficacy of prophylaxis and treatment of intertrigo from the nursing perspective. The aim of this review was to systematically collect and assess the evidence in relation to the prevention and treatment of intertrigo in an adult population within the nursing domain.

Method

Study configuration. The study was limited to adult patients with intertrigo of the large skin folds (submammary, axillary, inguinal, perianal, abdominal), because this is the condition that is mainly encountered by nurses and health care workers, and because a systematic review of treatment for intertrigo between the toes has already been performed (Crawford et al., 2001). The working definition of intertrigo for this review is infected or noninfected inflammation of the large skin folds, as described by the authors or referred to as “intertrigo.”

Intervention. Only manuscripts concerning methods and products that are within the nursing domain were reviewed. The nursing domain is defined here as interventions that can be carried out without a doctor’s order (for example, topical applications and drugs that are available without medical prescription). The Dutch official list of prescription drugs (www.cbg-meb.nl) was used to decide whether a product fell within the nursing domain. If a product was not found on this list (for example, because it is not available in the Netherlands), it could not be determined whether it was a medical or nonmedical product; in that case the product was given the benefit of the doubt and the article in which it was mentioned was included, because the product may be freely available in other countries (for instance, Vioform is not on the Dutch list but is freely available in Bulgaria). It did not matter who applied a product to the intertriginous areas (doctor, patient, nurse, or another caregiver).

Since it was clear from a preliminary search that there were not many randomized controlled trials on the subject, it was decided to include all manuscripts that reported results of empirical research, no matter what design they used, or whether or not there was any kind of control group.

There was no predefined specific outcome. Outcomes such as prevalence of intertrigo, skin condition, skin infection, symptom scores, microbiological tests, use of antibiotics and topical agents, and costs of treatments could be included.

Inclusion criteria were (a) empirical research based on one of the following study designs (randomized controlled clinical trial, controlled clinical trial, or other designs such as patient series and pre-post studies), (b) intertrigo in adult patients, (c) intertrigo in large skin folds, (d) preventive and therapeutic interventions, of which at least one (main or control intervention) is within the nursing domain, and (e) outcomes such as intertrigo prevalence, skin condition, skin infection, use of antibiotics and topical agents, and costs of treatment.

Exclusion criteria included (a) articles not reporting results of empirical research, (b) articles concerning research in animals, (c) intertrigo of the extremities (for example, the interdigital areas of hands or feet), (d) intertrigo in children (for example, diaper rash), (e) surgical treatments, and (f) medically prescribed pharmacological agents.

Search Strategy. Relevant studies were identified by searching electronic databases (PUBMED, CINAHL, EMBASE, Cochrane Controlled Trials Register, Science Citation Index, PICARTA, and INVERT).

Databases were searched with a simple broad-scope strategy ((intertrigo [MeSH] OR intertrig* [text word]) NOT animal). This strategy was used in all databases with only minor adaptations for each specific database (for example, in the Dutch databases PICARTA and INVERT, the search strategy was supplemented with the Dutch term “smetten”). This search strategy was expected to identify all articles concerning intertrigo itself and all articles concerning diseases and conditions of the intertriginous regions. Before the final search was performed, other search strategies were tried (for example, (“dermatomycoses” [MeSH Terms] OR dermatomycoses [Text Word]) AND (“skin” [MeSH Terms] OR skin [Text Word]) AND fold [All Fields]), but these either gave fewer references or added no further references to the final search strategy.
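For readers who wish to rerun a comparable broad-scope query against PubMed today, the sketch below shows one possible programmatic route via NCBI’s E-utilities using Biopython’s Entrez module. This is an illustrative reconstruction, not the procedure used for the original 2002 searches; the e-mail address and the retmax value are placeholders, and “animals[MeSH Terms]” is one possible rendering of the “NOT animal” limit.

```python
# Illustrative sketch only: the original searches were run through each
# database's own interface. This reconstructs the broad-scope PubMed query
# with Biopython's Entrez wrapper around the NCBI E-utilities.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI asks for a contact address

# Broad-scope strategy: MeSH term OR truncated text word, excluding animal studies.
query = "(intertrigo[MeSH Terms] OR intertrig*[Text Word]) NOT animals[MeSH Terms]"

handle = Entrez.esearch(db="pubmed", term=query, retmax=1000)
result = Entrez.read(handle)
handle.close()

print(result["Count"], "references found")
print(result["IdList"][:10])  # first ten PubMed IDs returned
```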

Additional searching was done by checking reference lists of obtained articles and by forward searching in the Science Citation Index for articles that cited relevant articles that had already been obtained. Since it was clear from preliminary searches that there was no great volume of literature available, and to ensure finding as much evidence as possible, no limits were applied with regard to language, date, or type of publication. The time frame for the search covered the complete databases from their start until June 2002.

Figure 1.

Criteria for Methodologic Quality of Randomized Controlled Trials

Inclusion procedure. An initial check on inclusion and exclusion criteria was made by two reviewers independently on the basis of the title and, if available, the abstract of the manuscript. In case of disagreement, the reviewers discussed their findings until consensus was achieved.

If both reviewers decided to include a reference, or they both had doubts, or they could not achieve consensus about a disagreement, the full text of the reference was ordered via the international library loan systems. Articles that were not obtained within 4 months after ordering were considered to be unobtainable and excluded from further processing.

All initially included articles were then checked again against the inclusion and exclusion criteria, this time on the basis of the full text, by the same two reviewers independently. Articles that met all the criteria, according to both reviewers, were selected for further content and methodologic analysis. In the case of disagreement, both reviewers tried to reach consensus; if this was not reached, a third reviewer made the final decision to include or exclude the study.

Data extraction. By means of a self-made data extraction sheet, the following details were extracted from the included articles:

* Reference characteristics (title, journal, author, publication date, country of origin).

* Study characteristics (number of patients, study design, intervention and co-interventions).

* Patient characteristics (gender, age, main medical diagnoses, onset and duration of intertrigo).

* Outcome measures (type, method, time and frequency of assessment).

Figure 2.

Criteria for Methodologic Quality of ‘Other Designs’

Figure 3.

Search Results

Methodologic quality of the final set of manuscripts was also assessed. The variety of study designs included in this systematic review necessitated the use of different quality assessment tools. Methodologic quality of the randomized controlled trials was rated according to a list developed by van Tulder, Assendelft, Koes, and Bouter (1997). This list has been recommended and adapted by the Cochrane Back Group as the instrument to assess methodologic quality (Bombardier et al., 2002).

The list consists of 11 criteria for internal validity, six descriptive criteria, and two statistical criteria (see Figure 1). All criteria were scored as yes, no, or unclear. Equal weight was applied to all items. Scores could range from 0 to 19, counting each yes answer as 1 and the others as 0; thus, the higher the score, the higher the methodologic quality of the manuscript. Studies were considered to be of sufficient quality if at least six criteria for internal validity, three descriptive criteria, and one statistical criterion were scored positively.

The methodologic quality of studies based on designs other than randomized controlled trials was rated with an adapted version of the van Tulder list, as developed and used by Steultjens et al. (2002). This list consists of seven criteria for internal validity, four descriptive criteria, and two statistical criteria (see Figure 2). All criteria were scored as yes, no, or unclear. Equal weight was applied to all items. Scores could range from 0 to 13, counting each yes answer as 1 and the others as 0; again, the higher the score, the higher the methodologic quality of the manuscript. Studies were considered to be of sufficient quality if at least four criteria for internal validity, two descriptive criteria, and one statistical criterion were scored positively.
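As a minimal sketch of how such a score and cut-off could be applied in practice (assuming one yes/no/unclear rating per criterion, grouped into the three criterion types described above), the hypothetical helper below counts the positive answers and checks the sufficiency thresholds for both instruments. The criterion groups and example ratings are invented for illustration.

```python
# Hypothetical scoring helper: counts 'yes' ratings per criterion group and
# applies the sufficiency thresholds described in the text
# (RCT list: >=6 internal validity, >=3 descriptive, >=1 statistical;
#  other designs: >=4, >=2, >=1).
def quality_score(ratings, thresholds):
    """ratings: dict mapping group name -> list of 'yes'/'no'/'unclear' answers."""
    positives = {group: sum(answer == "yes" for answer in answers)
                 for group, answers in ratings.items()}
    total = sum(positives.values())
    sufficient = all(positives[group] >= minimum
                     for group, minimum in thresholds.items())
    return total, sufficient

# Thresholds from the van Tulder list (RCTs) and the Steultjens adaptation.
RCT_THRESHOLDS = {"internal_validity": 6, "descriptive": 3, "statistical": 1}
OTHER_THRESHOLDS = {"internal_validity": 4, "descriptive": 2, "statistical": 1}

# Invented example: an RCT rated on 11 + 6 + 2 = 19 criteria.
example = {
    "internal_validity": ["yes"] * 7 + ["no"] * 2 + ["unclear"] * 2,
    "descriptive": ["yes"] * 4 + ["no"] * 2,
    "statistical": ["yes", "unclear"],
}
score, sufficient = quality_score(example, RCT_THRESHOLDS)
print(score, "of 19 criteria positive; sufficient quality:", sufficient)
```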

The cut-off scores for final quality assessment were derived from Steultjens et al. (2002). The methodologic quality of the included trials was independently assessed by two reviewers. Disagreements were resolved by discussion; if no consensus was achieved, a third reviewer made the final decision.

Results

Search results. Figure 3 presents a flow diagram of the results of the search strategy. All searches were carried out in June 2002. The electronic database searches produced a total of 451 references across all databases; 303 references were found in PUBMED, 258 in EMBASE, 106 in SCI, 34 in PICARTA, 15 in Cochrane, 3 in CINAHL, and 1 in INVERT. Although there was a considerable overlap between databases, all but one database added exclusive manuscripts (159, 116, 6, 5, 2, and 2, respectively, exclusive in PUBMED, EMBASE, PICARTA, SCI, Cochrane, and CINAHL).
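The notion of “exclusive” references (hits retrieved by only one database) can be illustrated with simple set arithmetic. The record identifiers below are invented placeholders, not the actual search output.

```python
# Invented example: which references are exclusive to a single database?
# Each database is represented as a set of placeholder record identifiers.
hits = {
    "PUBMED": {"a1", "a2", "a3", "a4"},
    "EMBASE": {"a2", "a3", "b1"},
    "CINAHL": {"a3"},
}

# Unique references across all databases (overlap counted once).
all_unique = set().union(*hits.values())
print("unique references:", len(all_unique))

# A reference is exclusive to a database if no other database retrieved it.
for database, ids in hits.items():
    others = set().union(*(v for k, v in hits.items() if k != database))
    exclusive = ids - others
    print(database, "exclusive references:", len(exclusive))
```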

The title was accompanied by an abstract in 60% of the cases. Of the 451 references, 102 (22.6%) were initially included for further study of the full text; 5 references could not be obtained within 4 months, and 17 articles, which fulfilled all the criteria for inclusion, were processed for further methodologic analysis. Additional searches resulted in ordering 12 full-text articles, of which 7 met the inclusion criteria. The final set consisted of 24 studies (see Table 1). Details about the excluded studies can be obtained from the authors.

General characteristics of the final set. There is very little recent research; 5 manuscripts were published in the last decade of the 20th century, and 12 of the 24 studies were performed at least 20 years ago. The manuscripts were published in 21 different journals, and 23 different first authors were listed. Studies were performed in 13 different countries; 6 studies originated from Germany.

With regard to study design, six studies were randomized controlled trials, six were controlled trials (three of which used patients as their own controls), and 12 used other, noncontrolled designs.

Methodologic quality score was “sufficient” in four studies (Cullen, 1977; Cullen, Rex, & Thorne, 1984; Galbiati & Scarabelli, 1998; Mertens, Morias, & Verhamme, 1976). An overview of the study designs and their methodologic quality scores is presented in Table 1.

Patients. Patients with intertrigo were mostly included in larger research samples covering a variety of medical diagnoses (see Table 2), mainly under the umbrella of dermatomycoses. It was difficult to determine how many patients with intertrigo of the large skin folds were included in each study. In most studies, intertrigo was either not defined or not clearly defined. The onset and duration of the condition varied considerably. Two studies (McMahon, 1994; Schindera, 1999) explicitly stated that their sample concerned noninfected intertrigo. From the details given in the manuscripts, the samples appeared to differ considerably with regard to inclusion and exclusion criteria (setting, co-morbidity, age, gender, duration of condition, etc.).

Sizes of the total study populations varied from 14 to 683 patients (median=41, mean=89.3, SD=156). However, the subsamples of patients with intertrigo of large skin folds varied from 1 to 46 (median=12.5, mean=15.4, SD=14.2). In six studies it was unclear how many patients with intertrigo were included in the research population. Studies that used some method of control had sample sizes of patients with intertrigo varying from 9 to 42, and some of them were spread over two to five treatment groups.

Interventions and control/comparisons. As can be seen from Table 3, no less than 25 different products and methods (excluding placebo agents) were reported in these 24 manuscripts. In 12 studies the intervention was studied without any form of control treatment, and three (Cullen, 1977; Cullen et al., 1984; Gip, 1966) of the other 12 studies, in which two or more agents were compared, used a placebo control treatment.

Four products were investigated in more than one study: econazole (Cullen et al., 1984; Hempel, 1975; Scherwitz, 1977; Schwarz, Much, & Konzelmann, 1975; Siboulet, 1976; Wurster, 1993), miconazole (Cullen, 1977; Galiano, Ruggiero, Gregori, & Menozzi, 1990; Mertens et al., 1976; Torok et al., 1979), tolnaftate (Hackbarth & Markson, 1966; Meyer-Rohn, 1979), and tioconazole (Somorin, 1985; Taube, Duhr, Koepke, & Haustein, 1995).

Between, and even within, products there is great variety in strength of composition, combination with other products, application form, and frequency and duration of treatment. Moreover, within one study the intervention could differ from patient to patient (for example, one patient applied the cream for 1 week and another for 3 weeks). It was also not always clear who applied the products to the skin, and none of the studies was explicit about whether the application was, in fact, performed in the right dose and frequency. All this makes comparisons between treatment groups within studies difficult, and comparisons across studies almost impossible.

Overall, the products that were studied can be labeled as antifungal agents or as disinfectants and, as such, are directed at treating the secondary infections of the intertriginous lesions. Only two studies (McMahon, 1994; Peltonen & Havu, 1981) focused solely on reducing the causal factors of the intertrigo, and none of the studies reported on the prevention of intertrigo.

Outcome measurements. Most studies used both objective and subjective methods to measure the effectiveness of the treatments. The objective measures included microscopic examination and cultures of bacteria and fungi; the subjective measures included rating the presence and severity of symptoms (itching, erythema, exudation, burning, etc.) by clinicians and/or patients, and rating the degree of cure (for example, ameliorated, worsened, cured). None of the studies described the psychometric properties of their subjective outcome measurement instruments or made any reference to their validity. Moreover, the outcomes in the various studies were measured at different times and with different frequencies.

Findings. In most studies, some favorable effects of the remedies were reported for the total research population. As shown in Table 4, the effects for the subsample of patients with intertrigo of the large skin folds could be traced in 12 of the 24 studies.

Positive effects (clinically and microbiologically) were reported for econazole (Cullen et al., 1984; Hempel, 1975; Scherwitz, 1977; Schwarz et al., 1975; Siboulet, 1976). No effect was found for dibenzthieen (Gip, 1966) or for the typical nursing interventions studied by McMahon (1994). In the noncontrolled studies, positive effects were claimed by the authors for Melaleuca alternifolia (the “tea tree,” a plant growing in Australia from which an essential oil is made) (Belaiche, 1985) and Hamamelis virginiana (the “witch hazel” tree, from which a lotion is prepared) (Schindera, 1999).

Table 1.

Study Designs and Methodological Quality Score

Table 2.

Study Populations and Sample Sizes

Table 3.

Interventions and Control Treatments

Table 4.

Effects in Intertrigo Subsamples

Discussion

The prevention and treatment of intertrigo of the large skin folds with remedies that lie within the nursing domain lack almost all forms of evidence. Research articles are very scarce, especially on the measures that are often recommended to reduce the believed causal factors of intertrigo. The scarce empirical evidence that was found in this review mainly concerned the effect of antifungal therapeutics.

Moreover, the research that was found on the subject suffers from several weaknesses, such as ill-defined research populations, lack of control groups, small sample sizes, many different interventions in very heterogeneous populations, and weak research designs.

Therefore, this review can provide no evidence-based advice for or against any particular nursing treatment. Perhaps the only thing that can be said is that some type of antifungal agent, such as econazole, may be of value if the intertrigo is infected by fungi. However, only one study (Cullen et al., 1984) reporting such an effect was placebo controlled and of sufficient methodologic quality; all other studies focusing on econazole had weaker designs and small sample sizes. It should also be noted that McMahon (1994) found that no submammary intertriginous lesion in his study was infected by a fungus, and that Hedley et al. (1990) found that only 7 of the 83 patients with intertrigo had a positive fungal culture.

This review obviously has several limitations. It is not possible to offer “best treatment” advice, since the review excluded all literature discussing medical treatments such as surgery, combinations of antifungals with corticosteroids, topical antibiotics, and systemic treatment. It is possible that reviews of those articles could identify well-evidenced treatments, but this is not very likely because, as mentioned earlier, a PubMed search with “intertrigo” [MeSH] AND “randomized controlled trial” [pt] revealed only two references: one in which Cullen (1977) investigated the efficacy of miconazole, which is included in this review, and another study (Hedley et al., 1990) comparing hydrocortisone cream with hydrocortisone plus miconazole cream, in which both products were found to be equally effective. Moreover, it can be assumed that such “medical” articles focus on treatments for infected intertrigo and not on preventing intertrigo or reducing the causal factors, which doctors consider to be the nursing domain. A glance at some of these “medical” studies excluded from this review reveals many noncontrolled trials and small sample sizes (Grigoriu & Grigoriu, 1982; Hedley et al., 1990; Minelli & Ragher, 1989; Radovic-Kovacevic, Ratkovic, & Milenkovic, 1990; Reiffers, 1981; Venier, Carnevali, Alessandrini, & Urbani, 1982). In addition, recent clinical medical guidelines state that very little evidence exists (de Kock et al., 1997; Rex et al., 2000).

Implications

This review has several implications. First, from a taxonomy point of view, it seems important to formulate a clear definition of intertrigo and of how it can be distinguished from other skin diseases, especially from the various forms of dermatomycosis. This was a problem encountered in the selection procedure of this review; most authors did not state what they meant by intertrigo, and the instruments and methods used to diagnose the condition were often not described. Generally, most publications focused on intertrigo infected by fungi, or made no differentiation between infected intertrigo and the noninfected or “simple” intertrigo mentioned by McMahon (1991).

Second, it is easy to propose a research agenda for intertrigo. Research is needed on the pathogenesis of intertrigo. What comes first, infection or maceration? What is necessary for the development of intertrigo: rubbing, occlusion, moisture, heat? Further, it would be worthwhile to conduct prevalence studies; it is a shame that no one knows how many patients suffer from intertrigo, while it is considered to be a common condition (van Beelen, 2001). And is it true that intertrigo is more prevalent in diabetic or obese patients, as many authors suggest? However, such a prevalence study should be based not only on a clear definition of intertrigo, but also on reliable and valid diagnostic instruments, which do not yet seem to exist.

There is also a great need for research on preventive measures: what is the effect of drying skin folds with a hair dryer, how often should skin folds be washed, are soaps or deodorants of value, can natural fabrics prevent intertrigo? The same applies to the many therapeutic interventions recommended in the literature. As long as everybody swears by his or her own method but no research is carried out, patients are exposed to therapies that are probably ineffective or harmful, and unnecessary costs are incurred.

Third, no research basis was found for any of the current nursing approaches in the prevention or treatment of intertrigo. Nurses and other caregivers should see this as a challenge to discuss their current practices with each other and to initiate clinical studies on the subject.

With regard to the development of a clinical nursing guideline on intertrigo, there is no published evidence on which to base any recommendations. Therefore, the development of the guideline will have to rely on other methods for evidence grading, such as expert opinion, consensus meetings, and common sense. Useful information may also be derived from partly related guidelines, such as on pressure ulcers, neonatal skin care, skin care in general, and medical guidelines on the diagnosis and treatment of dermatomycoses and bacterial skin infections.

A final remark concerns the usefulness of performing a systematic review such as the one described here. At the start of this project it was clear there was very little hard evidence on the subject, and therefore the inclusion criteria were extended to all kinds of research designs. This made it possible to identify more empirical studies, although it was sometimes difficult to obtain the publications. However, the greater number of studies that were finally identified did not change the conclusion that was obvious at the start: there is very little evidence available on the subject. Nevertheless, it may be of value to demonstrate that there is a considerable amount of research that could be improved, and to show that the nursing profession lacks a research basis for a basic nursing problem.

References

Aly, R., Forney, R., & Bayles, C. (2001). Treatments for common superficial fungal infections. Dermatology Nursing, 13(2), 91-94, 98-101.

Arndt, K., & Bowers, K. (2002). Manual of dermatologic therapeutics (6th ed.). Philadelphia: Lippincott Williams & Wilkins.

Belaiche, P. (1985). Traitement des infections cutanees par l’huile essentielle de melaleuca alternifolia [Treatment of skin infections with the essential oil of melaleuca alternifolia]. Phytotherapy, 15, 15-17.

Benigni, J.-P., Cazaubon, M., & Djian, B. (2000). Les biotextiles antiseptiques: Leur interet dans la contention medicale [Antiseptic biotextiles and their role in medical support]. Angeiologie, 52, 43-45.

Bohm, I. (1996). Candida-Intertrigo (ICD B37.2): Therapeutische trends und leitlinien [Candida-intertrigo (ICD B37.2): Therapeutic trends and guidelines]. Der Deutsche Dermatologe, 44, 39.

Bombardier, C., Bouter, L., de Bie, R., Deyo, R., Guillemin, F., Shekelle, P. et al. (2002). Cochrane Back Group. Toronto: The Cochrane Library.

Brown, D.M., & Young, V.L. (1993). Reduction mammoplasty for macromastia. Aesthetic Plastic Surgery, 17, 211-223.

Burkhart, C.G., Mulholland, M.B., & Burnham, J.C. (1981). Scanning electron microscopic evidence of bacterial overgrowth in intertrigo. Journal of Cutaneous Pathology, 8, 273-276.

Chadbourne, E.B., Zhang, S., Gordon, M.J., Ro, E.Y., Ross, S.D., Schnur, P.L. et al. (2001). Clinical outcomes in reduction mammaplasty: A systematic review and meta-analysis of published studies. Mayo Clinic Proceedings, 76, 503-510.

Crawford, F., Hart, R., Bell-Syer, S., Torgerson, D., Young, P., & Russell, I. (2001). Athlete’s foot and fungally infected toenails. British Medical Journal, 322, 288-289.

Cullen, S.I. (1977). Cutaneous candidiasis: Treatment with miconazole nitrate. Cutis, 19, 126-129.

Cullen, S.I., Rex, I.H., & Thorne, E.G. (1984). A comparison of a new antifungal agent, 1 percent econazole nitrate (Spectazole(TM)) cream versus 1 percent clotrimazole cream in the treatment of intertriginous candidosis. Current Therapeutic Research, 35, 606-609.

Davis, S.A. (1980). How I treat groin eruptions. Medical Times, 108, 54-59.

de Kock, C.A., Duyvendak, R.J., Jaspar, A., Krol, S.J., Hoeve van, J.A., Romeijnders, A.C., et al. (1997). NHG-standaard dermatomycosen [NHG-guideline dermatomycoses]. Huisarts en Wetenschap, 40, 541-552.

Fragola, L.A.J. & Watson, P.E. (1981). Common groin eruptions: Diagnosis and treatment. Postgraduate Medicine, 69, 159-159, 172.

Galbiati, G., & Scarabelli, G. (1998). Trattamento delle intertrigini con una soluzione antisettica a base di cloroxilenolo ed eosina: Studio controllato in doppio cieco verso soluzione di eosina [Treatment of intertrigo with a chloroxylenol and eosine antiseptic solution. A controlled double-blind study versus eosine solution]. Giornale Italiano di Dermatologia e Venereologia, 133, 299-303.

Galiano, P., Ruggiero, G., Gregori, S., & Menozzi, M. (1990). Dermatiti superficiali micotiche trattate con una nuova crema al 2% di miconazolo in liposomi [Superficial mycotic dermatitis treated with a 2% miconazole cream in liposomes]. Dermatologia Oggi, 5, 60-63.

Gip, E. (1966). Klinische prufung der Fungiplex-Salbe an einem geriatrischen material mit intertriginoser candidamykose [Clinical testing of Fungiplex ointment on a geriatric material with intertriginous Candida mycosis]. Dermatologische Wochenschrift, 152, 482-484.

Gloor, M. (1988). Windeldermatitis und ahnliche hauterkrankungen [Diaper rash and similar skin lesions]. Medizinische Monatsschrift fur Pharmazeuten, 11, 346-347.

Grigoriu, A., & Grigoriu, D. (1982). Superficial mycoses: Ketoconazole treatment. Mykosen, 25, 258-262.

Guitart, J., & Woodley, D.T. (1994). Intertrigo: A practical approach. Comprehensive Therapy, 20, 402-409.

Hackbarth, D.E., & Markson, E.S. (1966). A new fungicidal agent for intertriginous dermatomycosis. Current Therapeutic Research, 8, 175-178.

Hedley, K., Tooley, P., & Williams, H. (1990). Problems with clinical trials in general practice – a double-blind comparison of cream containing miconazole and hydrocortisone with hydrocortisone alone in the treatment of intertrigo. British Journal of Clinical Practice, 44, 131-135.

Hempel, M. (1975). Klinische erfahrungen in der lokalen behandlung von dermatomykosen mit econazol-hautmilch [Clinical experiences with the topical treatment of dermatomycoses with econazol-skinmilk]. Mykosen, 18, 213-219.

Itin, P. (1989). Intertrigo – ein therapeutischer problemkreis [Intertrigo – a therapeutic problem circle]. Therapeutische Umschau, 46, 98-101.

Kugelman, T.P. (1969). Intertrigo – diagnosis and treatment. Connecticut Medicine, 33, 29-36.

Marghescu, S. (1970). Die Intertrigo, ihre Prophylaxe und Behandlung [Intertrigo, its prevention and treatment]. Therapie der Gegenwart, 109, 813-814.

McMahon, R. (1991). The prevalence of skin problems beneath the breasts of in-patients. Nursing Times, 87, 48-51.

McMahon, R. (1994). An evaluation of topical nursing interventions in the treatment of submammary lesions. Journal of Wound Care, 3, 365-366.

McMahon, R., & Buckeldee, J. (1992). Skin problems beneath the breasts of inpatients: The knowledge, opinions and practice of nurses. Journal of Advanced Nursing, 17, 1243-1250.

Mertens, R., Morias, J., & Verhamme, G. (1976). A double blind study comparing Daktacort, miconazol, and hydrocortisone in inflammatory skin infections. Dermatologica, 153, 228-235.

Meyer-Rohn, J. (1979). Piadar (Ladar): Ein neues antimykoticum mit dem wirkstoff siccanin [Piadar (Ladar): New antimycotic drug containing the antifungal agent siccanin]. Mykosen, 22, 255-258.

Minelli, L., & Ragher, L. (1989). Avaliacao da eficacia terapeutica da associacao de nistatina com oxido de zinco e plastibase em dermatites intertriginosas [Evaluation of therapeutic efficacy of the association of Nystatin with zinc oxide and plastibase in intertriginous dermatitides]. Revista Brasileira de Medicina, 46, 14-18.

Moulin, G., Gaboriaux, M.C., & Joseph, J.Y. (1983). Pathologie du pli interfessier [Pathology of the perianal region]. Concours Medical, 105, 353-367.

Peltonen, L., & Havu, V.K. (1981). Debrisan paste in the treatment of exudative intertriginous dermatoses and leg ulcers. Clinical Trials Journal, 18, 353-362.

Radovic-Kovacevic, V., Ratkovic, R., & Milenkovic, A. (1990). Obytin u lecenju povrsnih mkoza koze [Obylin (1% cyclopyroxolamine) in the treatment of superficial skin mycoses]. Medianski Pregled, 43, 329-331.

Reiffers, J. (1981). Essai d’une nouvelle creme antifongique et antibacterienne a base de triclosan (CGP 433-Logamel). [Clinical report of triclosan, a new topical with antifungicidal and antimicrobial activity (CGP 433-Logamel)]. Schweizerische Rundschau fur Medizin Praxis, 70, 1050-1053.

Rex, J., Walsh, T., Sobel, J., Filler, S., Pappas, P., Dismukes, W., et al. (2000). Practice guidelines for the treatment of candidiasis. Clinical Infectious Diseases, 30, 662-678.

Scherwitz, C. (1977). Klinische prufung von econazol haut-milch und creme bei hautmykosen [Clinical study with econazol skin-milk and cream in dermatomycoses]. Zeitschrift fur Hautkrankheiten, 52, 117-125.

Schindera, I. (1999). Intertrigo-therapie heute: Behandlungsergebnisse mit hamamelis virginiana [Current intertrigo therapy: Treatment results with Hamamelis virginiana]. Forschende Komplementarmedizin, 6, 31-32.

Schwarz, K., Much, T., & Konzelmann, M. (1975). Poliklinische prufung von econazol bei 594 fallen von hautmykosen. [Study of econazol in 594 outpatients with dermatomycoses]. Deutsche Medizinische Wochenschrift, 100, 1497-1500.

Selden, S. (2001). Intertrigo. eMedicine Journal, 2.

Siboulet, A. (1976). L’econazol spraypoudre en dermato- venereologie [Econazole spray-powder in dermovenereology]. Schweizerische Rundschau fur Medizin Praxis, 65, 977-981.

Somorin, A.O. (1985). Clinical evaluation of tioconazole in dermatophyte infections. Current Therapeutic Research, 37, 1058-1061.

Steultjens, M., Dekker, J., Bouter, L., Van de Nes, J., Cardol, M., & Van den Ende, C. (2002). Occupational therapy for multiple sclerosis (Protocol for a Cochrane Review). Toronto: The Cochrane Library.

Taube, K.-M., Duhr, M., Koepke, M., & Haustein, U.-F. (1995). Behandlung von pilzinfektionen der haut mit tioconazol [Treatment of fungal infections of the skin with tioconazole]. Zeitschrift fur Dermatologie, 181, 125-128.

Torok, L., Varkonyi, V., Podanyi, B., Soos, G., Denes, M., & Kiraly, K. (1979). Vergleichende Untersuchungen mit mycosolon, miconazol, und depersolon-salbe im doppelblindversuch [Comparative studies of Mycosolon, miconazole and depersolone ointments in a double-blind test]. Dermatologische Monatsschrift, 165, 788-794.

van Beelen, A. (2001). Preventie en behandeling van intertrigo: Ont-smetten [Prevention and treatment of intertrigo]. Verpleegkunde Nieuws, 75, 18-21.

van Tulder, M.W., Assendelft, W.J., Koes, B.W., & Bouter, L.M. (1997). Method guidelines for systematic reviews in the Cochrane Collaboration Back Review Group for Spinal Disorders. Spine, 22, 2323-2330.

Venier, A., Carnevali, P., Alessandrini, A., & Urbani, S. (1982). Valutazione clinica controllata di un preparato topico contenente fluocortolone biesterificato associato a clorchinaldolo [Controlled clinical trial of a topical formulation of fluocortolone (as pivalate and caproate esters) plus chlorquinaldol]. Clinica Europea, 21, 932-939.

Wanic, A. (1967). Behandlungsversuche einiger Hautkrankheiten mittels chinolin und chinaldinderivaten [Therapeutic trials of various skin diseases using chinoline and chinaldine derivatives]. Zeitschrift fur Haut- und Geschlechtskrankheiten, 42, 27-32.

Wurster, J. (1993). Die behandlung von intertriginosen dermatosen und windeldermatitis mit einer kombination von econazol und zinkoxid [Treatment of intertriginous dermatosis and diaper dermatitis with a combination of econazole and zinc oxide]. Ars Medici, 83, 792-795.

Patriek Mistiaen, MSN, RN, is a Nurse Researcher, Netherlands Institute of Health Services Research, Utrecht, The Netherlands.

Else Poot, MSN, RN, is a Nurse Researcher, Netherlands Institute of Health Services Research, Utrecht, The Netherlands.

Sophie Hickox, MS, RN, is a Project Advisor, Netherlands Expertise Center for Nursing and Care, Utrecht, The Netherlands.

Christ Jochems, RN, is a Clinical Nurse Specialist, Dermatology, Albert Schweizer Hospital, Dordrecht, The Netherlands.

Cordula Wagner, PhD, is a Senior Researcher, Netherlands Institute of Health Services Research, Utrecht, The Netherlands.

Acknowledgment: The authors thank the NIVEL Library for their tremendous assistance in tracing and ordering the literature.

Copyright Anthony J. Jannetti, Inc. Feb 2004

Shuttle Columbia’s wreckage finds final resting place

Shuttle Columbia’s wreckage finds final resting place

For NASA engineers, debris’ assembly holds powerful meaning

By JAMES GLANZ New York Times

Sunday, February 8, 2004

Cape Canaveral, Fla. — The first piece of wreckage lifted from the first trailer to arrive from the fields of Texas and Louisiana was a window frame. Just 10 days before, on the morning of Feb. 1, 2003, astronauts had still been looking through that frame at the glorious promise and the forbidding dangers of space, nearly 100 miles over the dark Pacific.

That window frame now lies propped up to the left of the entrance to a modest room on the 16th floor of the enormous Vehicle Assembly Building.

All six frames for the space shuttle Columbia’s forward windows rest on the same low platform, in approximately their original spacing, looking like smashed eyeglasses lying on the pavement after an accident whose consequences are still too horrible to absorb.

Those mottled, mangled artifacts would alone be enough to transform this room into a haunted place, an attic of woe approached only with reverence and a very human shiver on the nape of the neck. But beyond the window frames is much more than that, an overflowing, and overwhelming, repository of all the physical debris that led investigators to the secret of what brought down the shuttle, along with many of the most emotional reminders of what the Columbia was and who rode it into space.

A year after hot gases breached the damaged coatings on the leading edge of the left wing and produced the disaster, NASA has chosen this room as an aeronautical reliquary of sorts, a final resting place for what one engineer called the “eighth soul” that was lost on that bright Saturday morning after the seven astronauts, the Columbia itself.

Officially, the debris, from charred and shattered thermal tiles to the strongbox-like container that preserved magnetic tapes with crucial sensor information, has been gathered in the service of structural engineers and materials scientists who want to study the physics of re-entry into the atmosphere at hypersonic speeds. But even engineers who spent months focusing on the cold forensics of the debris say the assemblage brings with it other powerful levels of meaning.

Final resting place

“I think it’s fair to say that the pieces that are out are the ones that helped us most in the investigation,” said Michael Leinbach, the shuttle launch director who led the reassembly of the debris. “But it was not intended that way. For me, it brings back a lot of memories, a lot of crushingly bad memories, but also some good memories.”

Still, Leinbach added: “This is not a museum. This is not a display. This is where Columbia will be for the rest of her life.”

Searchers found nearly 84,000 pieces of debris. Most of those, including the damaged crew cabin, will be kept in storage in the Vehicle Assembly Building and not displayed. But any piece of debris can be requested by researchers in academe, in the National Aeronautics and Space Administration or at other institutions, said Scott Thurston, who led the Columbia preservation team.

“Anybody, literally anybody,” Thurston said, “can request a piece of Columbia.”

Artifacts evoke humanity

At 525 feet 10 inches high and covering eight acres, the Vehicle Assembly Building looks impossibly big on the flat Florida marshlands. The dim interior of the building, which is used to mate the shuttle orbiter with its external fuel tanks, has a twilight feel. Tucked into a corner of the 16th floor is the repository, not much more than 100 feet long and perhaps 30 feet across.

After the shuttle windows, which seem to fill the room with a kind of tense unresolved grief, the centerpiece is a patchy reconstruction of the left wing’s leading edge. Black curving chunks of the reinforced carbon-carbon that formed the leading edge have been reassembled like the fragments of a dinosaur in a natural history museum. Held in place by thermoplastic resin forms and white metal stilts, the battered pieces straddle the very section of the edge that was breached.

“This is No. 8 here,” Leinbach said, pointing to a section gnawed to pieces and partly vaporized. “The breach was in the bottom of this panel.”

The bottom is entirely missing, and the inside of all the pieces is spattered with lighter-colored deposits of aluminum and other metal melted by the hot gases that doomed the craft. Parts of the adjacent panel, No. 9, once roughly a quarter-inch thick, had been honed to a razor’s edge by the same gases.

“I’d love to find that piece,” Leinbach said, pointing to the vanished underside of panel No. 8. “But we’re never going to find it.”

In another place, charred tiles from the underside of the left wing, eaten away and striated, were in their original position under a sheet of thermoplastic resin. The striations radiated from the fateful point on Panel 8, giving investigators further confidence that they understood what had happened when the Columbia re-entered the atmosphere.

Other artifacts simply evoke the humanity that perished with the craft. There is a hatch that the crew used to enter Columbia. Four mangled rings from a tunnel connecting the cockpit with the bay. Thruster nozzles. A piece of the nose landing gear with two rubber tires in place. More charred tiles, put back together like a jigsaw puzzle.

Nigeria adopts new population policy to improve quality of life

Nigeria adopts new population policy to improve quality of life

ABUJA, Jan. 28 (Xinhua) — Nigeria has adopted a new population policy which it said would improve the quality of life and standard of living of Nigerians.

After a Federal Executive Council meeting in the capital Abuja on Wednesday, Nigerian Minister of Health Eyitayo Lambo said the policy would replace the 1988 national population policy under which each couple was encouraged to have four children or more.

The 1988 population policy resulted in an increasing population growth rate over the past more than 10 years to reach 3 percent in 2003 in the country.

Nigeria, the most populous African country, now has an estimated population of over 138 million.

The health minister said the target for the 2004 policy was to ensure that Nigeria’s population growth rate was reduced from the current 3 percent per annum by 2015.

Lambo said there was need to cut down population growth rate as it had become evident that without effective control, Nigeria’s population would hit 400 million in the next 21 years.

On how the government would check population surge, Lambo said the use of modern contraceptives would be promoted.

Lambo said the new policy would also encourage Nigerians on the need to have the number of children they could cater for, since there was no ceiling on the number of children per couple in the new policy.

To ensure effective implementation, Lambo said a National Population Council which is chaired by President Olusegun Obasanjo had been set up.

Members of the council include representatives of the National Assembly, the National Planning Commission, Armed Forces, the Police as well as Ministers of Finance, Health, Education, Information, Agriculture, Labor, Environment, Works, Housing, Women Affairs, Justice, Internal Affairs, Science and Technology and Sports.

A Teacher in Space

Educator-astronaut Barbara Morgan will visit the International Space Station in 2003.

Science@NASA — NASA Administrator Sean O’Keefe today announced that Barbara Morgan, the agency’s first Educator Astronaut, has been assigned as a crewmember on a November 2003 Space Shuttle mission to the International Space Station.

Today’s announcement was highlighted with a ceremony at the Maryland Science Center in Baltimore and fulfills the Administrator’s commitment earlier this year to send an educator into space in a renewed mission to inspire a new generation of explorers. Morgan’s flight represents the first of what is expected to be many flights as part of a new Educator Astronaut program, which will be unveiled in early 2003.

“NASA has a responsibility to cultivate a new generation of scientists and engineers,” said Administrator O’Keefe.

“Education has always been a part of NASA’s mission, but we have renewed our commitment to get students excited about science and mathematics. The Educator Astronaut program will use our unique position in space to help advance our nation’s education goals,” he explained.

Morgan’s assigned mission, STS-118, has two primary objectives: the installation of additional truss segments that will increase power and communications to the International Space Station and the delivery of additional supplies for the Station’s crew. Morgan will participate in a number of educational events from space and be actively involved in the flight as a fully trained NASA astronaut.

A native of McCall, Idaho, Morgan was selected in 1985 as the backup candidate for the Teacher in Space program. Following the Challenger accident, the program was suspended and Morgan worked with NASA’s Education Office, meeting with teachers and students across the country to share her space training experiences and their relevance to the classroom and America’s future.

In the fall of 1986 Morgan returned to teaching at McCall-Donnelly Elementary School in Idaho, but continued to travel the country in support of NASA’s education efforts.

In January 1998, she was selected by NASA to complete her astronaut training. For more than a year, Morgan has served as a spacecraft communicator, or CAPCOM, in Mission Control at NASA’s Johnson Space Center in Houston, providing the voice link between the flight control team and crews orbiting in space.

“Barbara’s commitment and dedication to education is an inspiration to teachers across the country,” concluded Administrator O’Keefe. “She embodies the spirit and desire of this agency to get students excited about space again, and I’m pleased that she’ll be able to fulfill that mission from orbit aboard the Space Shuttle and the International Space Station,” he said.

The Science Directorate at NASA’s Marshall Space Flight Center sponsors the Science@NASA web sites. The mission of Science@NASA is to help the public understand how exciting NASA research is and to help NASA scientists fulfill their outreach responsibilities.

On the Net:

NASA

More science, space, and technology from RedNova

Renaming papillary microcarcinoma of the thyroid gland: The Porto proposal

The 12th Annual Cancer Meeting held at the Institute of Molecular Pathology and Immunology of the University of Porto (IPATIMUP), Porto, Portugal, on March 3-5, 2003, was devoted to carcinomas of thyroid follicular cells. During the course of 3 days, a group of basic scientists, pathologists, and clinicians interested in thyroid neoplasia exchanged views about several important aspects of the biology, pathology, and behavior of these tumors. One of the items was the thyroid neoplasm that is currently designated as papillary microcarcinoma. Pollowing a general discussion on the subject and the conclusion that a change in terminology ought to be considered, a working group composed of the 4 individuals listed as authors of this article met for the purpose of exploring the issue in more detail. A consensus was reached by the group to propose a new term for the entity. We would like to present to our fellow pathologists and to the medical community at large the rationale behind our choice, which we refer to as”the Porto proposal.”

Papillary carcinoma is the most common malignant tumor of the thyroid gland. In addition to the clinically detectable cases, the size of which ranges widely, there is a variant currently designated papillary marocamMOWza (PMiC), which has been defined as a papillary carcinoma measuring 1 cm or less in diameter [1,2]. This variant, also known as occult papillary carcinoma, latent papillary carcinoma, small papillary carcinoma, nonencapsulaled thyroid tumor, and occult sclerosing carcinoma [3,4], is an extremely common condition, to the point of having been regarded as “a normal finding” [5]. It practically always represents an incidental finding at autopsy or in thyroid glands removed for other reasons. Its prevalence in the various systematic autopsy studies that have been carried out has ranged between 5.6% and 35.6% [5-10]. The latter figure is from Finland and is based on a detailed study of 101 cases [5]. The authors of that paper estimated that if serial sections of the glands had been taken, the total number of detected tumors would have been no less than 308. These studies have also shown that the prevalence of this tumor increases steeply from birth to adulthood and that it remains relatively constant afterward. When coupled with the known marked differences in incidence that exist between PMiC and clinically detectable papillary carcinoma, this observation suggests that most PMiCs develop in adolescence and young adulthood and that they tend to remain stable afterward, in the sense of growing at a similar rate to the normal gland, unless an additional event were to occur causing the tumor to speed up its rate of growth and become clinically apparent. Taking into account the frequency of PMiC and the rarity of clinically significant papillary carcinoma, the chance of this additional event taking place is obviously very low.

The fact that PMiC shares architectural, cytologic, immunohistochemical, and behavioral features with its larger counterpart has been well documented. In particular, it has been demonstrated that it has the capability to spread to the regional lymph nodes and that under exceptional circumstances it can even metastasize to distant sites [II]. These very rare cases, however, generally precent with metastases. In the case of tumors discovered incidentally, the chances of later metastases developing are extremely low. Indeed, it has been repeatedly demonstrated that under these circumstances the overwhelming majority of these tumors are of no clinical significance. As a corollary, the conclusion has been reached by several groups that no additional therapy is necessary if a PMiC is detected incidentally in a thyroid gland [12,13]. We fully agree with this recommendation and have often made it ourselves in cases that have been sent to us in consultation. However, we are conscious of the fact that the use of the term carcmowza in a pathology report sends to both surgeon and patient a message with a considerable therapeutic, prognostic, psychologic, and financial impact, and we are also aware that these implications are not necessarily tempered by whatever qualifiers and comments one may choose to include in the report. As a result, whenever the diagnosis of PMiC is made, the definite possibility exists that the repercussions of this diagnosis will be far greater that those justified by the biologic potential of the neoplasm. The proposal was therefore discussed at the Porto meeting to rename this entity in a way that would avoid these potential untoward effects while still accurately reflecting its nature. After considering and discarding several alternatives, the term papillary microtumor (PMiT) was chosen, since it was felt that it was the one coming closer to the fulfillment of these requirements. To wit, it indicates the fact that the lesion is of small size, that it is a neoplastic process (remaining purposefully noncommittal about its malignant potential, because while the tumor may show microscopic local invasion, it is clinically benign), and that it belongs to the papillary family of neoplasms.

It should be emphasized that this proposed change in terminology was specifically discussed in connection with the most common situation, i.e., that of a single focus of papillary carcinoma measuring 1. Patient’s age. The recommendation made in this document excludes tumors detected in children and adolescents under the age of 19 years. As pointed out in this journal some years ago in a Guest Editorial written by the Chernobyl Pathologists Group (which included 2 of the authors of the current paper) [14], a significant number of papillary carcinomas with a diameter of less than 1 cm in diameter occurring in this age group show direct extrathyroidal invasion and are associated with distant metastases. It was therefore suggested in that document that the term papillary microcarcinoma (here replaced by PMiT) be employed only for adult patients and that papillary neoplasms occurring in children be designated as papillary carcinomas regardless of size until a study has been carried out in them correlating size and other features to prognosis. We agree with this recommendation, and add that the age limit we have arbitrarily adopted (under 19 years) should be reviewed when further information on adolescents becomes available.

2. Number of tumors. Whenever 2 or more lesions with the appearance of PMiT are detected, the possibility must be considered that they represent intrathyroid spread from a separate primary thyroid carcinoma. If no such primary can be found and the lesions are morphologically typical of PMiT, a diagnosis of multicentric PMiT can be made. More evidence is needed on the appropriate diagnosis and management of cases where 2 or more lesions are present, each individually less than 1 centimeter in diameter, but greater than 1 centimeter when taken together; until such evidence is available, it would be safer not to use the term PMiT under these circumstances.

3. Unusual microscopic appearances. The proposal here outlined applies to the typical case of PMiT and excludes, for the time being and until further information is obtained, those rare instances in which the tumor has features that may be indicative of a potential for aggressive behavior. This includes cases accompanied by invasion of the thyroid capsule, blood vessel permeation, or tall cell features [15].

4. PMiTs occurring within a benign lesion. Occasionally, lesions fulfilling the criteria for PMiT are found entirely confined within benign thyroid nodules having otherwise typical features of follicular adenoma or hyperplastic (adenomatoid) nodule. It is proposed that these lesions be designated as “PMiT within . . .”. This term should be reserved for those cases in which the PMiT appears as a sharply outlined focus within the benign nodule, rather than the more common situation in which ill-defined multiple areas of nuclear clearing are encountered, the significance of which is beyond the scope of this communication.

5. PMiT found at ultrasound, computed tomography, or magnetic resonance imaging. If a papillary carcinoma of less than 1 cm in diameter is found incidentally at radiologic examination performed for some other reason, we believe that it should still be classified as PMiT. Conversely, if the tumor were to be found in the course of an investigation carried out because of the presence or suspected presence of metastases, we do not recommend the use of the term.

We would be remiss if we failed to mention that 2 very similar proposals have been made in the past using the same reasoning and aiming at the same goals. Hazard et al. [3] proposed to designate this lesion as nonencapsulated thyroid tumor because “the surgeon may become unduly alarmed when the pathologist reports the presence of carcinoma.” They added that “this may lead to reoperation, radical dissection of the neck or extensive irradiation, all of which are unnecessary and undesirable.” Harach et al. [5] proposed the term occult papillary tumor “in order to avoid unnecessary operations and serious psychologic effects on patients.” However, the tumors are not necessarily nonencapsulated, and are increasingly being found at ultrasound examination and therefore not always “occult.” Semantics aside, it is hoped that the “Porto proposal” here presented, which is an elaboration and reaffirmation of those earlier recommendations, will be considered by the setters of tumor nomenclature in this field and ultimately accepted by the pathology community. It is our expectation that the adoption of this terminology will decrease the danger of overtreatment, minimize the psychologic anxiety engendered by a diagnosis of carcinoma, and maintain unchanged the patient’s eligibility for life insurance. Needless to say, the pathologist should use discretion in applying the term to any tumor with unusual features. Furthermore, the renaming of this tumor as here proposed should not prevent the pathologist from reinforcing the message regarding the generally innocuous nature of PMiT in the form of a written or verbal comment to the clinician whenever such reinforcement is felt necessary.

References

1. Rosai J, Carcangiu ML, DeLellis RA. Tumors of the thyroid gland. Atlas of tumor pathology, Third Series, Fascicle 5, Washington, DC, pp. 96-100, 1992

2. Hedinger CE, Williams ED, Sobin LH, et al. [WHO] Histological typing of thyroid tumours, 2nd Edition, Berlin, Springer-Verlag, 1988

3. Hazard JB, Crile G, Dempsey WS. Nonencapsulated sclerosing tumors of the thyroid. J Clin Endocr 9:1216-1231, 1949

4. Klinck GH, Winship T. Occult sclerosing carcinoma of the thyroid. Cancer 8:701-706, 1955

5. Harach HR, Franssila KO, Wasenius V-M. Occult papillary carcinoma of the thyroid. A “normal” finding in Finland. A systematic autopsy study. Cancer 56: 531-538, 1985

6. Bondeson L, Ljungberg O. Occult thyroid carcinoma at autopsy in Malmo, Sweden. Cancer 47:319-323, 1981

7. Fukunaga FH, Yatani R. Geographical pathology of occult thyroid carcinoma. Cancer 36:1095-1099, 1975

8. Nishiyama RH, Ludwig GK, Thompson NW. The prevalence of small papillary thyroid carcinomas in 100 consecutive necropsies in an American population. In DeGroot LJ, (ed): Radiation-associated thyroid carcinoma. Grune & Stratton, New York, pp. 123-135, 1977

9. Sampson RJ. Prevalence and significance of occult thyroid cancer. In DeGroot LJ, (ed): Radiation-associated thyroid carcinoma. Grune & Stratton, New York, pp. 137-153, 1977

10. Sobrinho-Simoes MA, Sambade MC, Goncalves V. Latent thyroid carcinoma at autopsy: A study from Oporto, Portugal. Cancer 43:1702-1706, 1979

11. Patchefsky AS, Keller IB, Mansfield CM. Solitary vertebral column metastasis from occult sclerosing carcinoma of the thyroid gland: Report of a case. Am J Clin Pathol 53:596-601, 1970

12. Hubert JP, Kiernan PD, Beahrs OH, McConahey WM, Woolner LB. Occult papillary carcinoma of the thyroid. Arch Surg 115:394-398, 1980

13. Hay ID, Grant CS, van Heerden JA, Goellner JR, Ebersold JE, Bergstralh EJ. Papillary thyroid microcarcinoma: A study of 535 cases observed in a 50-year period. Surgery 112:1139-1147, 1992

14. Williams ED, on behalf of the Chernobyl Pathologists Group (Abrosimov A, Bogdanova T, Ito M, Rosai J, Sidorov Y, Thomas GA). Guest Editorial: Two proposals regarding the terminology of thyroid tumors. Int J Surg Pathol 8:181-183, 2000

15. Johnson TL, Lloyd RV, Thompson NW, Beierwaltes WH, Sisson JC. Prognostic implications of the tall cell variant of papillary thyroid carcinoma. Am J Surg Pathol 12:22-27, 1988

Juan Rosai, MD,* Virginia A. LiVolsi, MD,[dagger] Manuel Sobrinho-Simoes, MD,[double dagger] and E. D. Williams, MD[sec]

From the *Department of Pathology, National Cancer Institute, Milan, Italy; [dagger]Department of Pathology, University of Pennsylvania Medical School; [double dagger]Medical Faculty of Porto and Institute of Molecular Pathology and Immunology of the University of Porto (IPATIMUP), Porto, Portugal; and the [sec]Thyroid Carcinogenesis Group, Strangeways Research Laboratory, University of Cambridge, Cambridge, England.

Copyright Westminster Publications, Inc. Oct 2003

Building Planets in Cyberspace

Jet Propulsion Laboratory — Recipe: Take a rocky mass [about 12.8 thousand kilometers (nearly 8 thousand miles) wide], add carbon dioxide, water vapor and methane. Place in stable, circular orbit, the same distance from a sunlike star as the distance between Earth and the Sun. Heat to an average of 10 degrees Celsius (50 degrees Fahrenheit) for 1 billion years.

Over the next few years, scientists at NASA’s Jet Propulsion Laboratory plan to cook up a series of planets based on recipes like the one above and play around with the ingredients. But they won’t be using real materials — it will all be done in cyberspace.

The ultimate goal is to simulate a plausible range of habitable planets, and to find out how they might appear to planet-finding missions of the future.

Dr. Vikki Meadows is principal investigator of the Virtual Planetary Laboratory, a project that was selected as a new lead team for the NASA Astrobiology Institute to create tools that will simulate a diverse range of planets and life forms.

The Virtual Planetary Laboratory will marshal the best supercomputers available, and a team of 28 researchers from disciplines as varied as statistics and biology, to model a gallery of planetary atmospheres.

The team’s findings will directly influence the development of future space missions such as Terrestrial Planet Finder, which will look for habitable planets around other stars.

“We’re trying to build a terrestrial planet inside a computer,” Meadows says. “This will help us determine what the signatures of life on an extrasolar planet will look like, once we have the technology to study them.”

The closest planetary systems are many light years away, but the faint light the planets emit, if separated into its component frequencies, can provide a wealth of information. By analyzing the colors of radiation detected by Terrestrial Planet Finder, astronomers can look for the signatures of biological products.

These “biosignatures” can provide evidence that the environments on these planets may be able to support life. But what will these as-yet-unseen biosignatures look like? Finding out is part of the challenge, and that’s where the Virtual Planetary Laboratory comes in.

Dr. Cherilynn Morrow of the Space Science Institute of Boulder, Colorado, a member of the team, says the Virtual Planetary Laboratory will help scientists know how to recognize habitable worlds and to discriminate between planets with and without life.

“The Virtual Planet Laboratory is playing a key role in defining how we will conduct our search for living worlds in orbit around other stars of the Milky Way galaxy,” says Morrow, who heads the project’s education and public outreach component.

Currently, scientists are limited to just one model of a habitable planet: Earth. The key to expanding our concept of what constitutes a habitable planet, Meadows says, is to play around with the recipe, trying different combinations of size, composition and location. A world teeming with microbes, for example, could produce an atmosphere rich in methane. And to learn about the plausible range of temperatures at which life might exist, “we’ll model everything from frozen hells to burning hells,” Meadows says.

To help scientists recognize younger Earths, the team will model our home planet as it would have appeared from space billions of years ago, before its atmosphere became rich in oxygen.

An equally important goal of the project is to learn how to recognize what Meadows calls “false positives” — planets that may appear to have life, but don’t. These planets would mimic some of the accepted signs of life, but would produce them using geological and atmospheric processes. Such planets might be distinguishable from inhabited worlds by looking at a broader spectral range, or taking many measurements over a period of time to understand the way these “signatures” change.

In the first phase of the project, the software will be used to re-create planets we’re familiar with: Venus, Earth and Mars. Comparing the models with real data from observations of these planets will tell scientists whether the software is producing accurate simulations. Later stages will produce abiotic, or non-living, planets, and eventually, planets where life has found a foothold.

Meadows stresses that she and her colleagues aren’t looking for “ET the Extraterrestrial.” Their sights are set on life on a lower order — even microbial. “I’m not looking for intelligent life,” Meadows says. “I’m looking for bugs from space.”

Eavesdropping on Europa

Sounding Europa On the Cheap: Eavesdropping On Ice

Geological Society of America — Forget drilling. A simpler and cheaper way to search for an ocean under Europa’s glacial surface is to land a solitary electronic ear on the Jovian moon, and listen to the echoes of cracking ice.

By applying a technique already tested on Arctic Sea ice, a single “geophone” listening device could reveal how the icy moon’s surface flexes, cracks and quakes with tidal forces. Just how the resulting vibrations bounce around inside the Moon-sized world could reveal the depth of the ice and extent of the potentially life-sustaining liquid ocean underneath.

Makris will present the advantages of putting an ear to Europan ice at the annual meeting of the Geological Society of America on Wednesday, October 30, in Denver, CO.

“In a way, it is an elegant approach,” says Nick Makris, an acoustical oceanographer and associate professor at the Massachusetts Institute of Technology. And with the funding for a US Europa lander mission currently in limbo, a simpler, lower-cost approach may stand a better chance of surviving budgetary cuts and actually reaching the mysterious ice world in the foreseeable future.

The principle behind the proposal is the same as that employed by ships equipped with echo-sounding bathometers, explains Makris. Bathometers have a single source of sound and then listen with a single “ear” for that sound’s echoes. By analyzing the echoes according to what’s known about the speed of sound through various materials, depths can be determined.
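To make the arithmetic behind this echo-sounding principle concrete, here is a minimal Python sketch; the sound speed and echo delay are illustrative assumptions, not values reported by Makris or any mission study.

```python
# Minimal sketch of the echo-sounding arithmetic described above.
# The sound speed and echo delay below are illustrative assumptions,
# not values reported by the researchers.

def depth_from_echo(round_trip_time_s: float, sound_speed_m_s: float) -> float:
    """Distance to a reflecting layer from a round-trip echo time.

    The signal travels down and back, so the one-way depth is
    (speed * time) / 2.
    """
    return sound_speed_m_s * round_trip_time_s / 2.0


# Assume a seismic wave speed of ~3,800 m/s in cold ice and an echo returning
# about 10.5 seconds after a surface cracking event (both hypothetical).
depth_m = depth_from_echo(round_trip_time_s=10.5, sound_speed_m_s=3800.0)
print(f"Estimated depth to reflector: {depth_m / 1000:.1f} km")
```

With these assumed inputs the estimate lands near the roughly 20-kilometer crustal thickness quoted later in the article, which is the kind of inference the technique is intended to support.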

On Europa the sounds and seismic vibrations will not be generated by the geophone, but by the natural cracking and snapping of the ice every few days as the moon reaches the most elongated part of its oblong, 3.5-day orbit around Jupiter.

Like the Earth’s Moon, Europa keeps the same side facing its planet. But during the extreme portion of its orbit there is a tendency for Europa to shimmy a bit from side to side, causing tidal stress within the bulging ice crust that faces the giant planet.

“One scientist has described Europa as creaking like a ship,” said Makris. Exactly how much creaking and cracking goes on is unknown, he says, and it will be the first task of a geophone to find out.

Models of Europa predict that many of the cracks now seen on its surface were probably created by the tidal forces and so are probably still being created, says Makris. Although no changes in cracks have been spotted in either Voyager or Galileo imagery, neither spacecraft had the visual resolution to detect the smaller cracks that probably grow and change on a daily basis, he says.

If there is too much noise, in fact, a lone geophone could be less useful, says Makris. Constant groaning and popping would make it hard to determine which snap is related to which echoes, and would reveal little about the moon’s interior.

What would be ideal are a few really loud explosive cracking noises or a few meteor impacts every few days. Those would be easy for a geophone to detect, along with the seismic reflections as vibrations bounce revealingly off features within the moon.

Current rough estimates put Europa’s icy crust at about 20 kilometers thick with an ocean beneath that is at least six kilometers deep. That’s about twice as deep as Earth’s open ocean depths (not counting deep sea trenches).

The solitary geophone technique has already been tested on Arctic ice, Makris says, but the ice depth there is just a small fraction of what may separate Europa’s ocean from its surface. Also, it is winds and ocean currents that shift the Arctic ice and generate the natural noises, not tidal forces.

To perform a better field test of the geophone technique, Makris and his colleagues hope to collaborate with NASA and venture to the Antarctic. There the frozen Lake Vostok and other Antarctic deep ice sheets provide more Europa-like conditions, he says.

Despite its promise of detecting the structure of Europa, one thing a geophone cannot do is look for evidence of life under the ice, Makris points out. That will still require the far more complicated and inevitably more expensive drilling technologies that are being studied and developed by other researchers.

Research philosophy: Towards an understanding

In this paper, Frank Crossan argues that the distinction between quantitative and qualitative philosophies and research methods is sometimes overstated, and that triangulation of methods in contemporary research is common. It is, therefore, important to understand the strengths and weaknesses of each approach, and this paper aims to provide the novice researcher with a basis for developing that understanding. A descriptive analysis of the philosophies of positivism and post-positivist thinking in relation to research methodology is presented both as an introduction to the philosophical basis of research, and as a sound basis from which to discuss the ‘quantitative-qualitative’ debate.

key words

research methods

research philosophy

positivism

post-positivism

quantitative

qualitative

Introduction

Positivism adopts a clear quantitative approach to investigating phenomena, as opposed to post-positivist approaches, which aim to describe and explore in-depth phenomena from a qualitative perspective. This paper aims to introduce the philosophical basis of research by, firstly, providing a descriptive analysis of positivist and post-positivist philosophies, and secondly, by providing a sound basis from which to discuss the ‘quantitative-qualitative’ debate in relation to research methods. It begins by exploring the reasons for studying philosophical issues in general and then more specifically in relation to research methodology. The philosophies of positivism and post-positivist thinking are explored using literature drawn from a variety of disciplines and sources to identify the key components and elements of both.

Understanding philosophy

There are numerous reasons why an understanding of philosophical issues is important. Hughes (1994) asks: ‘…what is it about philosophy that gives it this seemingly vital role in human intellectual affairs? Is this simply a contingent fact of our intellectual history, or is there something distinctive about philosophy itself which gives it this authoritative place?’ In answer to this question it could be argued that it is the nature of philosophical questions that best demonstrates the value of understanding philosophy. It is the uncomplicated style and innocent way of questioning, which produces confusion and instability in our assumptions and ideas about the world, that makes the study of philosophy of special benefit (Smith 1998). The indirectness and circular nature of philosophical questioning in itself is helpful, as it often encourages in-depth thinking, and generates further questions in relation to the topic under consideration. Clarifying assumptions related to personal values is also seen as useful when planning a research study. According to Proctor (1998), individuals rarely take time to do this in everyday life, but exploring basic personal beliefs could assist in understanding wider philosophical issues, notably ‘…the interrelationship between ontological (what is the nature of reality?), epistemological (what can be known?), and methodological (how can a researcher discover what she or he believes can be known?) levels of enquiry’ (Proctor 1998).

Easterby-Smith et al (1997) identify three reasons why the exploration of philosophy may be significant with particular reference to research methodology:

* Firstly, it can help the researcher to refine and specify the research methods to be used in a study, that is, to clarify the overall research strategy to be used. This would include the type of evidence gathered and its origin, the way in which such evidence is interpreted, and how it helps to answer the research questions posed.

* Secondly, knowledge of research philosophy will enable and assist the researcher to evaluate different methodologies and methods and avoid inappropriate use and unnecessary work by identifying the limitations of particular approaches at an early stage.

* Thirdly, it may help the researcher to be creative and innovative in either selection or adaptation of methods that were previously outside his or her experience.

The ongoing ‘quantitative/qualitative’ debate is fogged by a lack of coherent definitions and by a focus on methods rather than an exploration of underlying philosophy. According to Clarke (1998), research methods can be described, considered and classified at different levels, the most basic of which is the philosophical level. The methodological distinctions most commonly used focus on the differences between quantitative research, which is generally associated with the philosophical traditions of positivism, and qualitative research, most commonly allied with post-positivist philosophy (Polit et al 2001). The philosophical level of a research method relates to its assumptions based on the most general features of the world, encompassing such aspects as the mind, matter, reality, reason, truth, nature of knowledge, and proofs for knowledge (Hughes 1994). If we, for example, examine how research based on a positivist philosophy differs from that based on a post-positivist philosophy, it becomes easier to judge which approach is appropriate to the needs of a particular study and to clarify the nature of the most suitable approach. From this we can see that the choice of approach may be dependent on the context of the study and the nature of the questions being asked. The researcher’s experience, understanding of philosophy, and personal beliefs may also have some bearing on the method adopted (Denzin and Lincoln 1994). Shih (1998) expands this idea and lists four areas for consideration when deciding on a research method: the philosophical paradigm and goal of the research, the nature of the phenomenon of interest, the level and nature of the research questions, and practical considerations related to the research environment and the efficient use of resources.

Proctor (1998) considers that consistency between the aim of a research study, the research questions, the chosen methods, and the personal philosophy of the researcher is the essential underpinning and rationale for any research project. She indicates that before any decision on research method can be made, the two extremes of research philosophy, i.e. positivism and post-positivism, need to be explored and understood.

It is important to note that while quantitative research methods (or positivist philosophies) and qualitative methods (or post-positivist philosophies) are often seen as opposing and polarised views, they are frequently used in conjunction. The distinction between the philosophies is overstated (Webb 1989) and triangulation of methods in current day research is common (Polit et al 2001). It is very important, therefore, that an in-depth understanding of the strengths and weaknesses of both approaches and their underlying philosophy is obtained. Clarke (1998) emphasises this point:

‘Though some distinction between methods is well placed … it is being acknowledged that philosophically the qualitative and quantitative paradigms are not as diverse or mutually incompatible as often conveyed. Staunch identification of methods with particular paradigms may not be as accurate, or even as useful, an endeavour as past trends would indicate’.

The nature of positivism

What could be described as the traditional scientific approach to research has its underpinnings in positivist philosophy. From the literature it is clear that positivism can be defined in various ways. Smith (1998) provides a useful insight into positivist thinking within social sciences with this description: ‘Positivist approaches to the social sciences . . . assume things can be studied as hard facts and the relationship between these facts can be established as scientific laws. For positivists, such laws have the status of truth and social objects can be studied in much the same way as natural objects’.

The ideas associated with positivism have been developed and challenged, stated, re-examined and re-stated over time. Outhwaite (1987) suggests that there are three distinct generations of positivist philosophy. These generations follow on from the period generally known as the ‘Enlightenment’, which allowed the contemplation of social life to break away from religious interpretations and establish human beings as the main protagonists in the development and accumulation of scientific knowledge. The first generation produced philosophers such as Locke, Hume and Comte, who were associated with the early traditions of positivism established in the 18th and 19th centuries (Comte 1853, Hume 1748). The second generation was logical positivism, associated with philosophers of the early 20th century collectively known as the Vienna Circle (Ayer 1936, Carnap 1932). The third generation, commonly associated with Carl Hempel (1965), developed in the post-war period.

The basic reasoning of positivism assumes that an objective reality exists which is independent of human behaviour and is therefore not a creation of the human mind. Auguste Comte (1853) suggests that all real knowledge should be derived from human observation of objective reality. The senses are used to accumulate data that are objective, discernible and measurable; anything else should be rejected as transcendental. The positivists’ antipathy to metaphysics within scientific enquiry is well illustrated by David Hume:

‘If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion’ (Hume 1748/1984).

The importance of induction and verification, and the establishment of laws, are stressed by logical positivists, and in this respect they differ from the earlier tradition of positivism. The stated aim of the logical positivists is to cleanse scientific knowledge of speculative and subjective viewpoints. They endeavour to do this by the use of mathematics and formal logic (as a branch of mathematics) to provide analytical statements about the observed world, using the process of induction as a means of establishing generalisations and laws. Post-Second World War standard positivists such as Hempel (1965) focused on the need for reasoning that moves from theoretical ideas, or a set of given premises, to a logical conclusion through deductive thinking. That is, through the mental process of developing specific predictions from general principles, and through research establishing whether or not the predictions are valid.

The general elements of positivist philosophy have a number of implications for social research based on this approach. These implications, adapted from Bond (1989), Easterby-Smith et al (1997), and Hughes (1994) are:

* Methodological: all research should be quantitative, and only research that is quantitative can be the basis for valid generalisations and laws

* Value-freedom: the choice of what to study, and how to study it, should be determined by objective criteria rather than by human beliefs and interests

* Causality: the aim should be to identify causal explanations and fundamental laws that explain human behaviour

* Operationalisation: concepts need to be operationalised in a way that enables facts to be measured quantitatively

* Independence: the role of the researcher is independent of the subject under examination

* Reductionism: problems are better understood if they are reduced to the simplest possible elements.

A major criticism of the positivist approach is that it does not provide the means to examine human beings and their behaviours in an in-depth way. Ayer (1969) questions the use of positivist and empirical approaches to the study of human behaviour, and suggests that it may be something about the ‘nature of men’ that makes the establishment of laws and ability to generalise impossible. Parahoo (1997) provides the following example:

‘In physics, it is possible …to formulate laws relating to… the expansion of metal when heated. From such laws, the amount of expansion that will occur in particular circumstances can be predicted. However, when a man loses his job and becomes depressed, it does not mean that he will be depressed each time he loses his job, nor can we say that everyone who loses his job becomes depressed’ (Parahoo 1997).

Humans are not ‘objects’, and are subject to many influences on behaviour, feelings, perceptions, and attitudes that positivists would reject as irrelevant and belonging to the realms of metaphysics. Critics of the positivist approach argue that it yields useful but limited data that only provide a superficial view of the phenomenon it investigates (Bond 1993, Moccia 1988, Payle 1995).

In summary, the positivist philosophy embraces a conception of truth in which verifiable statements concur with the ascertainable facts of reality. Truth is therefore not dependent on belief alone but on belief that can be verified through examination and observation of external reality. Speculation and assumptions related to knowledge based on the metaphysical are discarded. The exploration and examination of human behaviours such as feelings are beyond the scope of positivism. The elements and focus of positivism have a profound effect on those involved in social research, and on the continuing quantitative-qualitative debate.

Post-positivism

Following the recognition by scholars such as Jacob Bronowski (1956) and Karl Popper (1959) that within the world of modern science the elementary justifications of positivism were no longer entirely defensible, a new philosophy emerged, that of post-positivism. Post-positivism provides an alternative to the traditions and foundations of positivism for conducting disciplined inquiry. For the post-positivist researcher, reality is not a rigid thing; instead, it is a creation of the individuals involved in the research. Reality does not exist within a vacuum, its composition is influenced by its context, and many constructions of reality are therefore possible (Hughes 1994). Proctor (1998) suggests that among the various factors that influence the construction of reality, culture, gender, and cultural beliefs are the most significant. Post-positivists recognise the intricate relationship between individual behaviour, attitudes, external structures, and socio-cultural issues. It follows then that objective reality as proposed by positivist philosophy can be seen as only one aspect or dimension of reality. In describing the nature of post-positivist philosophy, Forbes et al (1999) suggest that post-positivism is concerned with establishing and searching for a ‘warranted assertibility’, that is, evidence that is valid and sound proof for the existence of phenomena (Philips 1990). This is in contrast to the positivist approach of making claims to absolute truth through the establishment of generalisations and laws. Popper (1959) questioned the positivist claims to truth and scientific knowledge through the process of induction. As Doyal (1993), a student and colleague of Popper, explains: ‘Popper argued that certainty or even high probability in knowledge was an illusion because given the universal claims of scientific theories we can never prove them on the basis of our particular experiences. There may always be some potential observation or experiment that might demonstrate that what we had previously thought to be true was, in fact false’ (Doyal 1993).

For Popper, falsification, that is, the disproving of theories and laws, was much more useful than verification, as it provided more purposeful research questions and practices (Easterby-Smith et al 1997). The ideas of ‘truth’ and ‘evidence’ are allied mainly to positivist philosophy. The debate, which centres on verification and falsification, fits well within the positivist view. However, there are lessons for the researcher adopting a post-positivist approach. Popper (1969) asks the researcher to be intentionally critical, to test ideas against the evidence to the limit and to avoid being dictatorial in research. Smith (1998) suggests that falsification is as much an attitude to research as a set of methodological procedures. While post-positivism continued to consider the metaphysical as being beyond the scope of science, it was increasingly accepted by post-positivists that although a real world driven by natural causes exists, it is impossible for humans to truly perceive it with their imperfect sensory and mental capacity. From a realist standpoint it is advocated that unobservable phenomena have existence and that they can be used to explain the functioning of observable phenomena (Guba 1990, Schumacher and Gortner 1992). According to Letourneau and Allen (1999), post-positivist approaches ‘give way’ to both qualitative and quantitative methods. This is described as critical multiplism (Guba and Lincoln 1998). Critical implies that, as in positivism, the need for rigour, precision, logical reasoning and attention to evidence is required, but unlike positivism, this is not confined to what can be physically observed. Multiplism refers to the fact that research can generally be approached from several perspectives. Multiple perspectives can be used to define research goals, to choose research questions, methods, and analyses, and to interpret results (Cook 1985).

The limitations of post-positivist approaches generally relate to the interactive and participatory nature of qualitative methods. Parahoo (1997) suggests that this is the main weakness and is due to the proximity of the researcher to the investigation. Mays and Pope (1995) summarise the main criticisms as:

‘Firstly, that qualitative research is merely an assembly of anecdote and personal impressions, strongly subject to researcher bias; secondly, it is argued that qualitative research lacks reproducibility – the research is so personal to the researcher that there is no guarantee that a different researcher would not come to radically different conclusions; and, finally, qualitative research is criticised for lacking generalisability’.

In summary, post-positivist approaches assume that reality is multiple, subjective, and mentally constructed by individuals. The use of flexible and multiple methods is desirable as a way of studying a small sample in depth over time that can establish warranted assertibility as opposed to absolute truth. The researcher interacts with those being researched, and findings are the outcome of this interactive process with a focus on meaning and understanding the situation or phenomenon under examination.

Conclusion

This paper has provided a descriptive analysis of the philosophies of positivism and post-positivist thinking in relation to research methodology, and has identified the main elements of both approaches. Positivism adopts a clear quantitative approach to investigating phenomena as opposed to post-positivist approaches, which aim to describe and explore in depth phenomena from a qualitative perspective. As already stated, while quantitative and qualitative research methods are often seen as opposing and polarised views, they are frequently used in conjunction with one another. According to some scholars the distinction between the philosophies is overstated (Webb 1989) and triangulation of methods in current day research is common (Polit et al 2001). It is very important, therefore, that an in-depth understanding of the strengths and weaknesses of both approaches and their underlying philosophy is obtained.

references

Ayer AJ (1936/1990) Language, Truth and Logic. Harmondsworth, Penguin.

Bronowski J (1956) The Common Sense of Science, London, Pelican.

Bond S (1993) Experimental research nursing: necessary but not sufficient. In: Kitson A (Ed) Nursing. Art and Science. London, Chapman and Hall.

Carnap R (1932/1995) The Unity of Science. Bristol, Thoemmes Press.

Clarke AM (1998) The qualitative-quantitative debate: moving from positivism and confrontation to post-positivism and reconciliation. Journal of Advanced Nursing. 27, 6, 1242-1249.

Comte A (1853/1971) The positive philosophy. In: Thompson K and Tunstall J (Eds) Sociological Perspectives. Harmondsworth, Penguin.

Cook T (1985) Postpositivist critical multiplism. In: Shortland R and Mark M (Eds) Social Sciences and Social Policy. London, Sage.

Denzin NK, Lincoln YS (1998) The Landscape of Qualitative Research. London, Sage.

Doyal L (1993) Discovering knowledge in a world of relationships. In: Kitson A (Ed) Nursing. Art and Science. London, Chapman and Hall.

Easterby-Smith M et al (1997) Management Research: an Introduction. London, Sage.

Forbes DA et al (1999) Warrantable evidence in nursing science. Journal of Advanced Nursing. 29, 2, 373-379.

Guba EG (1990) The alternative paradigm. In: Guba EG (Ed) The Paradigm Dialog. Newbury Park, Sage.

Guba EG, Lincoln YS (1998) Competing paradigms in qualitative research. In: Denzin NK, Lincoln YS (Eds) The Landscape of Qualitative Research. London, Sage.

Hempel CG (1965) Aspects of Scientific Explanation. New York, Free Press.

Hughes J (1994) The Philosophy of Social Research. Essex, Longman.

Hume D (1748/1984) Enquiry Concerning Human Understanding. New York, Bobbs-Merrill Co.

Letourneau N, Allen M (1999) Post-positivistic critical multiplism: a beginning dialogue. Journal of Advanced Nursing. 30, 3, 623-630.

Mays N, Pope C (1995) Researching the parts that other methods cannot reach: an introduction to qualitative methods in health and health services research. British Medical Journal. 311, 42-45.

Moccia P (1988) A critique of compromise: beyond the methods debate. Advances in Nursing Science. 10, 4, 1-9.

Outhwaite W (1987) New Philosophies of Social Science: Realism, Hermeneutics and Critical Theory. Basingstoke, Macmillan.

Parahoo AK (1997) Nursing Research, Principles, Process, and Issues. London, Macmillan.

Payle JF (1995) Humanism and positivism in nursing. Journal of Advanced Nursing. 22, 979-984.

Philips D (1990) Post-positivistic science myths and realities. In Guba E (Ed) The Paradigm Dialog. Newbury Park, Sage.

Polit DF et al (2001) Essentials of Nursing Research: Methods, Appraisal and Utilisation. Philadelphia, Lippincott.

Popper K (1959) The Logic of Scientific Discovery. London, Hutchinson.

Proctor S (1998) Linking philosophy and method in the research process: the case for realism. Nurse Researcher. 5, 4, 73-90.

Schumacher KL, Gortner SR (1992) (Mis)conceptions and reconceptions about traditional science. Advances in Nursing Science. 14, 4, 1-11.

Shih FJ (1998) Triangulation in nursing research: issues of conceptual clarity and purpose. Journal of Advanced Nursing. 28, 3, 631-641.

Smith MJ (1998) Social Science in Question. London, Sage.

Webb C (1989) Action research: philosophy, methods and personal experience. In: Kitson A (Ed) Nursing. Art and Science. London, Chapman and Hall.

Frank Crossan MN, BA, DipN, RGN, School Director, Planning and Operations, School of Nursing, Midwifery and Community Health, Glasgow Caledonian University, UK

Copyright RCN Publishing Company Ltd. 2003

Advantages of confidence intervals in clinical research

Clinical research often uses techniques of statistical inference to determine the ‘statistical significance’ of the results, that is, how likely it is that the results obtained by the research methods may be ascribed to chance, rather than some effect of an intervention. Similar methods are also used in epidemiologic studies of risk factors to identify associations between exposure to the risk factor(s) and a disease or other health-related outcome. Even in descriptive studies, statistical inference may be used to identify differences between subgroups in the population that is being described. Inferences are often drawn, and statistical significance established, by using a method known as hypothesis testing. A research hypothesis is established, usually stating that there is some difference between or among the groups studied, and the observed data are analyzed in order to decide whether to accept or reject the corresponding null hypothesis of no difference. Hypothesis testing has become de rigueur in clinical research, but its value as a primary means of analysis has been questioned.1-4 The purpose of this paper is to describe confidence intervals (CIs) as a statistical tool in clinical research and explain their utility as an alternative to hypothesis testing.

CONVENTIONAL USES OF HYPOTHESIS TESTING

Hypothesis testing results in a yes-no decision regarding the authenticity of the research findings. The decision to accept or reject the null hypothesis is based on a statistical test that yields a probability (the p-value) that the observed results are attributable to chance, in other words, random variation. The calculated p-value is compared to a probability value, alpha, which is traditionally set at .05. If the calculated p-value is less than alpha, the null hypothesis is rejected in favor of the alternative hypothesis. The value of alpha represents the probability of a type I error, which occurs when the null hypothesis is rejected when it is true. A ‘nonsignificant’ or ‘negative’ finding is interpreted to mean that any difference or association that may have been observed is not a true difference because it can be attributed to random variation of the measure in the population.

Several approaches to reporting the results of hypothesis testing have been used in the literature. The most basic and least informative approach is to merely report that the results are significant or nonsignificant based on the predetermined alpha level. A more informative approach is to report that the results are significant at or below some alpha value, for example, p < .05 or p < .01.

Limitations of Hypothesis Testing

Hypothesis testing as a method of determining significance in research has its origins in the agricultural work of R. A. Fisher in the 1920s.4,5 Fisher originated the methods of randomized assignment and of using the p-value to establish significance, arbitrarily selecting the probability of .05 as a threshold because he felt that it was ‘convenient.’4 Since that time, Fisher’s method of hypothesis testing using an alpha value of .05 has become entrenched in scientific literature, despite the feeling of many authorities, including Fisher himself, that this inflexible practice is not warranted.4

The limits of hypothesis testing, as it has been used throughout much of the clinical research literature, become evident when issues like sample size, statistical power, and effect size are considered. Power is related to the value of beta (power = 1 – beta), which is the probability of a type II error (defined as acceptance of a null hypothesis which is false). For example, a study with a .10 (or 10%) probability of type II error has a power of .90 (or 90%). Effect size is related to the difference in a measure that is deemed to be clinically significant.6 Each of these concepts is crucial to the understanding of research literature because they, too, are represented as terms in the equations that are used to calculate the numbers that allow us to make judgments about research findings. Using hypothesis testing to determine a study’s significance relative to alpha conceals the fact that the value of alpha is balanced by power, sample size, and effect size in any given investigation. These issues typically are considered prior to beginning a study, when an investigator must specify alpha, power, and some minimum clinically significant difference in a measure in order to calculate an adequate sample size.
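As a rough illustration of how these quantities trade off, the following sketch (not taken from the article) computes an approximate per-group sample size for a two-group comparison of means using the standard normal-approximation formula; the alpha, power, and effect-size values are arbitrary examples.

```python
# A rough sketch of how alpha, power, and effect size jointly determine sample
# size, using the standard normal-approximation formula for comparing two
# group means:
#     n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
# where d is the standardized effect size. All inputs are illustrative.
import math
from scipy.stats import norm


def n_per_group(alpha: float, power: float, effect_size_d: float) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_beta = norm.ppf(power)           # power = 1 - beta
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2)


# Detecting a medium standardized difference (d = 0.5) at alpha = .05 with
# 90% power needs roughly 85 participants per group; halving the effect size
# to d = 0.25 roughly quadruples that requirement.
print(n_per_group(alpha=0.05, power=0.90, effect_size_d=0.5))
print(n_per_group(alpha=0.05, power=0.90, effect_size_d=0.25))
```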

Often, clinical researchers do not have the luxury of obtaining large sample sizes to ensure adequate power to detect small but clinically important differences. Studies with small sample sizes (small-n studies) may result in ‘negative’ findings based on hypothesis testing, yet these findings may have clinical significance.7 The negative findings of small-n studies may be misleading when hypothesis testing methods are rigidly applied, causing us to ignore potentially useful interventions.2 Clearly, there is a need for an alternative to hypothesis testing that permits a broader and more flexible interpretation of research findings, and in which the nuances of a study’s findings are not obscured by a binary decision regarding significance.

CONFIDENCE INTERVALS

An alternative approach is available in the use of confidence intervals (CIs). A CI is a range of values, calculated from the observed data, which is likely to contain the true value at a specified probability. The probability is chosen by the investigator(s), and it is equal to 1 – alpha. Thus for an investigation that uses an alpha of .05, the corresponding interval will be a 95% CI. Confidence intervals provide information that may be used to test hypotheses, as well as additional information related to precision, power, sample size, and effect size.

Methods for calculating CIs vary according to the type of measure (mean, difference between rates, odds ratio, etc.) around which the CIs are constructed. It is beyond the scope of this article to specify formulas for calculating CIs. Interested readers may find these methods elsewhere.1,3,6,8 In general however, the interval is computed by adding and subtracting some quantity from the point estimate, which is the value of the target measure that is calculated from the data. Calculation of this quantity requires at a minimum the standard error, or a related measure, and a value related to alpha, such as a t- or Z-statistic.1,3,6
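Although the article deliberately leaves the formulas to the cited sources, a minimal sketch of the general "point estimate plus or minus a multiple of the standard error" construction, shown here for a 95% CI around a sample mean with made-up data, may help fix the idea.

```python
# A minimal sketch of the "point estimate +/- (critical value * standard error)"
# construction for a 95% CI around a sample mean. The data are made-up
# illustrative values, not figures from any study cited in this article.
import numpy as np
from scipy import stats

walk_distance_ft = np.array([1650, 1720, 1805, 1690, 1760, 1840, 1700, 1780])

n = walk_distance_ft.size
mean = walk_distance_ft.mean()
sem = stats.sem(walk_distance_ft)       # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)   # t-statistic for alpha = .05, two-sided

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"mean = {mean:.0f} ft, 95% CI = {lower:.0f} to {upper:.0f} ft")
```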

A CI may be constructed around a point estimate of a continuous variable such as a mean. For example, Berry and colleagues measured six-minute walk distance in a randomized clinical trial of long-term versus short-term exercise in participants with chronic obstructive pulmonary disease. At the end of the trial, participants who were involved in long-term (18 months) exercise walked a mean distance of 1,815 feet, with a 95% CI of 1,750 – 1,880 feet, and participants in the short-term (3 months) program walked a mean distance of 1,711 feet, with a 95% CI of 1,640 – 1,782 feet.9 The interpretation of the CIs is that the data are consistent with a 95% probability that the true mean falls between 1,750 and 1,880 feet for the long-term exercise group, while in the short-term exercise group the true mean falls between 1,640 and 1,782 feet.

Confidence intervals also may be constructed around a point estimate representing a categorical variable, such as the proportion of individuals who respond favorably to an intervention, and around epidemiologic measures of effect such as a relative risk or odds ratio. For example, Pereira and associates10 studied health outcomes in women 10 years after an exercise (walking) intervention. They calculated a relative risk of 0.18 (95% CI = 0.04-0.80) for heart disease in women who participated in the intervention, indicating a strong protective association.10 In this case, the CI indicates that there is a 95% probability that the true relative risk is somewhere between 0.04 and 0.80.
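For ratio measures such as the relative risk, the interval is usually built on the log scale and then exponentiated. The sketch below applies that standard construction to a hypothetical 2x2 table; the counts are invented for illustration and are not the data behind the walking-trial estimate quoted above.

```python
# A sketch of the usual log-scale construction of a 95% CI around a relative
# risk. The 2x2 counts are hypothetical and are NOT the data behind the
# walking-trial estimate quoted above; they merely yield a similar result.
import math
from scipy.stats import norm

a, n1 = 2, 100   # hypothetical: 2 heart-disease events among 100 in the intervention group
c, n0 = 11, 100  # hypothetical: 11 events among 100 controls

rr = (a / n1) / (c / n0)
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
z = norm.ppf(0.975)

lower = math.exp(math.log(rr) - z * se_log_rr)
upper = math.exp(math.log(rr) + z * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI = {lower:.2f} to {upper:.2f}")
print("CI includes the null value 1:", lower <= 1.0 <= upper)
```

The final check anticipates the next section: a ratio CI that excludes 1 corresponds to a statistically significant result, while one that straddles 1 does not.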

CONFIDENCE INTERVALS AND HYPOTHESIS TESTING

Although CIs may be used for hypothesis testing of group differences in continuously measured variables, in practice this is rarely done. More commonly, CIs are used to test hypotheses involving proportions and ratio measures of effect. When considering a 95% CI around a relative risk, an investigator notes whether the CI includes the null value of the ratio, which for a relative risk is one. A CI that includes the null value is equivalent to a p-value that exceeds the specified value of alpha. For example, in the investigation of long-term outcomes following the walking intervention cited above, the relative risk for high blood pressure was 0.90 (95% CI = 0.47-1.74).10 Because this CI includes the null value of 1, a hypothesis test would accept the null hypothesis of no difference in high blood pressure risk between intervention and control groups. In contrast, the association between the walking intervention and heart disease (relative risk = 0.18, 95% CI = 0.04-0.80) is interpreted as statistically significant because the CI does not contain the null value for relative risk. For this association, the null hypothesis of no difference in risk would be rejected in favor of the alternative hypothesis that the intervention protects against heart disease.

Confidence Intervals: Beyond Hypothesis Testing

To construe CIs as merely a different way to test hypotheses, however, would ignore other important information conveyed. A CI informs the investigator and the reader about the power of the study and whether or not the data are compatible with a clinically significant treatment effect. The width of the CI is an indication of the precision of the point estimate – a narrower CI indicates a more precise estimate, while a wide CI indicates a less precise estimate. Precision is related to sample size and power such that the larger the sample size, and the greater the power, the more precise will be the estimate of the measure.8,11 Assessing the width of the CI is particularly useful in studies with small sample sizes. In small-n studies with ‘negative’ findings, where hypothesis testing fails to find statistically significant treatment effects or associations, point estimates with wide CIs that include the null value may be consistent with clinically significant findings.8 This is because hypothesis testing alone fails to account for statistical power and sample size. Since power is equal to 1 – beta, it follows that studies with small sample sizes, and low statistical power, have a higher probability of failing to identify true treatment effects or associations (a type II error). As mentioned previously, type II errors can have adverse consequences in clinical research, particularly where large sample sizes are simply not feasible.2
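The link between sample size and precision can be seen directly in the half-width of the interval, which for a mean is approximately z multiplied by SD over the square root of n. The short sketch below, using an assumed standard deviation, shows the 95% interval narrowing as n grows.

```python
# Illustrative only: for a fixed standard deviation, the half-width of a 95% CI
# around a mean is z * sd / sqrt(n), so quadrupling the sample size roughly
# halves the interval's width (i.e., doubles the precision of the estimate).
import math
from scipy.stats import norm

sd = 200.0              # assumed standard deviation, in feet
z = norm.ppf(0.975)     # ~1.96 for a 95% interval

for n in (25, 100, 400):
    half_width = z * sd / math.sqrt(n)
    print(f"n = {n:3d}: 95% CI = point estimate +/- {half_width:.0f} ft")
```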

The lower limit of a CI, which is the limit closest to the null value, is typically used for hypothesis testing. The higher limit, the limit furthest from the null value, can be used to indicate whether or not a treatment effect or association is compatible with the data.11 In any investigation, the true value of the variable under study is unknown, but it is estimated by the data. A confidence interval around the point estimate indicates a range of credible values of the variable that is consistent with the observed data. If the interval contains the value of a variable that corresponds to a clinically significant treatment effect or association, the study has not ruled out that such an effect exists, even if the finding ‘failed’ a hypothesis test. The evidence for a treatment effect or association may not be conclusive, but the finding need not be rejected unequivocally as the logic of hypothesis testing demands. Further study using larger sample sizes or meta-analysis may reveal a positive effect. This approach to analysis is more accommodating to thoughtful, yet judicious interpretation, allowing authors and readers to reflect on the nuances of the data as they consider the meaning of a study.

For example, White et al12 examined outcomes in patients with severe chronic obstructive pulmonary disease who participated in pulmonary rehabilitation or received advice and recommendations about exercise. They found no statistically significant differences in quality of life outcomes between the 2 groups, but the confidence intervals around these measures allowed them to suggest that some of their findings approached clinically significant differences. These authors acknowledge that recruitment difficulties lowered the sample sizes they were able to obtain, hence lowering the power of their study to detect statistically significant differences.12 This study illustrates some of the difficulties in conducting and interpreting research in populations with rare or severe conditions, and how the use of confidence intervals can assist in the interpretation of otherwise negative findings.

Confidence intervals also provide a more appropriate means of analysis for studies that seek to describe or explain, rather than to make decisions about treatment efficacy. The logic of hypothesis testing uses a decision-making mode of thinking which is more suitable to randomized, controlled trials (RCTs) of health care interventions. Indeed, hypothesis testing to determine statistical significance was initially intended to be used only in randomized experiments5 such as RCTs which are typically not feasible in clinical research involving identification of risk factors, etiology, clinical diagnosis, or prognosis.13 Use of CIs permits hypothesis testing, if warranted, but it also allows a more flexible approach to analysis that accounts for the objectives of each investigation in its proper context.

SOME CAVEATS IN THE INTERPRETATION OF RESULTS

As we consider the relative utility of hypothesis testing and CIs in the interpretation of research studies, it is important to appreciate the limits inherent in any statistical analysis of data. A fundamental assumption in analysis is that measurements are without bias, ie, measurement error or misclassification that is systematic or nonrandom. Neither hypothesis testing nor the use of CIs can correct for bias, which may lead to erroneous conclusions based on the observed data. Readers are also reminded that determination of statistical significance does not imply that results are clinically significant. Because of the interrelationship of alpha, power, effect size, and sample size, studies with large sample sizes may produce statistically significant results, even if a difference between groups (effect size) or an association is small.8 Determination of clinical significance requires additional interpretation based on clinical experience and prior literature.

CONCLUSION

Confidence intervals permit a more flexible and nuanced approach to analysis of research data. Not only do CIs enable investigators to test hypotheses about their data, they are also more informative about such important features as sample size and the precision of point estimates of group differences and associations. Confidence intervals also are useful in the interpretation of studies with small sample sizes, allowing researchers and consumers of scientific literature to draw more meaningful conclusions about the clinical significance of such studies. Increased use of CIs by researchers and journal editors along with improved understanding of CIs on the part of clinicians will help us avoid unnecessarily rigid interpretation of clinical research as we move toward evidence- based practice.

REFERENCES

1. Sim J, Reid N. Statistical inference by confidence intervals: Issues of interpretation and utilization. Phys Ther. 1999;79:186-195.

2. Ottenbacher KJ, Barrett KA. Statistical conclusion validity of rehabilitation research. Am J Phys Med Rehabil. 1990;69:102-107.

3. Simon R. Confidence intervals for reporting results of clinical trials. Ann Intern Med. 1986;105:429-435.

4. Feinstein AR. P-values and confidence intervals: Two sides of the same unsatisfactory coin. J Clin Epidemiol. 1998;51:355-360.

5. Salsburg D. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. New York, NY: WH Freeman and Co.; 2001.

6. Portney LG, Watkins MP. Foundations of Clinical Research: Applications to Practice. 2nd ed. Upper Saddle River, NJ: Prentice Hall Health; 2000.

7. Goodman SN, Berlin JA. The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results. Ann Intern Med. 1994;121:200-206.

8. Hennekens CH, Buring JE. Epidemiology in Medicine. 1st ed. Boston, Mass: Little, Brown and Company; 1987.

9. Berry MJ, Rejeski WJ, Adair NE, et al. A randomized controlled trial comparing long-term and short-term exercise in patients with chronic obstructive pulmonary disease. J Cardiopulm Rehabil. 2003;23:60-68.

10. Pereira MA, Kriska AM, Day RD, et al. A randomized walking trial in postmenopausal women: Effects on physical activity and health 10 years later. Arch Intern Med. 1998;158:1695-1701.

11. Smith AH, Bates MN. Confidence limit analysis should replace power calculations in the interpretation of epidemiologic studies. Epidemiology. 1992;3:449-452.

12. White RJ, Rudkin ST, Harrison ST, et al. Pulmonary rehabilitation compared with brief advice given for severe chronic obstructive pulmonary disease. J Cardiopulm Rehabil. 2002;22:338-344.

13. Feinstein AR, Horwitz RI. Problems in the “evidence” of evidence-based medicine. Am J Med. 1997;103:529-535.

Gary Brooks, PT, DrPH, CCS

Associate Professor, School of Health Professions, Grand Valley State University and Research Associate, Grand Rapids Medical Education and Research Center, Grand Rapids, MI

Address correspondence to: Gary Brooks, PT, DrPH, CCS, Grand Rapids Medical Education and Research Center, 1000 Monroe Ave NW, Grand Rapids, MI 49503 ([email protected]).

Copyright Cardiopulmonary Physical Therapy Journal Sep 2003

What does education really do?

Educational Dimensions and Pseudoscience Support in the American General Public, 1979-2001

A study using national survey data over twenty-three years and examining four pseudoscience topics untangles some seeming conundrums about the relationship between education levels and belief in pseudoscience. It identifies more precisely just which aspects of education influence pseudoscience beliefs.

SKEPTICAL INQUIRER readers need no introduction to the perils of pseudoscience. Here, we define pseudoscience beliefs as cognitions about material phenomena that claim to be “science,” yet use nonscientific evidentiary processes. Rather than control groups or hypothesis testing, pseudoscience practitioners employ authoritative assertion (such as scripture), anecdote (ersatz “cures”), compelling stories (alien abduction), or unelaborated “natural” causes (planet positions). Although following one’s horoscope or using lucky numbers to choose a lottery ticket can be fun, pseudoscience is also rife with ineffective or hazardous untested “cures,” costly psychic hotlines, or a fatalistic reliance on luck. Thus, factors promoting support or rejection of pseudoscience have received serious study.

Of these, education has been studied the most. A major expected consequence of formal education, especially college, is intellectual sophistication, both in factual knowledge and skillful evaluation of information. Yet pseudoscience belief relates to level of formal education or degree attainment (the most typical uses of the term “education”) in complex and inconsistent ways. Ray Eve and his colleagues (1995) found that New Age devotees can be quite well educated. When he reviewed national surveys, Erich Goode (2002) found that traditional ersatz science beliefs, especially those about Biblical creationism, declined with formal education, although other beliefs, such as time travel or alien visitation, did not consistently do so. Even science knowledge among undergraduates poorly predicted “more modern” pseudoscience beliefs.

We untangle some of these conundrums by refining what it is that educational level does to influence pseudoscience acceptance. Using more than two decades of representative national surveys of American adults, we examine four different pseudoscience topics and several educational dimensions. In the process, we identify more precisely just which aspects of education affect pseudoscience belief.

The Trouble with “Educational Level”

Confusion about how education affects pseudoscience belief occurs partly because the concept of educational level encompasses more dimensions than knowledge or skill attainment, the typical foci in studies of science literacy and pseudoscience support. When everyone studied is a college undergraduate, of course, educational level hardly varies. Among non-student adults, educational level may represent social class or relate to other factors such as age or gender, which could also influence acceptance of pseudoscience. Adults with higher levels of education have had more exposure to science courses and may be more favorable toward science.

Pseudoscience belief may be affected by these related factors as well as by pedagogical experiences. Goode (2002), for example, identifies traditional religiosity as influencing susceptibility to Biblical creationism appeals; most American researchers also report that women and the less educated are more traditionally religious. Partly because those born after World War II are more likely to have attended college than those born earlier, and partly because different generations have unique experiences, in any one year age could predict pseudoscience belief. Adults maturing in the 1960s became familiar with space exploration; teens in the 1990s saw animal cloning and recombinant DNA become reality. Age intertwines with generation, and so time itself should be considered. Moreover, America’s average educational level rose over time; science may be taught differently now, owing to studies such as the Third International Mathematics and Science Study (TIMSS) and endeavors such as the American Association for the Advancement of Science’s Project 2061; and specific ersatz science beliefs may rise or fall in popularity. Thus, the effects of educational level on pseudoscience belief may decrease when factors such as time, age, and gender are controlled.

Second, something directly related to educational experience could be important. Individuals who have elected more science courses may more easily distinguish between science and pseudoscience. College science majors may distinguish more rigorously than those trained in other fields. Finally, intellectual “products” of education, such as basic knowledge, may affect pseudoscience support, especially for “traditional” ersatz science such as astrology. Traditional religiosity generally declines with education level, while appreciation of science rises.

It is important to disentangle what it is about education that affects these beliefs. Are educational level effects simply statistical artifacts that occur because educational correlates independently influence belief? If so, we should pay more attention to the factors that act as a bulwark against a “demon-haunted world.” How influential are aspects of the educational experience, such as major field of study? And what roles do intellectual products of education, such as attitudes, play?

The Surveys of Public Attitudes Toward Science and Technology

To address issues about education level and pseudoscience support, we turned to the National Science Foundation Surveys of Public Understanding of Science and Technology 1979-2001 (Miller and Kimmel 1999), an archive of probability sample surveys of the American general public. This series comprises 21,965 interviews with adults at least age eighteen. Respondents for the year 1979 were interviewed in person, those thereafter in random-digit dial telephone surveys.

Figure 1. Pseudoscience beliefs over time.

The surveys assess science and technology interest, knowledge and acceptance of science, and science-related activities. Although specific items vary across time, collectively they include more material about pseudoscience belief than any other national survey. The archive also contains a wealth of educational detail: not only level, but major field and exposure to science courses. Unlike Goode, unfortunately, we lack direct questions about an individual’s religion. We do use an item tapping traditional religiosity in which respondents indicated how much they agreed that “We depend too much on science and not enough on faith.”

Our Basic Science Knowledge Score sums ten items asked from 1988 to 2001, the longest time series in the archive. The items address “basic topics” that students encounter in grade school and review during middle school (Earth’s center is hot; oxygen comes from plants; smoking causes lung cancer; electrons are smaller than atoms; continental drift; lasers focus sound waves; antibiotics kill viruses and bacteria; humans and dinosaurs coexisted; which travels faster, light or sound; whether Earth goes around the Sun or vice-versa). Counting each correct answer as “1,” total scores range from 0 to 10.
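For readers who want to see the scoring concretely, a minimal sketch follows; the item names and the 0/1 coding are our own illustrative shorthand, not the archive’s actual variable names or answer codes.

```python
# Hypothetical coding for one respondent: 1 = answered the item correctly,
# 0 = incorrect or 'don't know'. These names are illustrative only.
answers_correct = {
    "earth_center_hot": 1,
    "oxygen_comes_from_plants": 1,
    "smoking_causes_lung_cancer": 1,
    "electrons_smaller_than_atoms": 0,
    "continental_drift": 1,
    "lasers_focus_sound_waves": 0,
    "antibiotics_kill_viruses_and_bacteria": 1,
    "humans_and_dinosaurs_coexisted": 0,
    "light_travels_faster_than_sound": 1,
    "earth_goes_around_the_sun": 1,
}

# Basic Science Knowledge Score: one point per correct answer, range 0 to 10.
knowledge_score = sum(answers_correct.values())
print(knowledge_score)  # 7 for this made-up respondent
```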

The Science Beneficial Index averages up to three items with a score ranging from one (negative) to five (positive). The first item assesses whether an individual believes the benefits of science outweigh the risks. The other two assess agreement with the statements: science “makes life healthier and easier” and “science makes our way of life change too fast.”

We use four pseudoscience items with the longest time series. One item asked whether “human beings, as we know them today, developed from earlier species of animals.” An Astrology Index combined how often respondents read a horoscope (daily, quite often, occasionally, or almost never) with their perception of astrology as very, sort of, or not at all scientific. A third item asked how much the person agreed or disagreed that “some numbers are especially lucky for some people.” Finally, a true-false item read: “Some of the unidentified flying objects that have been reported are really space vehicles from other civilizations.”

With the exception of the modest correlation between the astrology and lucky numbers questions (r=0.31), the answers to these items were generally unrelated. The low across-item correlations suggest that the astrology and lucky numbers areas may tap traditional superstitions, but in general these four topical areas are distinct pseudoscience domains.

Pseudoscience Acceptance Over Time

Pseudoscience beliefs are popular among Americans. Over the total 1979 to 2001 period, 37 percent of the general public said that astrology is “very” or “sort of” scientific and nearly half admitted reading their horoscope at least occasionally (one in six “quite often” or daily). One third accepted lucky numbers and 30 percent felt that some UFOs were alien spacecraft. About half rejected the evolution statement.

The 2001 figures were comparable. Thirty percent felt astrology was at least “sort of” scientific, 44 percent read their horoscope at least occasionally (14 percent frequently), 28 percent agreed “some numbers are lucky,” and 29 percent agreed with the UFO item. Forty-seven percent rejected the evolution item. Sixty percent of Americans also agreed that “Some people possess psychic powers or ESP”; 89 percent agreed that “There are some good ways of treating sickness that medical science does not recognize”; and half said magnetic therapy was “sort of” or “very” scientific.

In figure 1, we show how these beliefs changed over time. We consider overall trends more significant than annual fluctuations, which, in any one year, could reflect sampling or other methodological vagaries. Astrology, lucky numbers, or UFO-logy acceptance decreased over time, particularly in the late 1980s. This decline agrees with other research from the mid to late 1980s, such as the General Social Surveys, showing that personal reports of paranormal experiences among the general public fell. In early 1988, Donald Regan described how First Lady Nancy Reagan consulted an astrologer, and imparted advice she received to the President. The White House consequently bore the brunt of extensive media ridicule. We suspect that for some time after that, members of the general public were more circumspect about agreeing with items that smacked of superstition. In contrast, support for evolution, barring two upward “blips,” remained level.

Figure 2. How degree level affects support for evolution (before and after adjusting for other predictor variables).

The apparent time changes in figure 1 help illustrate why multiple controls are important. Between 1979 and 2001, educational level itself increased; any apparent effects of time could reflect events such as the Regan exposé, or could instead be due to rising educational levels. We also found that older people were more negative, more often rejecting lucky numbers, astrology, space aliens, and evolution. Women were more positive toward astrology and more often rejected evolution (there were no sex differences on the lucky numbers or space aliens items).

Educational Level and Support for Evolution

Pseudoscience beliefs are unevenly distributed in the American adult population. Since the Statistical Abstract reports that one-third of adults had at least one year of college in 2000, we used the following four basic educational levels: a high school degree or less; a two-year college degree; a baccalaureate; and an advanced college degree. With no further controls, initially all four pseudoscience beliefs declined with higher degree levels. High school graduates were 50 percent more favorable toward astrology than graduate degree recipients. Seventy percent of advanced degree holders supported evolution, but only 41 percent of the high school educated did so. Forty percent of the high school educated endorsed lucky numbers, compared with 18 percent holding advanced degrees. Finally, one third of the high school educated subscribed to UFO-logy compared with less than one-quarter of advanced degree holders.

We used a procedure called Multiple Classification Analysis (MCA) to adjust how educational level affected pseudoscience support, controlling for gender, age, survey year, the number of science courses, elementary science knowledge, favorability toward science, and endorsement of “faith over science.” MCA allows us to examine the “net effects” of educational level, taking all the other factors into account. Typically, differences by degree level become much smaller once the other predictors are controlled than they appear when those predictors are ignored.
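MCA is closely related to ordinary regression with dummy-coded (categorical) predictors, and adjusted figures of this kind can be approximated that way. The sketch below uses synthetic data and invented variable names, not the NSF archive itself, simply to show the mechanics of producing “net” percentages by degree level.

```python
# A regression-based approximation to MCA-style "net effects" (a sketch, not the
# authors' code): regress the outcome on the predictors, then predict for each
# degree level while respondents keep their other characteristics, and average.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data (purely illustrative; the real archive has 21,965 cases).
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "degree": rng.choice(["HS or less", "2-year", "BA/BS", "Advanced"], n),
    "gender": rng.choice(["F", "M"], n),
    "age": rng.integers(18, 85, n),
    "year": rng.choice(list(range(1988, 2002)), n),
    "n_science_courses": rng.integers(0, 10, n),
    "knowledge_score": rng.integers(0, 11, n),
    "sci_beneficial": rng.uniform(1, 5, n),
    "faith_over_science": rng.integers(1, 5, n),
})
# Outcome loosely tied to the predictors so the example produces sensible output.
p = 0.3 + 0.03 * df["knowledge_score"] - 0.05 * df["faith_over_science"]
df["believes_evolution"] = rng.binomial(1, p.clip(0.05, 0.95))

model = smf.ols(
    "believes_evolution ~ C(degree) + C(gender) + age + C(year)"
    " + n_science_courses + knowledge_score + sci_beneficial + faith_over_science",
    data=df,
).fit()

# "Adjusted" support by degree level: predict for every respondent as if they held
# each degree in turn (keeping their other characteristics), then average.
adjusted = {lvl: model.predict(df.assign(degree=lvl)).mean()
            for lvl in sorted(df["degree"].unique())}
print(pd.Series(adjusted))  # compare with the unadjusted group means
```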

Figure 2 presents a “before and after” example using the evolution item. Initially, those with at least a baccalaureate were over 20 percent more likely to agree with this item than high school or junior college respondents. However, educational level differences shrank by half as soon as we considered correlates of education (e.g., age), aspects of the educational experience (e.g., exposure to science classes), and consequences of education (e.g., basic science knowledge). Adjusted evolution support increased among the high school educated, and decreased among college graduates. Over half of the original or “gross” difference by educational level in support for evolution was due, in fact, to factors other than educational level.

Similarly, when educational correlates, direct science exposure, or science knowledge and attitudes were also considered, the effects of educational level on astrology support dropped by two-thirds and those on lucky numbers dropped by three-quarters. Even the UFO-logy educational level differences dropped from 10 to seven percent after controls were instituted. We think it safe to say that, on the average, at least half of what are typically considered “educational level” differences in pseudoscience belief is due to variables other than simple educational exposure.

What’s Your Major?

How educational level affects different beliefs or attitudes is often explained by allusions to the college experience. But there are many aspects to the “college experience.” Colleges can be public or private, large or small, secular or sectarian, research universities, vocational schools, or liberal arts colleges. College major and exposure to science classes are the most straightforward dimensions of the college experience that should relate to pseudoscience belief.

Figure 3. Astrology index and percent supporting or accepting evolution, lucky numbers, and UFO aliens by educational level (after adjustments for controls).

However, we found major field only weakly influenced pseudoscience beliefs. Further, any initial effects of a specific major (e.g., life or physical science majors knew more basic facts, and accepted UFO-logy less, while business majors knew somewhat less, rejected evolution more, and supported astrology or alien spacecraft slightly more) vanished when we controlled for how many science courses an individual had taken.

On the other hand, the effects of electing science courses withstood controls for age, time, gender, religiosity, degree level, college major, general basic knowledge, and positive attitudes toward science. Adults with more science training more often rejected astrology or lucky numbers and more often accepted evolution. However, the number of science courses did not have net effects on the UFO item.

Intellectual “Products” of Formal Education

Finally, we considered how basic science knowledge and attitudes toward science, including the religiosity item, related to pseudoscience belief. Knowledge and attitudes had important net effects upon pseudoscience beliefs, although not always in expected ways. Those with higher science knowledge scores more often rejected astrology or lucky numbers and more often supported evolution. But they also slightly more often endorsed a UFO-alien connection. Individuals who were more positive about science were more positive toward evolution, and more often rejected astrology, lucky numbers, and UFO-logy.

Individuals who agreed with the “on faith” item were substantially more negative toward evolution. The net effects of this item on evolution rejection were about as strong as all the educational variables combined. It had the strongest net effect of any predictor we considered on evolution support. However, effects of the “on faith” item were specific to evolution; they did not increase our understanding of any other pseudoscience belief.

Educational Level: “Net Effects”

The net effects of degree level, controlling study year, age, gender, number of science courses, college major, basic science knowledge, and attitudes, are shown in figure 3. Even considering these factors, degree level continued to predict rejecting superstition or sci-fi fantasy, and accepting evolution. However, as noted earlier, the effects of degree level dropped by half to three-quarters when refinements in educational level measurement were made.

These results tell us it is essential to consider other factors such as age, gender, time, science coursework, and cognitions in assessing how education relates to pseudoscience belief. Otherwise, educational level effects are inaccurately inflated. There are additional factors we did not measure, but which also probably relate to both degree level and pseudoscience beliefs, most notably media exposure and type. An individual’s choices of these channels of “informal education” are no doubt influenced by the amount and type of education they receive. However, once selected, media exert influences of their own, whether through reruns of The X-Files or CNN.

Discussion

Many Americans accept pseudoscience beliefs. On the average and to some degree, one-third support astrology, lucky numbers, or space aliens, and nearly half reject evolution. Had the evolution item more explicitly addressed random selection, we suspect rejection would have been higher. Both scholars and laypersons believe more education will discourage at least some pseudoscience beliefs. To some extent, this is true.

As we systematically refined “educational experiences,” we found that those concerned about pseudoscience support must avoid simplistic, poorly defined notions of “college exposure.” For example, the particular ersatz science must be considered. Regardless of education, astrology support fell over time, perhaps because of the “Nancy Reagan effect,” when the media discovered the former First Lady consulted astrology reports in attempts to influence American policy. Some effects attributed to “educational level” almost certainly occur because the college educated have taken more science courses. However, even considering these other factors, as degree level rises, so, too, does pseudoscience rejection, although the net influence of degree level drops dramatically when measures of education are refined. Confounded factors, such as age or knowledge, must be controlled in any assessment of educational effects on pseudoscience belief.

We also cannot discount self-selection factors. Youth who value science or reject superstition disproportionately may attend college or choose science majors. Although educators cannot control who continues their schooling, they can influence how and what pupils are taught.

Some scholars, such as Morris Shamos (1995), declare that even well-educated citizens cannot understand scientific terms and constructs at a level sufficient to read a daily paper or magazine and to follow the competing arguments in disputes or controversies (definition from Jon D. Miller, 2000: 24). Instead, Shamos suggests that education should instill respect for “experts,” i.e., scientists. We suspect his assertion may be partly true, but for subtle reasons not typically considered in civic science literacy. Each year, science becomes more complex and technology more sophisticated. Comprehending nuances outside of one’s field can be difficult. Ironically, we suspect that scientific or technological progress actually contributes to pseudoscience acceptance. Consider the Victorian era: mediums and clairvoyants flourished among British and American upper and middle class adults, generally the most educated of their day. Many academics and laypersons supported Social Darwinism in the early twentieth century.

Were these Victorian beliefs really so farfetched, juxtaposed against the technological or scientific advances of the time, such as vaccination, the telegraph, telephones, automobiles, or early airplanes? Simultaneously, Darwinian theory threatened theological positions of humanity as “little lower than the angels” and Jules Verne wrote vividly of submarines and rocket ships. Compared with a steady stream of scientific and technical marvels and imaginative stories, the line between science and fiction in the eyes of the general public can begin to blur.

Our own era has seen space travel, in vitro fertilization, cloning, and the Internet. Next to these feats, alien visitation may seem more credible. Only more meticulous knowledge of the scientific and technological processes that make “speaking over wires” or air travel possible may allow adults to distinguish between what is viable and what is not. Thus, even among members of the general public with high levels of education, science discovery or applications may open the doors to pseudoscience speculation. We propose that the higher the societal level of scientific and technological achievement and the more seemingly miraculous the attainments, the greater the onus on our educational system to help produce citizens who can tell the difference between fact and fancy.

Although members of the general public cannot always follow a particular scientific argument or fully utilize a technology, many argue the public still should participate in scientific and technological development, sometimes in ethics, and other times in providing a citizen’s point of view. This perspective underscores the urgency for educators to help students learn to confront purveyors of pseudoscience. How can effectiveness in combating pseudoscience be increased? Some argue that primary and secondary schools must focus more on process than factual memorization, so that pupils better learn how to tell false science from true. We believe that when students learn effective ways to assess information, they are better prepared later in life, when they encounter information more informally through the media. Some scholars (Goode 2002, Martin 1994) suggest discussing pseudoscience topics during science classes. By talking about why people believe in ghosts or ESP, students can learn how scientific processes and evidence differ from those of pseudoscience. Preemptive arguments against pseudoscience assertions can then provide an inoculation process, one sorely needed to prepare enlightened citizens to participate in modern society.

In Great Falls, Montana, Libertarian Senate candidate Stan Jones turned blue from drinking a silver solution that he believed would protect him from disease. In 1999, Jones, a 63-year-old business consultant and part-time college instructor, began drinking colloidal silver because he feared that millennium disruptions might create an antibiotics shortage.

Adapted from the Associated Press, October 2, 2002: “Candidate’s Skin Blue After Drink!”


Note

This research was supported in part by grants from the National Science Foundation (#0139458) and the Association for Institutional Research (#03-212-SRS-0086139) awarded to Susan Carol Losh. The data were made available through site license from the NSF, and were collected under the direction of Dr. Jon D. Miller (1979-1999) and ORC Macro (2001). We want to thank Jon D. Miller, Linda Kimmel, Tom Duffy, and Seth Muzzy for providing information about the data, and Raymond Eve, Douglas Lee Eckberg, Mary Frank Fox, Erich B. Goode, Dan Kimel, Melissa Pollak, Terry Russell, Alice Robbin, and Justin Watson for their insights. Of course, the responsibility for any errors or misinterpretation of the results is our own.

References

Goode, Erich. 2002. Education, scientific knowledge, and belief in the paranormal. SKEPTICAL INQUIRER 26(1): 24-27.

Martin, Michael. 1994. Pseudoscience, the paranormal, and science education. Science and Education 3: 357-371.

Miller, Jon D. 2000. The development of civic scientific literacy in the United States. In Kumar, D.D. and D.E. Chubin (eds.) Science, Technology, and Society: A Sourcebook on Research and Practice. New York: Kluwer Academic/Plenum Publishers.

Miller, Jon D., and Linda Kimmel. 1999. The United States Science & Engineering Indicators Studies CD-ROM: User’s Manual. Chicago: Chicago Academy of Sciences.

Shamos, Morris H. 1995. The Myth of Scientific Literacy. New Brunswick, New Jersey: Rutgers University Press.

Taylor, John, Raymond A. Eve, and Francis B. Harrold. 1995. Why creationists don’t go to psychic fairs. SKEPTICAL INQUIRER 19(6): 23-28.

Susan Carol Losh is associate professor of Educational Psychology and Learning Systems at Florida State University and an American Statistical Association National Science Foundation Research Fellow for 2003-2004. Christopher M. Tavani, Rose Njoroge, and Michael McAuley are doctoral candidates and Ryan Wilke is a master’s candidate in the Department of Educational Psychology and Learning Systems. For further information, please contact Susan Carol Losh: [email protected], 850-644-8778; Fax 850-644-8776.

Copyright The Committee for the Scientific Investigation of Claims of the Paranormal (CSICOP) Sep/Oct 2003

Pioneer-Standard Electronics, Inc. Is Now Agilysys, Inc.

CLEVELAND, Sept. 15 /PRNewswire-FirstCall/ — Pioneer-Standard Electronics, Inc., a leading provider of enterprise computer solutions, today announced that shareholders approved an amendment to the Company’s Amended Articles of Incorporation, thus changing the Company’s name to Agilysys, Inc. This change is effective immediately. The Company also unveiled its new logo.

“Adopting this new name, along with a new visual signature, differentiates our Company and our core capabilities, within the industry,” said Arthur Rhein, chairman, president and chief executive officer. “We have redefined our identity. The name Agilysys clearly represents us as the technology solutions company that delivers the tools, knowledge and value that enable our partners and customers to perform at their best.”

Agilysys is a combination of the words “agile,” as in a company that can move and adapt quickly in a fast-paced, dynamic environment and “systems,” as in enterprise computer systems, the core of our business. The name is designed to quickly bring to mind the core strengths Agilysys deploys. Adopting the name Agilysys further illustrates the Company’s intention to be the most innovative provider of enterprise computer solutions in the eyes of its partners, customers, suppliers and employees.

The new name was approved by shareholders in a special meeting held Friday, September 12, 2003. In conjunction with this name change, the Company has reserved and will use the ticker symbol “AGYS,” which will be more readily identified with the new name. This new ticker symbol is expected to be effective as of Tuesday, September 16, 2003.

About Agilysys, Inc.

Agilysys, Inc. is one of the foremost distributors and premier resellers of enterprise computer technology solutions from HP, IBM and Oracle, as well as other leading manufacturers. The Company has a proven track record of delivering complex servers, software, storage and services to resellers and corporate customers across a diverse set of industries. Headquartered in Cleveland, Ohio, Agilysys has sales offices throughout the U.S. and Canada. For more information, visit the Company’s website at http://www.agilysys.com/ .

Pioneer-Standard Electronics, Inc.

CONTACT: Jerri Hegwood, Vice President, Marketing Communications of
Agilysys, Inc., +1-770-625-7558

Web site: http://www.agilysys.com/


WORLD SET TO END AT 10pm ON MAY 19, 2031; New asteroid will wipe us out

AN asteroid hurtling towards the Earth could wipe out all human life within 30 years, The People can reveal.

The 1.5 mile-wide lump of space rock is travelling at 21,000 mph – six miles a second.

And it is set to cross Earth’s orbit at 10pm on May 19, 2031. Science minister David Sainsbury stunned the House of Lords with news of the devastating discovery by astronomers.

Asteroid expert Kevin Yates warned: “If it hit us it would vaporise at least a continent. The climate change would cause a nuclear winter which would potentially mean the extinction of the human race.”

Lord Sainsbury set out to allay fears, saying there is no danger of another recently-discovered asteroid hitting Earth in 2014. But he confirmed that a SECOND and more threatening rock is on the way. Scientists calculated its 2031 arrival date in the past 48 hours.

Lib Dem space spokesman Lembit Opik said: “If it landed on Moscow it would incinerate everything from Bognor to the Bosphorus.

“And if it came down in the sea it would set up a tidal wave 17 miles high.”

Mr Yates, of the asteroid monitoring system at Britain’s National Space Centre, believes the risk of impact is slight. But even if it misses us it will only do so by ten hours, a blink in cosmic terms.

The new asteroid, dubbed 2003/Q0104, was first spotted on August 31 by the Minor Planets Centre in Cambridge, Massachusetts. It is being tracked by the Jet Propulsion Laboratory in Pasadena, California.

Lord Tanlaw, chairman of the Parliamentary Astronomy and Space Environment Group, said: “It is one of nature’s missiles of mass destruction.”

Lord Sainsbury promised a scientific probe and admitted: “There is clearly a risk.”

[email protected]

Ancient Black Hole Speeds Through Milky Way

Ancient black hole speeds through Sun’s galactic neighborhood, devouring a nearby companion star

Hubble Space Telescope — Data from the Space Telescope Science Institute’s Digitized Sky Survey has played an important supporting role in helping radio and X-ray astronomers discover an ancient black hole speeding through the Sun’s galactic neighborhood.

The rogue black hole is devouring a small companion star as the pair travels in an eccentric orbit looping to the outer reaches of our Milky Way galaxy.

The scientists believe the black hole is the remnant of a massive star that lived out its brief life billions of years ago and later was gravitationally kicked from its home star cluster to wander the Galaxy with its companion.

The discovery was made with observations from the National Science Foundation’s Very Long Baseline Array (VLBA) radio telescope and the Rossi X-ray satellite.

Important supporting evidence came from studying optical images made for the Palomar Observatory Sky Survey (POSS) taken 43 years apart. The POSS images were digitized by the Space Telescope Science Institute to support the Hubble Space Telescope observing programs and also as a service to the astronomical community.

This huge database, called the Digitized Sky Survey, allows astronomers to quickly and easily measure stellar motion across the sky. The DSS scans confirmed the motion of the black hole and companion star. The DSS scans, combined with data from both the radio and optical images, allowed the astronomers to calculate the object’s orbital path around the galactic center.

“This discovery is the first step toward filling in a missing chapter in the history of our Galaxy,” said Felix Mirabel, an astrophysicist at the Institute for Astronomy and Space Physics of Argentina and the French Atomic Energy Commission. “We believe that hundreds of thousands of very massive stars formed early in the history of our Galaxy, but this is the first black hole remnant of one of those huge primeval stars that we’ve found.”

“This also is the first time that a black hole’s motion through space has been measured,” Mirabel added. A black hole is a dense concentration of mass with a gravitational pull so strong that not even light can escape it. The research is reported in the September 13, 2001 issue of the scientific journal Nature.

The object is called XTE J1118+480 and was discovered by the Rossi X-ray satellite on March 29, 2000. Later observations with optical and radio telescopes showed that it is about 6,000 light-years from Earth, and it is a “microquasar” in which material sucked by the black hole from its companion star forms a hot, spinning disk that spits out “jets” of subatomic particles that emit radio waves.

Most of the stars in our Milky Way galaxy are within a thin disk, called the plane of the Galaxy. However, there also are globular clusters, each containing hundreds of thousands of the oldest stars in the Galaxy, which orbit the Galaxy’s center in paths that take them far from the Galaxy’s plane. XTE J1118+480 orbits the Galaxy’s center in a path similar to those of the globular clusters, moving at 300,000 miles per hour (145 kilometers per second) relative to the Earth.
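As a rough check (our own back-of-the-envelope calculation, not part of the press release), the quoted distance and speed imply an angular drift across the sky of well under an arcsecond over the 43-year plate baseline, assuming the quoted speed is entirely transverse to the line of sight:

```python
# Back-of-the-envelope check: how much sky motion would the quoted speed and
# distance imply over the 43-year POSS baseline? Assumes the 145 km/s is all
# across the sky, which overstates the proper motion if part of it is radial.
v_kms = 145.0                     # quoted space velocity, km/s
d_ly = 6000.0                     # quoted distance, light-years
d_pc = d_ly / 3.2616              # light-years -> parsecs

# Standard relation: v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
mu_arcsec_per_yr = v_kms / (4.74 * d_pc)
shift_over_43_yr = mu_arcsec_per_yr * 43

print(f"proper motion ~ {mu_arcsec_per_yr*1000:.0f} milliarcsec/yr")
print(f"shift over 43 years ~ {shift_over_43_yr:.2f} arcsec")
# Roughly 17 mas/yr, i.e. under an arcsecond over 43 years: measurable on
# digitized sky-survey plates but invisible to the eye.
```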

How did it get into such an orbit? “There are two possibilities: either it formed in the Galaxy’s plane and was somehow kicked out of the plane, or it formed in a globular cluster and was kicked out of the cluster,” said Vivek Dhawan, an astronomer at the National Radio Astronomy Observatory (NRAO) in Socorro, New Mexico.

A massive star ends its life by exploding as a supernova, leaving either a neutron star or a black hole as a remnant. Some neutron stars show rapid motion, thought to result from a sideways “kick” during the supernova explosion. “This black hole has much more mass – about seven times the mass of our Sun – than any neutron star,” said Dhawan. “To accelerate it to its present speed would require a kick from the supernova that we consider improbable,” Dhawan added.

“We think it’s more likely that it was gravitationally ejected from the globular cluster,” Dhawan said. Simulations of the gravitational interactions in globular clusters have shown that the black holes resulting from the collapse of the most massive stars should eventually be ejected from the cluster.

“The star that preceded this black hole probably formed in a globular cluster even before our Galaxy’s disk was formed,” Mirabel said. “What we’re doing here is the astronomical equivalent of archaeology, seeing traces of the intense burst of star formation that took place during an early stage of our Galaxy’s development.”

The black hole has consumed so much of its companion star that the inner layers of the smaller star – only about one-third the mass of the Sun – now are exposed. The scientists believe the black hole captured the companion before being ejected from the globular cluster, as if it were grabbing a snack for the road.

“Because this microquasar happened to be relatively close to the Earth, we were able to track its motion with the VLBA even though it’s normally faint,” said Mirabel. “Now, we want to find more of these ancient black holes. There must be hundreds of thousands swirling around in our Galaxy.”

The astronomers used the VLBA to observe XTE J1118+480 in May and July of 2000, using the VLBA’s great resolving power, or ability to see fine detail, to precisely measure the object’s movement against the backdrop of more-distant celestial bodies.

“With the VLBA, we could start observing soon after this object was discovered and get extremely precise information on its position. Then, we were able to use the digitized data from the Palomar surveys to extend backward the time span of our information. This is a great example of applying multiple tools of modern astronomy – telescopes covering different wavelengths and digital databases – to a single problem,” said Dhawan.

In addition to Mirabel and Dhawan, the research was performed by Roberto Mignani of the European Southern Observatory; Irapuan Rodrigues, who is a fellow of the Brazilian National Research Council at the French Atomic Energy Commission; and Fabrizia Guglielmetti of the Space Telescope Science Institute in Baltimore, MD.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

Release Date: 2:00PM (EDT) September 12, 2001

Release Number: STScI-2001-29

On the Net:

Hubble Space Telescope

NASA


Audacious & Outrageous: Space Elevators

Inspired partly by science fiction, NASA scientists are seriously considering space elevators as a mass-transit system for the next century.

Science@NASA — “Yes, ladies and gentlemen, welcome aboard NASA’s Millennium-Two Space Elevator. Your first stop will be the Lunar-level platform before we continue on to the New Frontier Space Colony development.

The entire ride will take about 5 hours, so sit back and enjoy the trip. As we rise, be sure to watch outside the window as the curvature of the Earth becomes visible and the sky changes from deep blue to black, truly one of the most breathtaking views you will ever see!”

Does this sound like the Sci-Fi Channel or a chapter out of Arthur C. Clarke’s Fountains of Paradise? Well, it’s not. It is a real possibility — a “space elevator” — that researchers are considering today as a far-out space transportation system for the next century.

Above: Artist Pat Rawlings’ concept of a space elevator viewed from the geostationary transfer station looking down along the length of the elevator toward Earth.

David Smitherman of NASA/Marshall’s Advanced Projects Office has compiled plans for such an elevator that could turn science fiction into reality. His publication, Space Elevators: An Advanced Earth-Space Infrastructure for the New Millennium, is based on findings from a space infrastructure conference held at the Marshall Space Flight Center last year. The workshop included scientists and engineers from government and industry representing various fields such as structures, space tethers, materials, and Earth/space environments.

“This is no longer science fiction,” said Smitherman. “We came out of the workshop saying, ‘We may very well be able to do this.'”

A space elevator is essentially a long cable extending from our planet’s surface into space with its center of mass at geostationary Earth orbit (GEO), 35,786 km in altitude. Electromagnetic vehicles traveling along the cable could serve as a mass transportation system for moving people, payloads, and power between Earth and space.
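That altitude is not arbitrary: it is where a circular orbit takes exactly one sidereal day, so the cable’s center of mass stays fixed over one spot on the equator. A quick check (our own illustrative calculation, not from Smitherman’s report):

```python
# Verify the quoted geostationary altitude from Kepler's third law by requiring
# an orbital period of one sidereal day (about 23 h 56 m).
import math

GM_EARTH = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378137e6        # equatorial radius, m
T_SIDEREAL = 86164.1        # sidereal day, s

# Circular orbit: T = 2*pi*sqrt(r^3 / GM)  =>  r = (GM * T^2 / (4*pi^2))**(1/3)
r = (GM_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - R_EARTH) / 1000.0
print(f"geostationary altitude ~ {altitude_km:,.0f} km")   # ~35,786 km
```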

Current plans call for a base tower approximately 50 km tall — the cable would be tethered to the top. To keep the cable structure from tumbling to Earth, it would be attached to a large counterbalance mass beyond geostationary orbit, perhaps an asteroid moved into place for that purpose.

“The system requires the center of mass be in geostationary orbit,” said Smitherman. “The cable is basically in orbit around the Earth.”

Four to six “elevator tracks” would extend up the sides of the tower and cable structure going to platforms at different levels. These tracks would allow electromagnetic vehicles to travel at speeds reaching thousands of kilometers-per-hour.

Conceptual designs place the tower construction at an equatorial site. The extreme height of the lower tower section makes it vulnerable to high winds. An equatorial location is ideal for a tower of such enormous height because the area is practically devoid of hurricanes and tornadoes and it aligns properly with geostationary orbits (which are directly overhead).

According to Smitherman, construction is not feasible today but it could be toward the end of the 21st century. “First we’ll develop the technology,” said Smitherman. “In 50 years or so, we’ll be there. Then, if the need is there, we’ll be able to do this. That’s the gist of the report.”

Smitherman’s paper credits Arthur C. Clarke with introducing the concept to a broader audience. In his 1978 novel, Fountains of Paradise, engineers construct a space elevator on top of a mountain peak in the mythical island of Taprobane (closely based on Sri Lanka, the country where Clarke now resides). The builders use advanced materials such as the carbon nanofibers now in laboratory study.

“His book brought the idea to the general public through the science fiction community,” said Smitherman. But Clarke wasn’t the first.

As early as 1895, a Russian scientist named Konstantin Tsiolkovsky suggested a fanciful “Celestial Castle” in geosynchronous Earth orbit attached to a tower on the ground, not unlike Paris’s Eiffel tower. Another Russian, a Leningrad engineer by the name of Yuri Artsutanov, wrote some of the first modern ideas about space elevators in 1960.

Published as a non-technical story in Pravda, his idea never caught the attention of the West. Science magazine ran a short article in 1966 by John Isaacs, an American oceanographer, about a pair of whisker-thin wires extending to a geostationary satellite.

That article, too, went basically unnoticed. The concept finally came to the attention of the space flight engineering community through a technical paper written in 1975 by Jerome Pearson of the Air Force Research Laboratory. This paper was the inspiration for Clarke’s novel.

Pearson, who participated in the 1999 workshop, envisions the space elevator as a cost-cutting device for NASA. “One of the fundamental problems we face right now is that it’s so unbelievably expensive to get things into orbit,” said Pearson. “The space elevator may be the answer.”

The workshop’s findings determined the energy required to move a payload by space elevator from the ground to geostationary orbit could remain relatively low. Using today’s energy costs, researchers figured a 12,000-kg Space Shuttle payload would cost no more than $17,700 for an elevator trip to GEO. A passenger with baggage at 150 kg might cost only $222! “Compare that to today’s cost of around $10,000 per pound ($22,000 per kg),” said Smitherman. “Potentially, we’re talking about just a few dollars per kg with the elevator.”
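The arithmetic behind those quoted figures works out to a little under a dollar and a half per kilogram (our own simple division of the numbers above, not an additional workshop estimate):

```python
# The implied price per kilogram behind the quoted figures.
payload_cost, payload_kg = 17_700, 12_000       # quoted Shuttle-class payload trip
passenger_kg = 150                              # passenger plus baggage

per_kg = payload_cost / payload_kg
print(f"~${per_kg:.2f} per kg; a {passenger_kg} kg passenger ~ ${per_kg * passenger_kg:.0f}")
# ~$1.48/kg and ~$221, in line with the $222 figure quoted above, versus the
# roughly $22,000/kg quoted for launches today.
```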

During the workshop, issues pertinent to transforming the concept from science fiction to reality were discussed in detail. “What the workshop found was there are real materials in laboratories today that may be strong enough to construct this type of system,” said Smitherman.

Smitherman listed five primary technology thrusts as critical to the development of the elevator.

First was the development of high-strength materials for both the cables (tethers) and the tower.

In a 1998 report, NASA applications of molecular nanotechnology, researchers noted that “maximum stress [on a space elevator cable] is at geosynchronous altitude so the cable must be thickest there and taper exponentially as it approaches Earth. Any potential material may be characterized by the taper factor — the ratio between the cable’s radius at geosynchronous altitude and at the Earth’s surface. For steel the taper factor is tens of thousands — clearly impossible. For diamond, the taper factor is 21.9 including a safety factor. Diamond is, however, brittle. Carbon nanotubes have a strength in tension similar to diamond, but bundles of these nanometer-scale radius tubes shouldn’t propagate cracks nearly as well as the diamond tetrahedral lattice.”

Fiber materials such as graphite, alumina, and quartz have exhibited tensile strengths greater than 20 GPa (gigapascals) during laboratory testing for cable tethers. The desired strength for the space elevator is about 62 GPa. Carbon nanotubes have exceeded all other materials and appear to have a theoretical strength far above the desired range for space elevator structures. “The development of carbon nanotubes shows real promise,” said Smitherman. “They’re lightweight materials that are 100 times stronger than steel.”
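To make the taper-factor idea concrete, here is a simplified constant-stress calculation of our own; it is a sketch under assumed densities, with no safety factor, base tower, or counterweight, and is not the 1998 report’s model, so its numbers are only ballpark.

```python
# Illustrative constant-stress taper calculation: size the cable so the stress is
# the same at every height, integrating gravity minus centrifugal acceleration
# from the surface up to GEO. Densities are assumed round numbers.
import math

G_M = 3.986004e14          # Earth's gravitational parameter, m^3/s^2
R_E = 6.378137e6           # Earth's equatorial radius, m
OMEGA = 7.2921159e-5       # Earth's rotation rate, rad/s
R_GEO = (G_M / OMEGA**2) ** (1.0 / 3.0)   # geostationary orbital radius, m

def taper(strength_pa, density_kg_m3):
    """Return (area ratio, radius ratio) of the cable between GEO and the surface."""
    # d(ln A) = (rho/sigma) * (GM/r^2 - omega^2 * r) dr, integrated R_E -> R_GEO
    integral = (G_M * (1.0 / R_E - 1.0 / R_GEO)
                - 0.5 * OMEGA**2 * (R_GEO**2 - R_E**2))
    area_ratio = math.exp(density_kg_m3 / strength_pa * integral)
    return area_ratio, math.sqrt(area_ratio)

for label, sigma, rho in [("20 GPa fiber, ~2500 kg/m^3 (assumed)", 20e9, 2500.0),
                          ("62 GPa target, ~1300 kg/m^3 (assumed)", 62e9, 1300.0)]:
    area, radius = taper(sigma, rho)
    print(f"{label}: area ratio {area:.1f}, radius ratio {radius:.1f}")
# The required taper falls steeply as strength-to-weight rises, which is why the
# workshop pinned its hopes on nanotube-class materials.
```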

The second technology thrust was the continuation of tether technology development to gain experience in the deployment and control of such long structures in space.

Third was the introduction of lightweight, composite structural materials to the general construction industry for the development of taller towers and buildings. “Buildings and towers can be constructed many kilometers high today using conventional construction materials and methods,” said Smitherman. “There simply has not been a demonstrated need to do this that justifies the expense.” Better materials may reduce the costs and make larger structures economical.

Fourth was the development of high-speed, electromagnetic propulsion for mass-transportation systems, launch systems, launch assist systems and high-velocity launch rails. These are, basically, higher speed versions of the trams now used at airports to carry passengers between terminals. They would float above the track, propelled by magnets, using no moving parts. This feature would allow the space elevator to attain high vehicle speeds without the wear and tear that wheeled vehicles would put on the structure.

Fifth was the development of transportation, utility and facility infrastructures to support space construction and industrial development from Earth out to GEO. The high cost of constructing a space elevator can only be justified by high usage, by both passengers and payload, tourists and space dwellers.

During a speech he once gave, someone in the audience asked Arthur C. Clarke when the space elevator would become a reality.

“Clarke answered, ‘Probably about 50 years after everybody quits laughing,'” related Pearson. “He’s got a point. Once you stop dismissing something as unattainable, then you start working on its development. This is exciting!”

—–

Author: Steve Price

—–

On the Net:

Space Towers information

Space Elevator Concept

NASA


Star-gazing tech titans put money where dreams are

Adeo Ressi and Elon Musk drove the Long Island Expressway in late 2000, trying to figure out what to do next in life.

The tech bubble had burst. Ressi was stepping down as CEO of a struggling Internet firm.

Musk, co-founder of online payment firm PayPal, planned to hand the company to someone more experienced.

The friends — former college roommates — looked into the darkness of the night. “There was a moment of silence,” Ressi says. “I don’t remember who said it, but someone said, ‘Space.’” They both laughed, then discounted the idea.

Three years later, Musk is building a rocket to carry cargo. Ressi is helping create what he hopes will be the first spaceship for tourists. If all goes well, both could blast off next year.

They might be joined by other tech-savvy entrepreneurs who — at an unusual rate — are hurling themselves into space ventures.

Amazon.com CEO Jeff Bezos is backing a space venture, as is John Carmack, the man behind the best-selling video games Doom and Quake. Billionaire Microsoft co-founder Paul Allen is believed to be funding a rocket company, industry insiders say. Eric Klien, CEO of Web-hosting firm Colossus, is raising funds to build a “space ark” to protect the human race should Earth become uninhabitable.

Many of these spaced-out entrepreneurs say they’re living a childhood dream. And why not? They’ve got the money, thanks to huge wins during the tech boom. They hope to go down in history as extraterrestrial pioneers — much more exciting than, say, e-commerce pioneers. And they could even make money. Getting people and cargo into space cheaply is an untapped market.

“The first trillionaires will be made there,” says Peter Diamandis, creator of the X Prize, a $10 million purse extended in 1995 for the first private-sector team to build a rocket for tourists.

Skeptics abound. Henry Hertzfeld, senior scientist with the Space Policy Institute, calls private space flights a “rich man’s hobby.” Neta Bahcall, astrophysics professor at Princeton University, calls launching “a very difficult and expensive endeavor. ... To get out of the strong gravity of Earth, you need a very powerful rocket — essentially, a bomb.”

Even simple rockets that launch satellites today cost about $30 million. Danger also abounds. About 400 people have made it to space, but about 20 have died in the process. The explosion of space shuttle Columbia this year shows how something can go wrong, even with NASA’s resources.

Regulatory hurdles, too, could tie up flights for years, says Hertzfeld. The Federal Aviation Administration is in charge of licensing U.S.-based space flights. Some X Prize competitors have started seeking licenses, but getting one could take years, Hertzfeld says. Rocket entrepreneurs testified before Congress this summer in hopes of clarifying and loosening restrictions against rocket launches.

Finally, money is short. Most of the space buffs don’t even seek venture capital, which is typically tapped for new tech ventures. Space “is too risky,” says Brian Chase, executive director of the National Space Society.

Even so, the entrepreneurs press ahead, as have other CEO-types.

Howard Hughes, founder of Hughes Aircraft, in 1938 flew one of the first round-the-world trips, a journey that took three days, 19 hours and 17 minutes. Today, Oracle CEO Larry Ellison pilots yachts in the America’s Cup and other races. He was almost killed, at least once. In 1997, Virgin Group CEO Richard Branson, in an attempt to fly the first balloon non-stop around the world, thudded to the ground at 25 miles an hour in an Algerian desert, short of his goal. He also failed in an attempt to cross the Atlantic in a speedboat.

But tech types and space have a special connection, observers say. Tech is about the future, discovery, breaking barriers. So is space. Technologists push the edge. Space has no edge. Even oceans have floors. The entrepreneurs matured post-Star Trek amid the rapidly enveloping electronic age. They got rich on a new idea — the Internet — and grew up “devouring science-fiction novels under their bedsheets,” says Tony Perkins, editor of tech Web site AlwaysOn.

Space ”is the next big leap,” the final frontier, says longtime Silicon Valley marketer Fred Hoar, now professor at Santa Clara University. The drive to conquer space stems from the Silicon Valley . . . belief that progress continues unabated,” he says.

Struck by stars, fame, riches

Besides, some of the entrepreneurs say, they’re obligated. Leaders in business should be leaders in exploration. “Things may not happen if people like me, or us, don’t go out and do it,” says Carmack, who’s invested $600,000 so far in Armadillo Aerospace, a leading X Prize contender.

Why they’re pressing:

* Business opportunities. “It’s almost impossible for someone who has worked in other areas of technology to comprehend how fallow this field is,” says Jeff Greason, CEO of XCOR Aerospace, an X Prize competitor.

NASA dominates the space industry. Its launches take years to plan and cost about $75 million each. Its big, sophisticated equipment, such as the space shuttle and Delta rockets, is designed for complicated missions.

But not every flight needs to be that complex, the tech entrepreneurs say. A rocket that reaches a “suborbital” state, where weightlessness occurs, but quickly returns to Earth is much easier to build and launch than one that can orbit the Earth for days. Some of the X Prize competitors hope to land on runways, as the space shuttle does. Others plan to fall to Earth while a parachute slows their descent.

Already, travel companies are taking reservations. The most prominent is Space Adventures, which is advertising a flight for $98,000 on a yet-to-be-finished short-flight rocket. More than 100 people have signed up, says the Arlington, Va., firm. It helped Wilshire Associates CEO Dennis Tito in 2001 become the first paying space tourist. He paid $20 million to join a Russian space crew for a flight to the International Space Station.

For now, billionaires such as Tito have the best chance of getting to space. But that might change. In 1936, nine passengers paid $1,438 — worth about $18,700 in today’s dollars — to take one of the first long-distance flights, from San Francisco to the Philippines. That was 33 years after the Wright brothers’ first flight and more than three decades before air travel became mainstream. Today, that flight costs around $630. “It’s taken a while for this to be viewed as a serious business,” says XCOR’s Greason. “Five years ago, I didn’t dare breathe the word ‘tourism.’”

* Desire to leave a legacy. After hitting it big with PayPal in 2000, Musk, worth about $200 million, says he asked a friend, “What is the most important thing that we can and should be doing?” He now has two rockets in the works to deliver cargo to space. “Of all the great things humanity can do, experiencing the stars is one of the greatest, if not the greatest.”

Colossus CEO Klien says the human race runs the risk of someday being wiped out by a catastrophe. As such, he’s invested $100,000 to start the Lifeboat Foundation, which aims to build a self-sustaining space station to float around the Earth.

* It’s a thrill. Carmack, of Doom and Quake fame, used to buy a turbocharged Ferrari each year. In 2000, he started Armadillo Aerospace. He hasn’t bought a Ferrari since. “When you’re at the top of the field, it’s a gem when you learn something once a month,” Carmack says. “Jumping to something completely different is really fabulous.”

‘First heroes’

Most of all, engaging in space allows entrepreneurs to fulfill childhood dreams.

Venture capitalist Anousheh Ansari, 36, grew up in Iran. Her family slept outside when it was hot. “For hours and hours and hours, I would watch the stars,” she says.

Being an astronaut wasn’t an option, so Ansari directed her passion at her start-up, Telecom Technologies. She made her fortune when she sold the company to Sonus Networks in 2000 for about $735 million in stock.

Now, Ansari hopes to invest in a space-related company via her venture capital firm, Prodea. “I feel like we’re one piece of a much bigger picture, and I’m trying to get a sense of what that big picture is,” she says. “Space travel will take me one step closer.”

Ken Winans, president of investment management firm Winans International, used his personal wealth — much of it earned during the stock market tech boom — to amass a $100,000 collection of space memorabilia. He turned it into a traveling exhibit for schools and museums.

As a child, he remembers watching astronauts walk on the moon. “These are my first heroes. The first thing I wanted to do was be an astronaut,” he says.

Space tourist Tito always wanted to go to space. He got an aerospace degree and worked for NASA’s Jet Propulsion Laboratory. Frustrated by low pay, he quit after five years. Eventually, he created a technical way of analyzing financial markets. It made him a millionaire.

“But that dream of going into space never left me,” he says. He often watches the videotapes he took from the space station.

The flight, he says, “was the most enjoyable and euphoric experience of my entire life.”

Herto fossils clarify modern human origins

Colin Groves explains the implications of the Herto fossils for human evolution and our concept of race.

The recent discovery in Ethiopia of the fossilised remains of two adults and a child by Tim White and his team at the University of California, Berkeley, has pushed back modern human origins to 160,000 years and put another nail in the coffin of the multiregional hypothesis of modern human evolution. But what does it really mean, and what is the context?

For about 30 years the origin of the modern human species has been the subject of much debate. We are divided into well-marked, if overlapping, geographic races:

* Caucasoid people in Europe, North Africa, the Middle East and the Indian subcontinent;

* Mongoloid people in eastern and South-East Asia, the Pacific and the Americas;

* Negroid people in Africa, south of the Sahara; and

* Australoid people in Australia and Melanesia.

Two competing models have attempted to explain the origin of these “major races”: Multiregionalism (or Regional Continuity) and Out-of-Africa (or Replacement).

Until somewhat after two million years before present (BP), proto-humans lived only in Africa, but thereafter they began to spread to other areas. By 1.7 million BP they were at “the gates of Europe” (Dmanisi, in Georgia), by 1.5 million BP they were in Java, and by 500,000 BP (or somewhat more) they were in China and in Europe.

But these were not anatomically modern humans, and their differences are dignified by a plethora of names. Homo ergaster is the African ancestral species, and the enigmatic Dmanisi fossils may also belong to that species. Later African and European fossils, between 600,000 and 300,000 BP, are referred to the species Homo heidelbergensis, and the Europeans who lived from then until some 30,000 BP are the famous Neandertal people (H. neanderthalensis). The Javan species (“Java Man”) is H. erectus, while the Chinese species (“Peking Man”) is either also included in H. erectus or else assigned a separate species, H. pekinensis.

For multiregionalists, each of the modern races is descended in part from one of these archaic regional forms – the one living in, or close to, the region to which the modern race is indigenous. For example, they consider that Mongoloids differ from other modern people in some of the same respects that H. pekinensis differed from its contemporary archaics, while Australoids bear a special likeness to Javan H. erectus. At the same time, the model contends that there was gene flow between different regional populations, so that humanity evolved as a whole while each geographic segment retained its own racial features.

If this is how it happened it would be nonsense to divide the fossils into all those different species. All of them – even the Neandertals – would rate as archaic H. sapiens.

But all those species are real for the Out-of-Africa school, and most of them are dead-ends. Java Man, Peking Man and the Neandertals arose, flourished and died out without leaving any descendants. The Out-of-Africa school contends that modern humans (H. sapiens) are descended uniquely from African H. heidelbergensis and spread around the globe, replacing the various archaic species that preceded them in different places.

The skull of Homo sapiens idaltu has placed modern human origins in Africa 160,000 years BP.

Adherents of this school deny the existence of any features linking Peking Man and Mongoloids. For instance, multiregionalists say that shovel-shaped incisors are shared between them, even though these are actually found in all the archaic species, not merely in H. pekinensis. They likewise deny the multiregionalist claim that large brow-ridges and a flat, receding forehead are peculiar links between Java Man and Australoids; actually, these features define almost all of the archaic species, including H. heidelbergensis.

When molecular genetics began to impinge on public consciousness in the mid-1980s, it was seen to be supporting the Out-of-Africa model to such an extent that, in the public mind, the debate was between “the fossils” and “the molecules”. This was too stark: it is true that mitochondrial DNA does point to a common human ancestor living in Africa 150-250,000 BP, but there was already a great deal of fossil evidence pointing to human origins in Africa.

Until this year, the earliest H. sapiens appeared to be a fragmentary skull from the Omo River in southern Ethiopia, which was dated (but very provisionally) at 130,000 BP. The next was the skull from Ngaloba in Tanzania, 120,000 BP. Then comes a set of bones, mainly mandibles, from Klasies River in South Africa, dated between 120,000 and 80,000 BP.

Overlapping these at about 115,000 BP are some excellently preserved remains from Qafzeh and Skhul in Israel. While these were not in Africa, they were associated with typical African faunal remains. (It appears that from time to time in the past, as to some extent is the case today, the Levant was part of the African theatre.)

There are also several sets of remains that are anatomically intermediate between H. heidelbergensis and H. sapiens. All of them are from Africa: Guomde (272,000 or 279,000 BP), Florisbad (259,000 ± 35,000 BP), and two skulls from Jebel Irhoud in Morocco. These are of uncertain date but are thought to be 150-180,000 BP. The more complete of the Irhoud skulls is “almost” H. sapiens, but just outside the modern range, which makes it especially unfortunate that the date is uncertain.

Now we have the new skulls from Herto, in central Ethiopia. There are three: a nearly complete adult, a less complete adult, and a child of 6-7 years of age. They date to 154-160,000 BP and, interestingly for human remains of this early date, they show evidence of sophisticated mortuary practices, including defleshing, while the heads had probably been disarticulated from the bodies, which were not present at the site.

The more complete adult skull (BOU-VP-16/1) is larger than most (and perhaps all) modern human skulls. It has a deep face, very large brow ridges, and a rugged line (the superior nuchal line) demarcating the limit of attachment of the postural muscles. In these features it is said to be outside the range of modern humans as well as of the Skhul and Qafzeh specimens. White and his colleagues describe the Herto type as a separate subspecies, H. sapiens idaltu (“idaltu” means “elder” in the Afar language). The describers did not make any special reference to the Jebel Irhoud skulls but, from the measurements and photos, BOU-VP-16/1 seems remarkably similar to Irhoud 1.

The implications of the Herto find for modern human origins are clear. Here were H. sapiens, more primitive than anyone now living but recognisably members of our own species, living in north-eastern Africa at a time when the Neandertal people were in sole occupation of Europe. Even later than Herto, the only people for whom we have evidence were still non-modern – an enigmatic Neandertal-like skull from Maba in China, and late H. erectus in Java. Just as predicted by the Out-of-Africa model, modern humans appear in Africa long before they are known from anywhere else.

There are implications for the origins of modern races, too. Herto (and Jebel Irhoud) are H. sapiens, but with primitive features. They are not, racially speaking, Africans. The later Omo and Klasies remains are more modern, but they too are archaic, and certainly show no traces of the features that characterise any modern races. Only Qafzeh and Skhul seem to lack these primitive features, and rate as “generalised modern humans”. Our species seems to have existed as an entity long, long before it began to spread outside Africa or the Middle East, let alone split into geographic races.

When, then, did H. sapiens begin to split into races? The evidence indicates that modern racial features developed only gradually in each geographic area. The earliest H. sapiens specimen outside the Africa/Levant region is from Liujiang in China, whose dating was recently confirmed at 67,000 BP by a group led by Guanjun Shen of Nanjing Normal University. Like Qafzeh and Skhul, Liujiang is a “generalised modern”; it has no Mongoloid features.

The East Asian fossil record is not good enough to show when Mongoloid features began to develop. All we can say is that they must have developed before the end of the Pleistocene (12,000 BP) because this is when people began to cross what is now the Bering Strait (which was then a land-bridge); and Native Americans are Mongoloid.

H. sapiens began to enter Europe about 40,000 BP, but it is only at 28,000 BP that we get a fossil that shows any Caucasoid features – the Old Man from Cro-Magnon, in France.

Florisbad (259,000 ± 35,000 BP) is anatomically intermediate.

Qafzeh (115,000 BP) is a “generalised modern human”.

Within the African homeland, the appearance of Negroid features is debatable. The skull from Border Cave, on the South Africa/Swaziland border, may be 60,000 years old and may show Negroid features, but both claims have been challenged.

And Australia? The earliest widely accepted dates for human occupation are of the order of 60,000 BP, not more, according to Bert Roberts of La Trobe University and the late Rhys Jones of the Australian National University. The claim that the Mungo Man skeleton is 62,000 BP has recently been challenged. According to a recent study led by Jim Bowler of Melbourne University, both Mungo Man and Mungo Woman may be only 40,000 years old (AS, April 2003, pp. 18-21), but they are still the earliest skeletal remains we have from Australia. Are they Australoid?

Of all “major races”, Australoids have evidently changed least from the generalised modern human pattern, but the flat, receding forehead and angular skull vault that characterise many full-blooded Aboriginal people today are somewhat different to the Qafzeh/Skhul pattern. A 1999 study by Susan Anton and Karen Weinstein of the University of Florida, in the process of confirming that some of the Australian fossils (including most of the famous Kow Swamp series) had undergone artificial head deformation in infancy, found unexpectedly that most of the Pleistocene fossil Australian crania are rounder-skulled than modern ones. So racial features developed late in this part of the world, too.

In summary, the new discovery at Herto does not shatter any myths, but it extends the dataset, shifts the weight of evidence yet more decisively in favour of the Out-of-Africa model of modern human evolution, and helps to place modern racial variation very firmly into context.

A MATTER OF RACE

How many human races are there? If we look at the major geographically varying characters – hair form, skin colour, body build, facial features, and some cranial and dental features – there are four wide areas over which at least some of these characters vary more or less concordantly. These are:

* Sub-Saharan Africa, where people have “woolly” hair and tend to have elongated limbs, a wide and flat nose, and subnasal prognathism (the jaws project below the nose);

* Europe, North Africa, western Asia and the Indian subcontinent, whose people have wavy or somewhat curly hair, sharp facial features (especially a narrow, prominent nose), and abundant facial and bodily hair;

* Eastern Asia, the Pacific and the Americas, where indigenous people have straight hair, a yellow tinge to the skin, facial flatness (flat nose and forward-standing cheekbones), so-called shovel-shaped incisors, and short limbs; and

* Australia and Melanesia, whose indigenous people have very elongated limbs and prominent brow ridges.

Some anthropologists consider these to be “the major races”, and the terms Negroid (or Afrotropical), Caucasoid (Caucasian), Mongoloid and Australoid (or Austromelanesian) have been applied to them. To an extent these do represent recognisable geographic clusters whose skulls and dentitions are usually identifiable.

But each race is very heterogeneous. Skin colour plays a very minor role here: noticeably, the Bushmen of Namibia and Botswana are much lighter than most sub-Saharans, and some Indians (especially in the south) are as black as many Africans but are as Caucasian as John Howard.

The term “Caucasian” is widely misunderstood. Most westerners think it is a polite term for “white”. The term “Asian” should not be used in a racial sense – the Indian subcontinent and the Middle East are part of Asia, but Indians, Iranians and Arabs are Caucasians.

Colin Groves is professor of archaeology & anthropology at the Australian National University.

Copyright Control Publications Pty Ltd Aug 2003

Naked ape: Humans lost body hair long before finding clothes, scientists say

By NICHOLAS WADE New York Times

Tuesday, August 19, 2003

One of the most distinctive evolutionary changes as humans parted company from their fellow apes was their loss of body hair. But why and when human body hair disappeared, together with the matter of when people first started to wear clothes, are questions that have long lain beyond the reach of archaeology and paleontology.

Ingenious solutions to both issues have now been proposed, independently, by two research groups analyzing changes in DNA. The result, if the dates are accurate, is something of an embarrassment. It implies that we were naked for more than a million years before we started wearing clothes.

Alan R. Rogers, an evolutionary geneticist at the University of Utah, has figured out when humans lost their hair by an indirect method depending on the gene that determines skin color. Mark Stoneking of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, believes he has established when humans first wore clothes. His method, too, is indirect: It involves dating the evolution of the human body louse, which infests only clothes.

Meanwhile a third group of researchers, resurrecting a suggestion of Charles Darwin’s, has come up with a novel explanation of why humans lost their body hair in the first place.

Mammals need body hair to keep warm, and lose it only for special evolutionary reasons. Whales and walruses shed their hair to improve speed in their new medium, the sea. Elephants and rhinoceroses have specially thick skins and are too bulky to lose much heat on cold nights. But why did humans, the only hairless primates, lose their body hair?

Mark Pagel of the University of Reading in England and Walter Bodmer of the John Radcliffe Hospital in Oxford have proposed a solution to the mystery and their idea, if true, goes far toward explaining contemporary attitudes about hirsuteness.

Humans lost their body hair, they say, to free themselves of external parasites that infest fur — blood-sucking lice, fleas and ticks and the diseases they spread.

Once hairlessness had evolved through natural selection, Pagel and Bodmer suggest, it then became subject to sexual selection, the development of features in one sex that appeal to the other.

Among the newly furless humans, bare skin would have served, like the peacock’s tail, as a signal of fitness. The pains women take to keep their bodies free of hair — joined now by some men — may be no mere fashion statement but the latest echo of an ancient instinct.

Pagel’s and Bodmer’s article appeared in a recent issue of The Proceedings of the Royal Society.

Some experts could take some convincing. “There are all kinds of notions as to the advantage of hair loss, but they are all just-so stories,” said Ian Tattersall, a paleoanthropologist at the American Museum of Natural History in New York.

Answer may lie in pigment

Causes aside, when did humans first lose their body hair?

Rogers saw a way to get a fix on the date after reading an article about a gene that helps determine skin color. The gene, called MC1R, specifies a protein that serves as a switch between the two kinds of pigment made by human cells. Eumelanin, which protects against the ultraviolet rays of the sun, is brown-black; pheomelanin, which is not protective, is a red-yellow color.

As soon as the ancestral human population in Africa started losing its fur, Rogers surmised, people would have needed dark skin as a protection against sunlight. Anyone who had a version of the MC1R gene that produced darker skin would have had a survival advantage, and in a few generations, this version of the gene would have made a clean sweep through the population.

There may have been several clean sweeps, each one producing a more effective version of the MC1R gene.

From the number of silent mutations in African versions of the MC1R gene, Rogers and two colleagues, David Iltis and Stephen Wooding, calculate that the last sweep probably occurred 1.2 million years ago, when the human population consisted of a mere 14,000 breeding individuals.

In other words, humans have been hairless at least since this time, and maybe for much longer. Their article is to appear in a future issue of Current Anthropology.

Lice provide a clue

Remarkable as it may seem that genetic analysis can reach back and date an event deep in human history, there is a second approach to determining when people lost their body hair or at least started to wear clothes. It has to do with lice.

Humans have the distinction of being host to three different kinds: the head louse, the body louse and the pubic louse. The body louse, unlike all other kinds that infest mammals, clings to clothing, not hair. It presumably evolved from the head louse after humans lost their body hair and started wearing clothes.

Stoneking, together with Ralf Kittler and Manfred Kayser, report in today’s issue of Current Biology that they compared the DNA of human head and body lice from around the world, as well as chimpanzee lice as a point of evolutionary comparison.

From study of the DNA differences, they find that the human body louse indeed evolved from the head louse, as expected, but that this event took place surprisingly recently, sometime between 42,000 and 72,000 years ago. Humans must have been wearing clothes at least since this time.

The head louse would probably have colonized clothing quite soon after the niche became available — within thousands to tens of thousands of years, Stoneking said. So body lice were probably not in existence when humans and Neanderthals diverged some 250,000 or more years ago. This implies that the common ancestor of humans and Neanderthals did not wear clothes and, therefore, Neanderthals probably didn’t either.

Polar Bear Turns Purple After Medication

Paint the polar bear purple and the crowds will come.

That seems to be the lesson a zoo in Mendoza has learned, after its 23-year-old bear Pelusa was sprayed with an antiseptic spray that turned her normally white fur a dark shade of violet.

The unusual color – a temporary side effect of the treatment for dermatitis – has turned the aging bear into a minor celebrity in Argentina and prompted thousands of schoolchildren and tourists to make their way to the Jardin Zoologico de Mendoza in the western city beneath the snow-capped Andes.

“We never thought she would get all the attention she’s now receiving,” veterinarian Alberto Duarte told The Associated Press when reached by telephone in Mendoza, 640 miles west of Buenos Aires. “We’ve had calls from Spain and e-mails from around the world asking about the bear.”

The newspaper Los Andes, of Mendoza, reported that Pelusa’s new look has turned her into the zoo’s most popular attraction, surpassing giraffes Tommy and Belen.

The spray applied to Pelusa is similar to one used by pediatricians to treat children’s scraped knees or lab technicians to stain micro-organisms for examination under microscopes.

Pelusa, a 395-pound bear, has been temporarily placed in a cage because of the treatment and is separated from her mate, Arturo. She is also kept at a distance from the public.

The separation, Duarte said, was needed to keep Pelusa from taking her regular plunge into an icy pool of water at the polar bear compound. That would have washed away the medicine prematurely, he said.

The isolation has not seemed to bother Pelusa but it has left Arturo, a 16-year-old male almost double the weight of his mate, a bit grumpy, Duarte said.

After all, the two – who have been together for years – have been kept apart for 20 days.

“The only one a bit anxious is Arturo, but they’ll be back together soon,” Duarte said. “Pelusa’s condition is improving and in one more week we will stop with the antiseptic and she’ll again be able to take her normal baths.”

He added that once Pelusa begins to swim again, “she will lose her violet color” quickly.

Until then, though, she’ll have to endure the crowds peering into her pen and possibly even the occasional paparazzi looking to snap a shot or two.

The effects of physical attractiveness on job-related outcomes

We report the findings of a meta-analytic review of experimental studies concerned with the biasing effect of physical attractiveness on a variety of job-related outcomes. In support of implicit personality theory, attractive individuals were found to fare better than unattractive individuals in terms of a number of such outcomes. The weighted mean effect size, d, was .37 for all studies. In addition, tests for moderating effects showed that (a) the attractiveness bias did not differ between studies that provided low versus high amounts of job-relevant information about the targets, (b) the same bias was greater for within-subjects research designs than for between-subjects designs, (c) professionals were as susceptible to the bias as were college students, (d) attractiveness was as important for men as for women, and (e) the biasing effect of attractiveness has decreased in recent years. Implications of these findings are considered.

Over the past few decades, numerous individual studies and several meta-analytic reviews have shown that physical attractiveness is important in the U.S. More specifically, the same studies have demonstrated that there is a positive correlation between physical attractiveness (referred to hereinafter as attractiveness) and a host of outcomes. For example, attractiveness has been shown to influence, among other variables, initial impressions (Eagly, Ashmore, Makhijani, & Longo, 1991; Feingold, 1992; Jackson, Hunter, & Hodge, 1995), date and mate selection decisions (e.g., Adams, 1977), helping behavior (e.g., Benson, Karabenick, & Lerner, 1976), teacher judgments of student intelligence and future academic potential (e.g., Ritts, Patterson, & Tubbs, 1992), voters’ preferences for political candidates (e.g., Adams, 1977), and jurors’ judgments in simulated trials (Mazzella & Feingold, 1994). Moreover, the results of a recent meta-analysis (Langlois, Kalakanis, Rubenstein, Larson, Hallam, & Smoot, 2000) have shown that the effects of physical attractiveness are robust and pandemic, extending beyond initial impressions of strangers to actual interactions with people. Langlois et al. (2000) further concluded that the benefits of attractiveness are large enough to be “visible to the naked eye,” and that they are of considerable practical significance.

The benefits of attractiveness have also been shown in the occupational domain. Evidence of an attractiveness bias in work settings has been reported in a number of non-meta-analytic, narrative reviews (e.g., Bull & Rumsey, 1988; Jackson, 1992; Morrow, 1990; Stone, Stone, & Dipboye, 1992). Overall, what these reviews suggest is that relative to less attractive individuals, attractive people tend to fare better in terms of such criteria as perceived job qualifications (e.g., Dipboye, Fromkin, & Wiback, 1975; Quereshi & Kay, 1986), hiring recommendations (e.g., Cann, Siegfried, & Pearce, 1981; Gilmore, Beehr, & Love, 1986), predicted job success (Morrow, McElroy, Stamper, & Wilson, 1990), and compensation levels (e.g., Frieze, Olson, & Russell, 1991; Roszell, Kennedy, & Grabb, 1989). In addition, sex differences have been observed in the effects of attractiveness on job-related outcomes. However, the direction of such differences has been equivocal (e.g., Jackson, 1992). Furthermore, a variety of factors (e.g., occupational sex-linkage, job type) have been shown to moderate the relationship between attractiveness and job-related outcomes (e.g., Jackson, 1992).

Given a bias against physically unattractive individuals on a variety of job-related outcomes, Stone et al. (1992) argued that attractiveness is an important factor that deserves more attention than it has received thus far in organizational research. In addition, several researchers (e.g., Morrow et al., 1990, Stone et al., 1992) asserted that although attractiveness may not be the most important determinant of personnel decisions, it may be the deciding factor when decision makers are faced with difficult choices among job applicants or incumbents who possess similar levels of qualifications or performance.

In view of these considerations and Langlois et al.’s (2000) conclusion that attractiveness effects have practical significance, we conducted a meta-analysis concerned with the attractiveness bias in simulated employment contexts. We used implicit personality theory (e.g., Ashmore, 1981) and a lack of fit model (Heilman, 1983) to explain how attractiveness influences job-related outcomes. These theories were used because they offer predictions not only about the relationship between attractiveness and job-related outcome variables, but also about potential moderators of the same relationship. However, we agree with Langlois et al.’s (2000) assertion that no single theory is likely to offer a complete explanation of attractiveness effects; instead, extant theories should be viewed as complementary, rather than competitive in explaining such effects.

The following section provides a summary of social cognition perspectives on the effects of stereotypes on person perception. In addition, it offers brief explanations of implicit personality theory and the lack of fit model as well as predictions stemming from these two theoretical perspectives.

Theoretical Perspectives on the Relationship Between Physical Attractiveness and Job-Related Outcomes

Research on social cognition (e.g., Fiske & Neuberg, 1990; Fiske & Taylor, 1991; Hamilton, Stroessner, & Driscoll, 1994) shows that individuals initially categorize a target person on the basis of available physical cues (e.g., race, sex, attractiveness, age). Once categorized, expectations associated with the category are activated, and the target person is judged on the basis of these category-based expectations. Both implicit personality theory (e.g., Ashmore, 1981) and the lack of fit model (Heilman, 1983) assume that attractiveness evokes stereotype-based expectations and that individuals are evaluated on the basis of such expectations.

Information Used in Impression Formation

When individuals form impressions of others, they typically rely on two types or sources of information: (a) knowledge of a target’s category membership, and (b) details of his or her individuating characteristics (Pendry & Macrae, 1994). Because of this, one focus of impression formation research has been on determining the extent to which these two types of information affect ultimate impressions of targets (Pendry & Macrae, 1994). Two impression formation models, that is, the continuum model (Fiske & Neuberg, 1990) and the dual process model (Brewer, 1988), have guided research on this issue. Both models agree that perceivers initially categorize a target on the basis of readily apparent physical cues (Pendry & Macrae, 1994). However, Brewer’s (1988) dual process model suggests that perceivers choose between two alternative processing modes, that is, category-based and person-based. Perceivers use person-based strategies if they are motivated to attend to the target, but choose category-based strategies if the target is of little interest to them (Pendry & Macrae, 1994).

In contrast, Fiske and Neuberg’s (1990) continuum model suggests that individuals’ impressions of targets fall along a single impression formation continuum. At opposite ends of it are (a) stereotype-based evaluations of individuals and (b) individuated responses to individuals. The model assumes that (a) stereotype-based responses have priority over individuated judgments, and (b) movement along the continuum, from stereotype-based to individuated responses, is a function of interpretational, motivational, and attentional factors (Pendry & Macrae, 1994). Moreover, in general, factors that increase the cost of forming incorrect impressions (e.g., outcome dependency, motivation to be accurate, accountability for judgment, self-presentational concerns, fear of invalidity) motivate perceivers to use individuating response strategies, but factors that increase the cost of being indecisive (e.g., time pressure, need for closure, cognitive load) motivate perceivers to rely on stereotype-based strategies (Fiske & Taylor, 1991; Pendry & Macrae, 1994).

Implicit Personality Theory

According to Ashmore (1981; Ashmore & Del Boca, 1979; Ashmore, Del Boca, & Wohlers, 1986), implicit personality theory is a hypothetical cognitive structure that comprises personal attributes (e.g., personality traits) and the set of expected relations (i.e., inferential relations) between and among them. Stereotypes are implicit personality theories in which group membership is one of the personal attributes that is associated inferentially with other personal attributes (Ashmore, 1981). Eagly et al. (1991) applied implicit personality theory to understand the physical attractiveness stereotype, and argued that the social categories of “attractive” and “unattractive” were linked inferentially to a variety of personality dimensions.

Substantial empirical evidence and three meta-analyses have firmly established the existence and validity of a “what-is-beautiful-is-good” stereotype (e.g., Dion, Berscheid, & Walster, 1972; Eagly et al., 1991; Feingold, 1992; Jackson et al., 1995). For example, meta-analyses by Eagly et al. (1991) and Feingold (1992) showed that attractiveness has (a) a strong effect on perceptions of social competence, social skills, and sexual warmth, (b) a moderate effect on perceptions of intellectual competence, potency, adjustment, dominance, and general mental health, and (c) a weak effect on perceptions of integrity and concern for others. In addition, sex-of-target differences were observed for the perceptions of sexual warmth and intellectual competence. More specifically, the effects of attractiveness on perceptions of sexual warmth were stronger for women than for men (Feingold, 1992). However, the effects of attractiveness on perceptions of intellectual competence were stronger for men than for women (Jackson et al., 1995).

Furthermore, more recent meta-analyses (Langlois et al., 2000) have shown that (a) following actual interaction with others, perceivers judge attractive individuals more positively (e.g., in terms of interpersonal competence, occupational competence, social appeal, adjustment) and treat them more favorably (e.g., visual/social attention, positive interaction, reward, help/cooperation, acceptance) than less attractive individuals, and (b) attractive individuals experience more positive outcomes in life (e.g., occupational success, popularity, dating experience, sexual experience, physical health) than less attractive individuals.

Thus, implicit personality theory predicts that as a result of a generally positive stereotype associated with attractiveness, decision makers (e.g., employment interviewers, managers) will judge attractive individuals more positively than less attractive individuals. Consistent with this, our meta-analysis tested the following hypothesis:

Hypothesis 1: Attractive individuals will be judged and treated more positively with regard to job-related outcomes than unattractive individuals.

Lack of Fit Model

Heilman (1983) originally developed a lack of fit model to explain occupational sex bias and applied this model to explain the attractiveness bias in the workplace. According to the same model, a perceiver makes inferences about attributes and characteristics of an individual based upon stereotypes (e.g., sex, attractiveness), and then evaluates the individual on the degree to which these attributes match the perceived requirements of a job. A bias results when there is a poor fit between the perceived attributes of an individual and the perceived requirements of a job. The larger the incongruity between these two perceptions, the more that failure is anticipated and the greater the resulting bias.

Attractiveness has been shown to exaggerate the perception of sex-typing (e.g., Gillen, 1981; Heilman & Saruwatari, 1979; Heilman & Stopeck, 1985a). That is, attractive men are believed to possess more traditionally masculine qualities than less attractive men, and attractive women are believed to possess more traditionally feminine qualities than less attractive women. According to the lack of fit model, it follows that whether attractiveness is an asset or a liability depends on both the sex of targets and the sex-type of job: Attractiveness will be a hindrance for women in stereotypically masculine jobs because (a) attractive women are seen as possessing more feminine traits than less attractive women, (b) stereotypically masculine traits are assumed to be a requisite for success in masculine jobs, and, thus, (c) highly feminine women are not viewed as being suitable for masculine jobs (i.e., there is a lack of fit between the two). Thus, the lack of fit model predicts that for stereotypically masculine jobs, attractiveness should be an asset for men but not for women. Conversely, for stereotypically feminine jobs, attractiveness should be an asset for women but not for men. In short, the model predicts that attractiveness will interact with sex and the sex-type of job to influence job-related outcomes.

Heilman and her colleagues (Heilman & Saruwatari, 1979; Heilman & Stopeck, 1985a) have shown that although attractiveness is a liability for women who apply for or hold stereotypically masculine jobs, men are not affected by a lack of fit because attractive men are seen as having the potential to be successful in either masculine or feminine jobs. However, it should be noted that Heilman’s studies are the only ones that have reported adverse effects of attractiveness for women (Jackson, 1992).

Despite empirical evidence that attractiveness enhances the perception of sex-typing (e.g., Gillen, 1981; Heilman & Saruwatari, 1979), sex-typed traits (e.g., masculinity, femininity) were not included in meta-analytic reviews by Eagly et al. (1991), Feingold (1992), or Langlois et al. (2000). Thus, Jackson (1992) contended that it may be unjustified and misleading to conclude that men and women of similar attractiveness are similarly perceived and judged, and suggested that attractiveness might have different effects for men and women when sex-typed traits are relevant to judgments. In view of the foregoing, our meta-analysis tested the following hypothesis:

Hypothesis 2: Attractiveness will interact with sex and sex-typing of job in affecting job-related outcomes. In particular, attractiveness will be a liability for women who apply for or hold a stereotypically masculine job.

Additional Predictions

Job-relevant information. Research has shown that stereotypes have their greatest influence on judgments or evaluations when the amount and type of information provided about a target is limited (e.g., Locksley, Borgida, Brekke, & Hepburn, 1980; Locksley, Hepburn, & Ortiz, 1982). However, it has also been shown that individuals place little or no reliance on stereotypes when information about the target is clearly and unambiguously judgment relevant (Fiske & Taylor, 1991). For example, a meta-analysis by Tosi and Einbender (1985) showed that sex bias was greatly reduced when more job-relevant information was provided than when less job-relevant information was provided. In view of these findings, we tested the following hypothesis.

Hypothesis 3: The effects of attractiveness on various job-related outcomes will be stronger when individuals lack job-relevant (individuating) information about the target than when they have such information.

Within- versus between-subjects research designs. Eagly et al. (1991) demonstrated that attractiveness effects were stronger for within-subjects designs than for between-subjects designs. They argued that exposure to multiple individuals of differing levels of attractiveness in within-subjects studies probably induces a perceptual contrast effect in which attractive individuals are seen as more attractive and unattractive ones as less attractive than they would be otherwise; that is, perceptual contrast effects lead to more extreme attractiveness-based judgments. Moreover, Olian, Schwab, and Haberfeld (1988) reported that within-subjects designs resulted in stronger gender discrimination effects than between-subjects designs. Therefore, our meta-analysis tested the following hypothesis:

Hypothesis 4: The effects of attractiveness on various job-related outcomes will be stronger for research using within-subjects designs than for research using between-subjects designs.

Research Questions

Our meta-analysis also provided answers to several research questions. These are considered below.

Type of study participant/type of research setting. A frequent criticism of experimental research concerned with personnel-related decision making is that because such research has relied primarily on the use of college students as participants, its results may not be generalizable to people who actually make personnel decisions in organizational contexts (e.g., Morrow et al., 1990). For example, although research by Bernstein, Hakel, and Harlan (1975) showed that students’ ratings of job applicants were nearly identical to those of professional interviewers, Gordon, Slade, and Schmitt (1986) argued that college students were unacceptable surrogates for real decision makers. However, in view of the fact that the attractiveness bias has been reported in realistic settings with real decision makers (e.g., Frieze et al., 1991; Roszell et al., 1989), it does not appear to be merely an artifact of laboratory-based research using student participants (Stone et al., 1992). Consistent with this view, Olian et al.’s (1988) meta-analysis on gender discrimination in hiring decisions in simulated employment settings found that professional decision makers were as susceptible to gender discrimination as were students.

In view of the above, we have no reason to expect anything other than a positive relationship between attractiveness and various job-related outcomes across two types of study participants: (a) organizational decision makers and (b) student surrogates, that is, students performing in the role of organizational decision makers. However, it may be the case that the strength of this relationship differs somewhat across these two types of research participants. Thus, we explored this issue through meta-analytic means. More specifically, we addressed the following research question:

Research Question 1: Does the strength of the effect of attractiveness on various outcomes differ between organizational decision makers and students operating in the role of organizational decision makers?

It deserves adding that laboratory-based experimental research on attractiveness typically involves the use of student participants, whereas field-based experimental research almost always involves actual personnel specialists. Thus, the answer to the above question also provides suggestive evidence on the strength of the relationship between attractiveness and job-related outcomes across research contexts (i.e., laboratory experiments versus field experiments). However, it is important to recognize that if an effect is found for research setting (i.e., the variable that we were able to code in the meta-analysis), the actual effect is no doubt attributable to the type of research participant (i.e., students versus organizational decision makers).

Sex of target. The available evidence (Eagly et al., 1991; Feingold, 1992) shows that the strength of the attractiveness stereotype does not differ as a function of the sex of targets. Furthermore, Langlois et al. (2000) reported that the strength of attractiveness effects for judgment and treatment was similar for both male and female targets. However, Jackson et al. (1995) found that attractiveness had stronger effects on perceptions of the intellectual competence of men than of women. According to Jackson et al., these inconsistent findings might be due to the fact that Eagly et al. (1991) and Feingold (1992) excluded studies of perceived and actual competence in the occupational domain, a domain stereotypically associated with males, whereas Jackson et al. (1995) included studies that measured intellectual competence in the occupational domain. Therefore, we examined whether there is a sex difference in the strength of the attractiveness bias in organizational settings. More specifically, we addressed the following research question:

Research Question 2: Does the magnitude of the attractiveness bias differ as a function of target sex?

Publication period. The majority of studies on attractiveness were published during the 1970s and 1980s. However, it is not known whether the magnitude of the attractiveness effect has changed over the years. The magnitude of this effect on employment-related decisions might have remained relatively constant over the years because the implicit theories of decision makers about the correlates of attractiveness may have remained relatively constant over time. Alternatively, the magnitude of the effect might have decreased over the years because decision makers might have become aware of the attractiveness bias and manifested a lesser willingness to allow it to serve as a basis for decision making. Therefore, we explored this issue in our meta-analysis. We did so by assessing the extent to which the relationship between attractiveness and various outcomes differs over several 5-year periods (see below). Note that we used 5-year intervals, as opposed to 1-year intervals, because the greater the number of observations considered by estimates of central tendency (i.e., of effect size), the greater the stability of such estimates. The study question that we addressed was:

Research Question 3: Does the magnitude of the attractiveness effect differ across 5-year time intervals?

Type of job-related outcomes. Finally, as mentioned above, Eagly et al. (1991) and Feingold (1992) found that the strength of the physical attractiveness stereotype varied as a function of evaluative beliefs. Relatedly, we explored the issue of whether the effect of the attractiveness bias differed as a function of type of the job-related outcome (e.g., hiring, promotion, performance evaluation). More specifically, our meta-analysis addressed the following research question:

Research Question 4: Does the magnitude of the attractiveness bias effect differ across various job-related outcomes?

Contributions of the Present Meta-Analysis

It deserves stressing that our meta-analytic review is not redundant with meta-analytic reviews conducted by Jackson et al. (1995), and by Langlois et al. (2000), both of which included studies that examined the effects of attractiveness in the occupational domain. More specifically, the meta-analysis of Jackson et al. showed that attractiveness had a strong effect on the perceptions of intellectual competence. Although their measures of intellectual competence included both academic and occupational competence, they failed to compute separate effect size estimates for these two types of competence. Likewise, Langlois et al. (2000) reported that attractiveness had a strong effect for judgments of occupational competence in actual interactions. However, their effect estimates were based on a rather small number (i.e., five) of studies. In contrast, the present meta-analysis used results from a larger number of studies. Furthermore, unlike Jackson et al. (1995) and Langlois et al. (2000), our meta-analysis used a broader set of job-related outcomes, including selection, performance evaluation, and hiring decisions.

Method

Sample of Studies

Two basic procedures were used to obtain the data upon which our meta-analysis was based. First, computer-based searches were conducted using the keyword physical attractiveness, combined with such keywords as selection, evaluation, promotion, management, professional, job applicant, and performance evaluation, in the following computerized databases for the periods noted: PsycINFO (Psychological Abstracts; 1967 to 2000), Sociological Abstracts (1963 to 2000), and ERIC (Educational Resources Information Center; 1966 to 2000). Second, we searched the reference lists of all the primary studies, review articles (e.g., Eagly et al., 1991; Morrow, 1990; Stone et al., 1992), and books (e.g., Bull & Rumsey, 1988; Hatfield & Sprecher, 1986; Jackson, 1992; Langlois, 1986) concerned with attractiveness issues. The initial search produced 76 studies for potential inclusion in the meta-analysis.

In order for studies to be included in our meta-analysis, we used two decision rules: (a) the attractiveness of the target had to be a manipulated variable, and (b) one or more of the study’s dependent variables had to be a rating of the target on outcomes concerned with either access to jobs (e.g., hiring decisions, qualification ratings) or job-related treatment (e.g., promotions, performance evaluation). We used these criteria because we were interested in assessing the causal relationship between physical attractiveness and a variety of job-related outcomes. In view of the same criteria, a number of primary studies were excluded from the meta-analysis because they did not manipulate the physical attractiveness of targets (e.g., Dickey-Bryan, Lautenschlager, Mendoza, & Abrahams, 1986; Raza & Carpenter, 1987; Riggio & Throckmorton, 1988; Roszell et al., 1989; Udry & Eckland, 1984). Furthermore, studies that focused on other aspects of job-related evaluations (e.g., causal attributions; Heilman & Stopeck, 1985b) were eliminated. As a result of using the just-noted selection criteria, the initial pool of 76 primary studies was reduced to 27.

Coding of Study-Related Variables

For each primary study, we coded the following information: (a) sex of target (i.e., male, female, both), (b) sex-type of job (i.e., masculine, feminine, neutral, unknown), (c) the combination of sex of target and sex-type of job (i.e., target/sex types of male/masculine, male/feminine, male/neutral, female/masculine, female/feminine, female/neutral), (d) relevance of job information (i.e., low, high), (e) type of research participant (i.e., college student, professional, both), (f) setting of the experiment (i.e., laboratory, field, both), (g) type of research design (i.e., within-subjects, between-subjects), (h) type of job-related outcome (i.e., suitability ranking, hiring decision, promotion decision, predicted success, suitability rating, employment potential, choice as a business partner, performance evaluation), (i) publication year and publication period (i.e., 1975-1979, 1980-1984, 1985-1989, 1990-1994, 1995-1999), and (j) evidence on the effectiveness of the attractiveness manipulation (i.e., pretest of manipulation and manipulation check, pretest of manipulation and no manipulation check, no clear information on pretest). These variables were coded separately by the first and third authors. The coding procedure revealed that there was nearly complete (about 98%) agreement on virtually all coding. The few instances of disagreement were resolved by discussion.

Two other issues related to the coding of studies deserve consideration. The first has to do with the way we coded the combination of the sex of target and the sex-type of a job. In many studies, information about the sex-type of a job was obvious from the manipulations that were used. When the information about the sex-type of a job was not available, the first and third authors coded it separately. There was complete agreement on the sex-types of jobs.

The second issue concerns the coding of the relevance of job information that was provided to study participants. We coded a study as providing low job-relevant information when participants were given information that was not relevant or useful in making job-related decisions (e.g., an applicant’s hobbies). We coded a study as providing high job-relevant information when participants were provided with information that was relevant in making job-related decisions (e.g., relevant past work experience, interview transcripts, performance reviews, relevant college major). Note that the judgment about the relevance of information was determined by the quality of information provided to participants, not by the amount of information provided.
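By way of illustration only, one coded study record might be represented as in the following sketch (a hypothetical data structure; the field names and example values are assumptions, not the authors' actual coding sheet):

```python
# Hypothetical representation of one coded study record, mirroring the
# coding categories described above; not taken from the article itself.
from dataclasses import dataclass

@dataclass
class StudyCoding:
    sex_of_target: str          # "male", "female", or "both"
    sex_type_of_job: str        # "masculine", "feminine", "neutral", or "unknown"
    job_info_relevance: str     # "low" or "high" (quality, not amount, of information)
    participant_type: str       # "college student", "professional", or "both"
    setting: str                # "laboratory", "field", or "both"
    design: str                 # "within-subjects" or "between-subjects"
    outcome_type: str           # e.g., "hiring decision", "performance evaluation"
    publication_period: str     # "1975-1979", ..., "1995-1999"
    manipulation_evidence: str  # pretest / manipulation-check status

# Example record with assumed values.
example = StudyCoding("female", "masculine", "high", "professional", "field",
                      "between-subjects", "hiring decision", "1985-1989",
                      "pretest of manipulation and manipulation check")
```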

Computation and Analysis of Effect Size Estimates

The initial step in the meta-analysis was to compute g, a standard effect size estimate (Hedges & Olkin, 1985). In this study, g is the difference between the means of physically attractive and less attractive groups on outcomes divided by the relevant denominator for the effect size estimate: For any given primary study, this was either (a) the pooled standard deviation when attractiveness was a between-subjects variable, or (b) the standard deviation of the differences when attractiveness was a within-subjects variable. The sign of the difference between means was positive when attractive targets were rated more positively on job-related outcomes than less attractive targets and negative when less attractive targets were rated more positively than attractive targets on the same outcomes.
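To make the computation concrete, the following is a minimal sketch in Python (not the authors' code; the function names and example values are illustrative assumptions) of the two ways g can be obtained, depending on the study design:

```python
import math

def g_between(mean_attr, mean_less, sd_attr, sd_less, n_attr, n_less):
    """g for a between-subjects study: mean difference over the pooled SD."""
    pooled_sd = math.sqrt(((n_attr - 1) * sd_attr ** 2 + (n_less - 1) * sd_less ** 2)
                          / (n_attr + n_less - 2))
    return (mean_attr - mean_less) / pooled_sd

def g_within(mean_attr, mean_less, sd_of_differences):
    """g for a within-subjects study: mean difference over the SD of the differences."""
    return (mean_attr - mean_less) / sd_of_differences

# A positive g means attractive targets were rated more favourably on the
# job-related outcome than less attractive targets; a negative g means the reverse.
print(round(g_between(4.1, 3.6, 1.2, 1.3, 40, 40), 2))  # hypothetical ratings
```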

Although an effort was made to extract as much information as possible from each primary study, not all the studies reported the information needed for specific meta-analytic comparisons: For example, some studies had targets of only one sex. In addition, other studies reported only the main effects of attractiveness, as opposed to interactive effects of attractiveness, sex, or sex-type of job. Therefore, based on the 27 studies, we were able to compute 62 g estimates. These estimates were based upon (a) means and standard deviations for 41 effects, (b) F statistics for 14 effects, (c) proportions for two effects, (d) t statistics for two effects, (e) significance levels (p values) for two effects, and (f) a chi-square statistic for one effect.

Analysis of effect size estimates. Because the g index tends to overestimate the magnitude of population effect size, especially when samples are small, the gs derived from the primary study data were converted to ds (Hedges & Olkin, 1985). These ds were then combined to estimate both (a) unweighted mean effect size estimates and (b) sample size weighted mean effect size estimates. In addition, a homogeneity statistic, Q (Hedges & Olkin, 1985), was calculated to determine if each set of ds shared a common population effect size, that is, the effect size estimates were homogeneous or consistent across the studies. Q has a distribution that is approximately chi-square with k-1 degrees of freedom, where k is the number of effect size estimates (Hedges & Olkin, 1985).
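A compact sketch of these steps is given below, assuming the standard Hedges and Olkin (1985) formulas; the correction factor, the variance approximation, and the helper names are assumptions for illustration, not code taken from the article:

```python
import math

def g_to_d(g, n1, n2):
    """Correct g for small-sample bias (g overestimates the population effect)."""
    df = n1 + n2 - 2
    return g * (1 - 3 / (4 * df - 1))

def d_variance(d, n1, n2):
    """Approximate sampling variance of d for a two-group comparison."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

def mean_d(ds, ns=None):
    """Unweighted mean d, or sample-size-weighted mean d when ns is given."""
    if ns is None:
        return sum(ds) / len(ds)
    return sum(d * n for d, n in zip(ds, ns)) / sum(ns)

def q_statistic(ds, variances):
    """Homogeneity statistic Q: inverse-variance-weighted squared deviations
    from the weighted mean d, compared to chi-square with k - 1 df."""
    w = [1 / v for v in variances]
    d_bar = sum(wi * di for wi, di in zip(w, ds)) / sum(w)
    return sum(wi * (di - d_bar) ** 2 for wi, di in zip(w, ds))

# Hypothetical example: correct a g of 0.40 from a study with 40 + 40 participants.
d = g_to_d(0.40, 40, 40)
print(round(d, 3), round(d_variance(d, 40, 40), 3))
```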

In cases where the Q statistic suggested a lack of effect size homogeneity, we sought study characteristic correlates (moderators) of the effect size indices (ds). More specifically, a categorical model was used to determine the relation between the study attributes (as categories) and the magnitude of effect size estimates (Hedges & Olkin, 1985). Categorical models provide (a) a between-category effect size estimate that is analogous to a main effect in analysis of variance, and (b) a test of the homogeneity of the effect size estimates within each category (see Hedges & Olkin, 1985, for computational details).

The between-category effect is estimated by Q_B, which has an approximate chi-square distribution with p-1 degrees of freedom, where p is the number of classes. The homogeneity of the effect size estimates within each category (i) is estimated by Q_Wi, which has an approximate chi-square distribution with m-1 degrees of freedom, where m is the number of effect size estimates in the category. Tests of categorical models also provide estimates of the mean weighted effect size and 95% confidence intervals for each category. The latter estimates can be used to determine if within-category effect size estimates differ from zero.
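As a rough illustration of this categorical model (a sketch under the usual Hedges & Olkin assumptions; the function and variable names are not from the article), Q_B, the per-category Q_Wi values, and the category confidence intervals can be computed as follows:

```python
import math
from collections import defaultdict

def categorical_model(ds, variances, categories):
    """Return Q_B plus, for each category, its weighted mean d, Q_Wi,
    and a 95% confidence interval around the category mean."""
    weights = [1 / v for v in variances]
    grand_mean = sum(w * d for w, d in zip(weights, ds)) / sum(weights)

    groups = defaultdict(list)
    for d, w, c in zip(ds, weights, categories):
        groups[c].append((d, w))

    q_between, per_category = 0.0, {}
    for c, pairs in groups.items():
        w_sum = sum(w for _, w in pairs)
        mean_c = sum(d * w for d, w in pairs) / w_sum
        q_within = sum(w * (d - mean_c) ** 2 for d, w in pairs)
        half_width = 1.96 * math.sqrt(1 / w_sum)  # 95% CI half-width
        per_category[c] = (mean_c, q_within, (mean_c - half_width, mean_c + half_width))
        q_between += w_sum * (mean_c - grand_mean) ** 2  # analogous to a main effect
    return q_between, per_category

# Hypothetical example: three within-subjects and two between-subjects effects.
qb, cats = categorical_model([0.5, 0.4, 0.45, 0.2, 0.3],
                             [0.02, 0.03, 0.025, 0.04, 0.05],
                             ["within", "within", "within", "between", "between"])
```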

Other analysis-related issues. As noted above, our meta-analysis used information on 62 effect sizes derived from 27 articles. Thus, two issues merit consideration. First, when effect sizes for two or more dependent variables from a given study are used in a meta-analysis, to the degree that the dependent variables (e.g., job-related outcomes) are correlated (nonindependent), the use of the multiple effect size estimates (as opposed to a composite of the variables) will result in an underestimate of the relationship between an independent variable (e.g., attractiveness) and the composite dependent variable (see Rosenthal & Rubin, 1986). Note that this issue is not relevant to the present meta-analysis: Although it considered several different types of outcomes, there was no instance in which we used effect size estimates for more than a single outcome from any given primary study.

The second issue that deserves consideration is the possible difference in sample-size weighted versus unweighted effect size estimates. To the degree that these estimates differ from one another, concerns might arise about (a) the appropriateness of the estimate of the average effect size for the set of studies and (b) the statistical test for significance of the average effect size (Hunter, Schmidt, & Jackson, 1982). For example, if a large number of large effect size estimates were derived from studies that had small sample sizes, the average unweighted effect size estimate for the set of studies would be upwardly biased. Thus, one strategy for assessing whether this is a problem is to compare the sample-size weighted effect size estimate against its unweighted counterpart.
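A simple way to run this check is sketched below; the values and the function name are hypothetical, used only to show how a weighted-versus-unweighted comparison would flag the problem just described:

```python
def weighted_vs_unweighted(ds, ns):
    """Compare the unweighted mean d with the sample-size-weighted mean d."""
    unweighted = sum(ds) / len(ds)
    weighted = sum(d * n for d, n in zip(ds, ns)) / sum(ns)
    return unweighted, weighted

# Hypothetical example: two small studies with large effects pull the
# unweighted mean well above the sample-size-weighted mean.
print(weighted_vs_unweighted([0.90, 0.80, 0.30, 0.25], [20, 25, 300, 400]))
```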

Results

Tests of Hypotheses

Overall effect of attractiveness. The d values for each of the 62 effect size estimates derived from the primary studies are listed in Table 1. Table 2 summarizes the results for the effects of attractiveness on the job-related outcomes.

As can be seen in Table 1, for the 62 effect size estimates there was a positive relationship between attractiveness and the measured outcomes in 55 of 62 instances. In addition, as the results in Table 2 reveal, for the set of 62 effects the weighted and the unweighted mean effect size estimates were both .37: Attractive individuals fared better than their less attractive counterparts in terms of a variety of job-related outcomes. Note, moreover, that the 95% confidence interval around the d of .37 extended from .32 to .41. The fact that this interval does not contain the value of 0 indicates that the overall attractiveness effect is not chance based. These findings provide strong support for Hypothesis 1.

TABLE 1
Effect Size Estimates (d) for Each Study

TABLE 2
Overall Effect of Attractiveness

As shown in Table 2, the Q statistic for the set of 62 effect size estimates (176.02, p < .05) indicated that the effect sizes were not homogeneous, and eight outlying effect size estimates were identified. A meta-analysis based upon the exclusion of the eight outliers revealed a mean sample-size weighted effect size estimate of .34, and a 95% confidence interval extending from .29 to .39 (see Table 2). However, in view of the facts that (a) the mean weighted effect sizes for the analysis involving 62 effect sizes (mean d = .37) and the analysis excluding the outliers (mean d = .34) did not differ greatly from one another, and (b) the confidence intervals for the same two sets of effect sizes overlapped greatly, we concluded that the weighted mean effect size for attractiveness falls within an interval that extends from .34 to .37.

As can be seen in Table 3, the strength of the attractiveness bias did not differ as a function of the combination of sex of target and sex-type of job, Q = 3.56, p > .05. Our results show that physical attractiveness is always an asset for both male and female targets, regardless of the sex-type of the job for which they applied or which they held. The effect size estimates for attractiveness were all positive for both male and female targets, falling between .30 and .45. Thus, the present study’s results failed to provide support for Hypothesis 2.

Job-relevant information. The strength of the attractiveness effect did not vary as a function of the presence of job-relevant information, Q = 3.49, p = .06. Failing to provide support for Hypothesis 3, the mean weighted effect size estimates for the low job-relevant information (d = .44) and the high job-relevant information (d = .34) conditions did not differ significantly. Note, however, that even though the difference in effect sizes is not statistically significant, the pattern of means is consistent with Hypothesis 3.

TABLE 3
The Attractiveness Bias and Primary Study Attributes

Type of research design. Table 3 shows the results of the analyses concerned with the moderating effects of type of research design. Consistent with Hypothesis 4, the attractiveness effect varied as a function of research design: The mean effect size estimate was larger for within-subjects designs (d = .40) than for between-subjects designs (d = .26), Q = 5.82, p < .05.

Type of study participant. A test for the moderating effect of type of study participant (i.e., student vs. professional) on mean effect size estimates showed that this variable had no such effect. As can be seen in Table 3, mean weighted effect size estimates did not differ meaningfully between students (d = .40) and professionals (d = .31), Q = 3.88, p > .05. These results provide a clear answer to Research Question 1.

Note, moreover, that, because all of the experiments that involved college students were conducted in laboratory settings, and all of the experiments that involved professionals were conducted in field settings, the just-noted results also can be interpreted in terms of research settings: More specifically, the magnitude of the attractiveness bias was similar for experiments in laboratory and field settings.

Sex of target. The mean effect size estimate for studies dealing with male targets (d = .40) did not differ from the mean effect size estimate for studies dealing with female targets (d = .32), Q = 2.80, p > .05. These results afford an unequivocal answer to Research Question 2: Effect sizes do not differ between male and female targets.

Publication period. The strength of the attractiveness bias differed as a function of the time interval in which studies were published, Q = 23.31, p < .05.

Type of job-related outcome. The strength of the attractiveness effect varied as a function of type of job-related outcome, Q = 24.05, p < .05.

Effectiveness of attractiveness manipulation. As seen in Table 3, the strength of the attractiveness effect did not differ between studies that did and did not check the effectiveness of their attractiveness manipulations, Q = 4.54, p > .05.
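
Each of the moderator tests reported in this section compares subgroup mean effect sizes with a between-groups Q statistic. A minimal sketch of that comparison, using invented effect sizes split by a hypothetical moderator (research design), follows; the grouping and the numbers are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical effect sizes and sample sizes, grouped by a categorical moderator
groups = {
    "within_subjects":  (np.array([0.45, 0.52, 0.41]), np.array([120, 60, 95])),
    "between_subjects": (np.array([0.30, 0.28]),       np.array([80, 150])),
}

def pooled(d, n):
    """Return (total weight, weighted mean d) for one subgroup."""
    var_d = 4.0 / n + d**2 / (2.0 * n)      # approximate sampling variance of d
    w = 1.0 / var_d
    return w.sum(), (w * d).sum() / w.sum()

w_j, dbar_j = map(np.array, zip(*(pooled(d, n) for d, n in groups.values())))
d_grand = (w_j * dbar_j).sum() / w_j.sum()  # grand weighted mean across subgroups

Q_between = (w_j * (dbar_j - d_grand) ** 2).sum()
p = chi2.sf(Q_between, df=len(groups) - 1)  # non-significant p suggests the moderator does not matter
print(f"Q_between = {Q_between:.2f}, p = {p:.3f}")
```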

Discussion

Support for the Theoretical Perspectives

Two theoretical perspectives were used to explain relationships between attractiveness and various job-related outcomes. Implicit personality theory predicted that attractive individuals would be judged and treated more positively than less attractive individuals on job-related outcomes. The lack of fit model predicted that attractiveness would interact with sex of individuals and sex-type of job. Results of the present meta-analytic study provided support for the prediction of implicit personality theory but failed to support the prediction of the lack of fit model. Consistent with implicit personality theory, the results show clearly that attractive individuals fare better than their less attractive counterparts in terms of a variety of job-related outcomes. The mean weighted effect size estimate (d) for attractiveness was .37 for the 62 effect size estimates.

The results of our study failed to support the prediction derived from the lack of fit model (Heilman, 1983). The mean effect size estimates were all positive, regardless of the sex of job applicants or employees and the sex-type of the job. Thus, our results afford no support for the “beauty-is-beastly” perspective: Physical attractiveness is always an asset for individuals.

Effects of Job-Relevant Information

Interestingly, our meta-analysis failed to show that the attractiveness bias is stronger when decision makers have less job-relevant information about the target than when they have more such information. This finding is inconsistent with the above-described information usage models of Fiske and Neuberg (1990) and Brewer (1988). Both models suggest that perceivers (e.g., decision makers) will be less influenced by job-irrelevant factors (i.e., the physical attractiveness of job applicants or job incumbents) when they have other (i.e., individuating) information about the target.

It deserves adding that even when decision makers have job-relevant information about targets, they may still use physical attractiveness information in making decisions. Attesting to this is the fact that even though many of the studies included in the present meta-analysis provided a large amount of job-relevant information about targets (e.g., relevant past work experience, relevant college major, interview transcripts, performance reviews), this information did not eliminate the influence of attractiveness on the impressions formed about the targets.

As noted above, several researchers (e.g., Morrow, 1990; Stone et al., 1992) have argued that although physical attractiveness might not be the most important determinant of employment-related decisions, it might be a crucial deciding factor when decision makers are faced with the difficult task of either (a) selecting one applicant among several who possess similar qualifications for a job, or (b) differentially rewarding employees who have similar records of job performance. The results of the present meta-analysis provide considerable support for this argument; that is, the findings indicate that attractiveness can have a nontrivial, positive impact on individuals’ job-related outcomes, even when job-relevant information about them is available to decision makers.

In spite of this study’s failure to find a moderating effect of individuating information on the relationship between attractiveness and various job-related outcomes, we believe that this type of information can reduce the degree of reliance that decision makers place on attractiveness. Moreover, we suspect that our failure to find a moderating effect may have been attributable to two factors. First, our study may not have had enough statistical power to detect the effect. Second, the rather crude way in which the individuating information variable was operationalized in our study may have led to the failure to find such an effect.

Between- Versus Within-Subjects Designs

Our meta-analysis showed that the effect of attractiveness may be especially pronounced in research calling for evaluators to sequentially observe and evaluate several individuals who differ in terms of attractiveness (e.g., as is true of research using within-subjects designs). Under such conditions, differences in attractiveness among targets are likely to be more salient than when only one target is observed and evaluated (e.g., as is true of research using between-subjects designs). The same results are consistent with those of studies by Eagly et al. (1991) and Olian et al. (1988). However, similar to a note of caution advanced by Olian et al. (1988), we recognize that this study’s findings may not generalize to actual employment situations. The reason for this is that the studies considered by our meta-analysis involved research in which participants made judgments about hypothetical, as opposed to actual, job applicants or incumbents. Nevertheless, it deserves noting that the strategy followed in research using within-subjects designs more closely reflects what occurs in “real world” selection contexts than that followed in research using between-subjects designs (e.g., Olian et al., 1988). That is, organizational decision makers typically evaluate two or more job applicants or job incumbents within a relatively short time interval. Thus, we believe that the closer in time two or more applicants or incumbents are evaluated by raters (e.g., personnel interviewers), the greater will be the bias in ratings attributable to differences in attractiveness among them.

Type of Evaluator

Our findings also indicate that personnel professionals (e.g., recruiters, personnel consultants) are as susceptible as college students to biasing their decisions about targets on the basis of target attractiveness. In view of this, personnel professionals should be made aware that the attractiveness of job applicants and job incumbents may bias employment-related decisions. In addition, decision-making procedures should be structured so as to avert or lessen the influence of this bias.

Attractiveness Effects Across Target Sex

The results of our meta-analysis show that attractiveness is just as important for male as for female targets with respect to various job-related outcomes. It merits adding that the same results are consistent with the findings of three previous meta-analyses (i.e., Eagly et al., 1991; Feingold, 1992; Langlois et al., 2000). All three failed to find sex-based differences in perceptions of attractive and less attractive targets across several evaluative dimensions. In contrast, Jackson et al. (1995) found stronger attractiveness effects for men than for women in perceptions of their intellectual competence, and Feingold (1992) found stronger attractiveness effects for women than for men in attributions concerning sexual warmth. Overall, these results indicate that sex differences in attractiveness effects might be domain specific.

Time Period of Attractiveness Research

As noted above, our meta-analysis showed differences in the magnitude of the attractiveness effect as a function of the period during which studies were published: The strength of the bias during the 1995-1999 period was smaller than the strength of the bias for the 1975-1979 and 1980-1984 periods (p < .05).

Type of Outcome

Although the results of our meta-analysis indicate that the effect of attractiveness varies as a function of type of job- related outcomes, these same results should be interpreted with caution. The reason for this is that there were a relatively small number of effect sizes for some of the job-related outcomes (e.g., employment potential, choice as a business partner).

Magnitude of Effect Size Issues

Finally, the findings of our meta-analysis expand upon those of (a) Eagly et al. (1991) and Feingold (1992), who found small to moderate sized effects of attractiveness on a number of evaluative dimensions (e.g., social competence, intellectual competence, concern for others, sexual warmth), (b) Ritts et al. (1992), who found a small sized effect of student attractiveness on teachers’ judgments of them on such dimensions as intelligence and future academic potential, and (c) Jackson et al. (1995), who found moderate sized effects of attractiveness on intellectual competence. Corroborating the results of Langlois et al. (2000), our meta-analysis

Israeli Woman Swallows Cockroach, Fork

It’s the bizarre, nightmarish stuff of a child’s nursery rhyme: An Israeli woman swallowed a cockroach and right after it, down went a fork she used to try to fish the critter out of her throat.

A winged cockroach jumped into the woman’s mouth as she was cleaning her home in a village in northern Israel this week. And as the story goes, the 32-year-old woman tried to scoop the bug out with a fork but swallowed it as well.

“It’s a bit of a strange story,” said Dr. Nikola Adid, who operated on the woman on Tuesday to remove the fork from her stomach – the bug was already digested. “This is the first time I’ve ever encountered anything like this. None of my medical colleagues in this country have heard of anything similar either.”

An X-ray showed the fork, lodged sideways in her stomach.

Adid, a surgeon at the Poria Hospital in Tiberias, on the Sea of Galilee, removed the fork with laparoscopic surgery, a minimally invasive procedure performed through a tiny incision in the patient’s abdomen.

The woman is recovering well, Adid said – better off than the old woman of the children’s rhyme:

“There was an old woman who swallowed a fly. I don’t know why she swallowed a fly. Perhaps she’ll die.”

A guide to curricular integration

Lessons can become more meaningful to students and save teachers valuable time when subjects are integrated properly, not superficially.

A school’s curriculum can appear unrelated, fragmented, or somewhat disjointed if it is not designed with an end in mind. This fragmentation or disjointedness often affects students and their views of the experiences they are given in school (Beane 1991). Various curriculum-integration techniques, however, can be used to help make the big picture more understandable to students; and these have the added benefit of allowing teachers to focus better on teaching and student learning.

What Does Integrating Curriculum Mean?

Jacqueline Anglin’s (1999, 3) insight that “integrating curriculum correctly requires more than combining two subjects, or turn teaching” was right on track. The notion of integrating a curriculum is more than connecting pieces so that students can see the bigger design. In effective curriculum-integration models, knowledge is meaningfully related and connects in such a way that it is relevant to other areas of learning as well as real life. Of course, sometimes integration is not the best approach to teaching. Integration just for the sake of integration can even interfere with learning if the constructed activities are not meaningful.

To integrate a curriculum is to combine subjects to meet objectives across the curriculum, not just objectives pertaining to one subject. For example, while students study Indians in social studies, reading could be integrated by including both fiction and nonfiction stories about Indians. Viewing and recreating Indian art could meet art objectives. Charting the locations of various tribes and calculating mileage between different tribes or distances tribes traveled could meet geography and math objectives.

An interdisciplinary or integrated curriculum allows students to make connections among various subjects, while also helping to solve the teacher’s dilemma of having so much to accomplish in a limited time. An integrated curriculum, by nature, ties an individual subject to the circle of educational experiences and learning, thus reducing the need for teachers formally to make every lesson a connection to life. The saved time allows teachers more opportunities to accomplish tasks on their ever-growing “required” lists.

Models of Integration

The current trend to implement an integrated curriculum is not a new idea. Vars (1991) traced the evolving concept of the core curriculum back to Herbert Spencer’s writings in the 1800s. By the late 1930s and early ’40s, the term “core curriculum” had become part of the literature in various state and national curriculum-reform efforts, most significantly the progressive education movement. In 1942, the concept of core and integrated curriculum was being tested in the famous Eight-Year Study of the Progressive Education Association. By the late 1980s, more than 80 normative or comparative studies had been conducted on the effectiveness of integration (National Association for Core Curriculum 1984). These studies found that programs using integration or an interdisciplinary curriculum almost always produced equivalent or even better scores on standardized achievement tests than those where students were taught through the traditional discipline-oriented format.

Today, these are some of the more popular curricular models that have evolved and currently are being used:

* The connected integration model does not integrate various subjects, but focuses on integrating skills or concepts within a subject. For example, a science teacher can relate a geology unit to an astronomy unit by emphasizing that each has an evolutionary nature (Fogarty 1991).

* The nested integration model focuses on natural combinations. For instance, a lesson on the circulatory system can integrate the concept of systems as well as demonstrate “cause and effect” on specific understandings of the circulatory system (Fogarty 1991).

* In the sequenced model, units are taught separately, but are designed to provide a broad framework for related concepts. For example, while reading A Taste of Blackberries (Smith 1992), a parallel lesson on bees could be taught in science.

* The shared model looks for overlapping concepts and involves coordinated planning between two teachers of different subjects. A literature teacher and a history teacher, for example, may team up to teach an historical perspective of the concepts of segregation and desegregation by reading Roll of Thunder, Hear My Cry (Taylor 2001).

* The webbed model generally uses a theme to connect all subject areas. If the theme were Christmas, for instance, literature classes might read A Christmas Carol (Dickens 2001). In math, students could calculate the costs of their Christmas lists. Social studies classes might research Christmas in other countries. In language arts, students could write about their favorite Christmas. In science, lessons could focus on weather or flying machines.

* The threaded model “threads” thinking, social, or study skills to connect learning across the curriculum. For example, sequencing is a skill taught primarily in reading, but can be threaded into the other subjects. In social studies, students could put in order the voyages of Christopher Columbus and the events leading up to them. In math, patterns of numbers could be explored. In science, the steps of succession of a dying or dead forest could be explored. And in health, students could study the steps in digesting food.

* The integrated model blends the four major disciplines by finding concepts or skills that overlap. The most popular example of this model is the whole-language approach that is now being implemented in many elementary schools. This method blends the skills of reading, writing, speaking, and listening using literature as a theme.

* The immersed model advocates that integration take place within the learner with little or no outside help. For example, a student who has a love for horses reads about horses, writes about them, draws pictures of them, and longs to learn more about them and possibly become a horse trainer or veterinarian.

* The networked model allows for exploration, experimentation, and participation. A student’s fascination with the solar system and space travel, for instance, directs his or her reading choices or television viewing. Teachers or family members cognizant of this child’s interest encourage him or her by allowing the student to go to space camp.

Robin Fogarty (1991, 61-64) made a wonderful analogy of these models by comparing them to visual devices:

The connected model of the integrated curriculum is the view through an opera glass, providing a close-up of the details, subtleties, and inter-connections within each subject area. . . . The nested model views the curriculum through three-dimensional glasses, targeting multiple dimensions of a lesson. . . . The sequenced model views the curriculum through eyeglasses: the lenses are separate but connected by a common frame. . . . The shared model views the curriculum through binoculars, bringing two distinct disciplines together into a single focused image. . . . The webbed model views the curriculum through a telescope, capturing an entire constellation of disciplines at once. . . . The threaded model views the curriculum through a big magnifying glass: the ‘big ideas’ are enlarged throughout all content with a metacurricular approach. . . . The integrated model views the curriculum through a kaleidoscope: interdisciplinary topics are rearranged around overlapping concepts and emergent patterns and designs. . . . The immersed model views the curriculum through a microscope. It filters all content through the lens of interest and expertise. . . . The networked model views the curriculum through a prism, creating multiple dimensions and directions of focus.

Planning for Curriculum Integration

Integrating the curriculum of a school takes planning. Jacobs (1991) developed a four-phase plan that can be accomplished in three years:

* Phase I (six months to one year) is research. Internal research is conducted to plot the units of study taught on a monthly basis: to find out when students are studying certain subject matter, to reduce repetition of material from year to year, and to identify units of study that lend themselves to an interdisciplinary approach. Staff members conduct external research by attending conferences, making onsite visits, or arranging in-service activities.

* Phase II (two to four months) is development of a proposal. Potential areas for interdisciplinary units are assessed, and an existing unit of study is upgraded to include integration of various subjects. On completion of the proposal and its review at higher levels, classroom implementation of a pilot program may follow.

* Phase III (two to six weeks) is implementation of the pilot program. This phase includes assessment by the teaching staff involved in the pilot. The program is monitored and evaluated, and feedback is given.

* Phase IV (third year of plan) is adoption of the program based on the feedback and evaluation from the pilot phase. Adding the program to the existing curriculum is often constrained by time; replacing the curriculum with the new one is much more common. For example, English, social studies, and art are replaced by humanities.

Planning for curriculum integration on a daily basis for individual classrooms is just as important as planning integration at the system level. To assist teachers in curriculum integration, Palmer (1991, 58) suggested the use of a “planning wheel”-a device that “allows for teachers to focus on a specific subject area while identifying appropriate connections with other content.” Palmer’s steps for implementing the planning wheel follow:

* Step 1: Identify common goals, objectives, themes, and skills among the different subjects.

* Step 2: Develop a sample planning wheel to illustrate the kinds of connections to be made. The focus of the unit, such as nutrition (in a health class), is listed in the middle of the wheel. On the outside of the wheel are other subjects, and under each are listed activities related to the focus-for example, under math, calculating calories for dietary planning; under language arts, writing about foods from other cultures; under music, singing songs about food; under physical education, determining correct amounts of exercise to burn calories.

* Step 3: Planners of curriculum use the wheel as an aid to organizing and planning new curricula.

* Step 4: In-service activities are held to train teachers on how to implement the proposed integrated curriculum.

Will Integrating Make a Difference?

Integration may not work, especially when curriculum integration is implemented merely for the sake of integration. In fact, integration can be counterproductive when activities originally intended to combine subject matter and objectives in a meaningful way lack educational value, or meet objectives in one subject while failing to satisfy objective requirements in the other subjects (Brophy and Alleman 1991). Activities such as alphabetizing state capitals or counting states in a geographical region are not valuable lessons in the area of social studies. These activities would be done just for the sake of integration and are more or less busywork (Alleman and Brophy 1993).

Not only are some activities meaningless, but they also may be time-consuming or costly-for example, carving pumpkins to look like U.S. presidents. Too often, teachers integrate superficially with activities devoid of curricular value. One teacher attempted to integrate math and social studies by having students fill a matrix with the actual numbers of the constitutional amendments, thinking this represented a math objective because the students were “using” numbers (Alleman and Brophy 1993).

A Design for Success

To make integration meaningful and successful in a classroom, activities must be assessed by their educational value and meet curricular objectives in two or more subject areas. When implemented properly, not superficially, integration can be a more meaningful approach to learning for students, as well as a time-saver for teachers.

Brophy (Alleman and Brophy 1993) suggested testing each proposed activity with the following questions before integrating it across the curriculum:

* Does the activity have a significant educational goal as its primary focus?

* Would this activity be desirable even if it did not feature across-subjects integration?

* Would an outsider clearly recognize the activity as relating to the subject?

* Does the activity allow students to develop meaningfully or apply authentically important content?

* Does it involve authentic application of the skill from other disciplines?

* If the activity is structured properly, will students be able to understand and explain its educational purposes?

* If students engage in the activity with those purposes in mind, will they be likely to accomplish the purposes as a result?

Some of the most famous and successful examples of curriculum integration come from Wigginton’s Foxfire Experience (1985). In attempting to reach a group of students who were basically failing in school, Wigginton searched for a way to teach that would motivate students and give them a meaningful educational experience. He coordinated students to develop the Foxfire publication, letting them write, edit, and even negotiate book contracts. He obviously achieved the motivation he desired, but time constraints and particular curricular requirements were constant hindrances.

Wigginton (1991, 49) wrote:

Keeping the curriculum requirements in mind, I initiated a unit on informal letter writing. If I could just figure out ways of this sort to make the curriculum work for the magazine instead of against it, I could kill two birds with one stone. I could fulfill the state requirements and at the same time give those requirements an added dimension of reality for the students that would make their internalization and mastery far more likely. . . . Classes had come together as one. Teaching was beginning to make sense.

If integrated teaching can help a school’s curriculum “make sense” to the teacher, then consider how much more sense it can make for the student if it lives up to the ideals that form a basis for meaningful educational experiences.

References

Alleman, J., and J. Brophy. 1993. Is curriculum integration a boon or a threat to social studies? Elementary education. Social Education 57 (6): 287-91.

Anglin, J. M. 1999. Develop your own philosophy. New Teacher Advocate 7(1): 3.

Beane, J. 1991. The middle school: The natural home of integrated curriculum. Educational Leadership 49(2): 9-13.

Brophy, J., and J. Alleman. 1991. A caveat: Curriculum integration isn’t always a good idea. Educational Leadership 49(2): 66.

Dickens, C. 2001. A Christmas carol. Foster City, Calif.: Hungry Minds.

Fogarty, R. 1991. Ten ways to integrate curriculum. Educational Leadership 49(2): 61-65.

Jacobs, H. H. 1991. Planning for curriculum integration. Educational Leadership 49(2): 27-28.

National Association for Core Curriculum. 1984. Bibliography of research on the effectiveness of block-time, core, and interdisciplinary team teaching programs. Kent, Ohio: NACC.

Palmer, J. M. 1991. Planning wheels turn curriculum around. Educational Leadership 49(2): 57-60.

Smith, D. B. 1992. A taste of blackberries. New York: HarperTrophy.

Taylor, M. D. 2001. Roll of thunder, hear my cry. New York: Phyllis Fogelman Books.

Vars, G. F. 1991. Integrated curriculum in historical perspective. Educational Leadership 49(2): 14-15.

Wigginton, E. 1985. Sometimes a shining moment: The Foxfire experience. Garden City, N.Y.: Anchor Press/Doubleday.

Robert C. Morris is Professor of Curriculum Studies, State University of West Georgia in Carrollton. He is Counselor for the Omicron Omega Chapter of Kappa Delta Pi. His current research interests relate to leadership activities for curricular and instructional change.

Copyright Kappa Delta Pi Summer 2003

Record Six Houston Pitchers No-Hit Yanks

The Houston Astros patched together a bizarre performance, with a record six pitchers combining on the first no-hitter against the New York Yankees in 45 years.

Closer Billy Wagner stepped on first base for the final out of Wednesday night’s 8-0 win, and pumped his fist. While some Astros ran from the dugout to celebrate, others straggled onto the field.

“What’s amazing is that most of our team didn’t know about it,” Wagner said.

Left fielder Lance Berkman said second baseman Jeff Kent acted puzzled by the hearty high-fives.

“He was like, ‘What’s going on?'” Berkman said. “I said, ‘We no-hit them.'”

The Astros appeared to be in trouble when ace Roy Oswalt was forced to leave in the second inning because of a strained right groin.

But relievers Pete Munro, Kirk Saarloos, Brad Lidge, Octavio Dotel and Wagner completed the odd gem.

It was the most pitchers ever to combine on a no-hitter in the majors – the previous record of four had been done twice.

The Yankees had gone 6,980 games – the longest streak in big league history – without being no-hit, since Hoyt Wilhelm’s 1-0 victory for Baltimore on Sept. 20, 1958.

The last time New York had been held hitless at Yankee Stadium was on Aug. 25, 1952, by Detroit’s Virgil Trucks.

“This is one of the worst games I’ve ever been involved in,” Yankees manager Joe Torre said. “It was a total, inexcusable performance.”

“I can’t find a reason for what happened today,” he said. “The whole game stunk.”

In other interleague games, it was: Toronto 8, Pittsburgh 5; Boston 13, St. Louis 1; Chicago Cubs 7, Baltimore 6; Cleveland 3, San Diego 2; Los Angeles 3, Detroit 1; Cincinnati 7, Tampa Bay 6; Minnesota 7, Colorado 4; New York Mets 8, Texas 2; Arizona 4, Kansas City 3; San Francisco 11, Chicago White Sox 4; Montreal 3, Seattle 1; Anaheim 5, Philadelphia 3; and Atlanta 11, Oakland 6.

In the lone NL game, Florida beat Milwaukee 6-5.

The Astros came into Yankee Stadium this week eager to soak up all the history of the ballpark.

Wagner talked about being in “awe” of the tradition, Lidge studied the black-and-white photos of famous Yankees outside the New York clubhouse and many other players toured Monument Park.

After Oswalt was injured, Munro pitched 2 2-3 innings, Saarloos 1 1-3 innings and Lidge (4-0) went two innings. Dotel worked the eighth, striking out four in an inning for only the 44th time in big league history.

Dotel and Wagner combined to strike out eight straight hitters before Hideki Matsui grounded out to end it.

“First appearance for most of us in Yankee Stadium,” Wagner said. “What better place could there be?”

Yankees’ fans stood and applauded as the Astros closed it out.

“One guy usually goes out there and does it,” Astros manager Jimy Williams said. “Maybe two, but not six.”

Vida Blue, Glenn Abbott, Paul Lindblad and Rollie Fingers combined for a no-hitter for Oakland against California on Sept. 28, 1975.

Bob Milacki, Mike Flanagan, Mark Williamson and Gregg Olson combined for a no-hitter for Baltimore at Oakland on July 13, 1991.

The closest New York came to a hit was in the fifth when Alfonso Soriano hit a shallow fly ball that Berkman caught with a tumble.

“It wasn’t that close,” Berkman said. “It probably looked more spectacular than it really was.”

This was the second no-hitter in the majors this season. Kevin Millwood pitched one for Philadelphia on April 27 against San Francisco.

And it came on the 65th anniversary of Johnny Vander Meer’s first no-hitter. The only pitcher to throw consecutive no-hitters, he started that streak on June 11, 1938, for Cincinnati against the Boston Braves.

Overall, it was the third no-hitter in a game between AL and NL teams, and all of them have been at Yankee Stadium. The other two were perfect games – Don Larsen did it against the Brooklyn Dodgers in the 1956 World Series and David Cone did it against Montreal on July 18, 1999.

Back on July 12, 1990, Melido Perez of the Chicago White Sox held New York hitless in a game shortened to six innings by rain at Yankee Stadium. Because the game did not go nine innings, Perez is not officially credited with a no-hitter.

Blue Jays 8, Pirates 5

Roy Halladay won his ninth straight start, breaking Roger Clemens’ team record, and Carlos Delgado hit his AL-leading 21st homer for Toronto at SkyDome.

Halladay (9-2) allowed one run on eight hits in eight innings. He struck out nine and walked one. The 26-year-old right-hander hasn’t lost since April 15 against the Yankees – a span of 12 starts.

Pittsburgh’s Aramis Ramirez extended his career-high hitting streak to 22 games.

Braves 11, Athletics 6

Javy Lopez hit one of five Atlanta homers off Ted Lilly and drove in four runs at Oakland.

Rafael Furcal, Marcus Giles, Andruw Jones and Vinny Castilla also connected for the Braves, who lead the majors with 104 homers.

Dodgers 3, Tigers 1

Kevin Brown earned his NL-leading ninth victory and Fred McGriff’s go-ahead single moved him into a tie with Joe DiMaggio on the career RBIs list as the Dodgers won in Detroit.

McGriff knocked in Brian Jordan in the fourth inning to give Los Angeles a 2-1 lead. It was McGriff’s 1,537th RBI, tying DiMaggio for 36th place.

Brown (9-1) broke a tie with St. Louis’ Woody Williams and Colorado’s Shawn Chacon for the most wins in the NL.

Red Sox 13, Cardinals 1

Pedro Martinez pitched three solid innings in his return from the disabled list for Boston, which had a season-high 19 hits.

Martinez left to a standing ovation at Fenway Park after throwing 47 pitches as the Red Sox eased him back into action. He went on the disabled list May 25 with an inflamed tendon and strained muscle high on his right side.

The Red Sox ace struck out three and allowed two hits and no walks.

Indians 3, Padres 2

C.C. Sabathia took a shutout into the eighth inning as Cleveland won for the 10th time in its last 12 home games.

Sabathia blanked the visiting Padres on nine hits for 7 2-3 innings before giving up Brian Buchanan’s two-out, two-run homer.

Reds 7, Devil Rays 6

Kelly Stinnett hit a grand slam and Aaron Boone snapped a ninth-inning tie with an RBI single as Cincinnati handed Tampa Bay its season-high sixth straight loss.

Jose Guillen started the winning rally in the top of the ninth with a one-out single off Jesus Colome. He moved to second on a wild pitch and scored on Boone’s hit after Austin Kearns was walked intentionally.

Mets 8, Rangers 2

Cliff Floyd homered and drove in five runs for New York and Jae Seo pitched seven effective innings at Texas.

Floyd, who finished 3-for-4, put the Mets ahead to stay with a two-run single in the first. He hit his 13th homer in the seventh.

Giants 11, White Sox 4

Rookie Jesse Foppert pitched one-hit ball into the eighth inning, and Pedro Feliz hit a grand slam as San Francisco won at Chicago.

Barry Bonds added a two-run homer, the 630th of his career.

Twins 7, Rockies 4

Kyle Lohse allowed one run over six innings and Corey Koskie had four of Minnesota’s 15 hits at the Metrodome.

Colorado’s Aaron Cook fell to 0-6 on the road.

Diamondbacks 4, Royals 3

Arizona rookie Andrew Good won his third straight start, allowing just two unearned runs in six innings at Kansas City.

Good kept his composure despite the Diamondbacks making three errors in the first three innings.

Cubs 7, Orioles 6

Chicago won at Camden Yards without Sammy Sosa, who began a seven-game suspension for using a corked bat.

Matt Clement has won two straight starts after going six in a row without a victory.

Melvin Mora extended his career-best hitting streak to 22 games for Baltimore.

Expos 3, Mariners 1

Livan Hernandez pitched seven strong innings as Montreal won its sixth straight game.

Seattle lost its second straight home game after returning from an 11-1 road trip.

Angels 5, Phillies 3

Bengie Molina hit a two-run, go-ahead single in the sixth inning as Anaheim beat visiting Philadelphia.

Marlins 6, Brewers 5

Rookie Dontrelle Willis (5-1) won his fourth straight start and Luis Castillo and Derrek Lee each homered for Florida.

Down 6-4, Milwaukee loaded the bases with one out in the bottom of the ninth against reliever Braden Looper, but was able to score just one run.

Astronomers Puzzled over Comet LINEAR’s Missing Pieces

Hubble Space Telescope — Astronomers analyzing debris from a comet that broke apart last summer spied pieces as small as smoke-sized particles and as large as football-field-sized fragments. But it’s the material they didn’t see that has aroused their curiosity.

Tracking the doomed comet, named C/1999 S4 (LINEAR), NASA’s Hubble Space Telescope’s Wide Field and Planetary Camera 2 found tiny particles that made up the 62,000-mile-long (100,000-kilometer-long) dust tail and 16 large fragments, some as wide as 330 feet (100 meters).

Hubble detected the small particles in the dust tail because, together, they occupy a large surface area, which makes them stand out in reflected sunlight. However, the estimated mass of the observed debris doesn’t match up to the comet’s bulk before it cracked up.

“The mass of the original, intact nucleus is estimated to be about 660 billion pounds (300 billion kilograms), according to some ground-based observers who were measuring its gas output,” says Hal Weaver, an astronomer at the Johns Hopkins University in Baltimore, Md., who studied the comet with the Hubble telescope, the European Southern Observatory’s Very Large Telescope (VLT) in Chile, and other ground-based telescopes.

“However, the total mass in the largest fragments measured by the Hubble telescope and the VLT is only about 6.6 billion pounds (3 billion kilograms), and the dust tail has an even smaller mass of about 0.7 billion pounds (0.3 billion kilograms). In other words, the total mass measured following the breakup is about 100 times less than the estimated total mass prior to the breakup.” Weaver’s results will be published in a special May 18 issue of Science devoted to the transitory comet.
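
As a quick check of the mass budget Weaver describes, the short calculation below uses only the figures quoted in the article; the script itself is illustrative and not part of the original release.

```python
# All figures are taken from the quotes above (converted to kilograms).
m_nucleus_kg   = 300e9   # estimated mass of the intact nucleus
m_fragments_kg = 3e9     # total mass of the largest fragments seen by Hubble and the VLT
m_dust_kg      = 0.3e9   # mass of the dust tail

ratio = m_nucleus_kg / (m_fragments_kg + m_dust_kg)
print(f"observed debris is roughly 1/{ratio:.0f} of the pre-breakup mass")  # ~1/91, i.e. about 100 times less
```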

So where is the rest of the comet’s fractured nucleus? Perhaps, suggest Weaver and other investigators, most of the comet’s bulk after the breakup was contained in pieces between about 0.1 inches (2.5 millimeters) and 160 feet (50 meters) across.

These pebble-sized to house-sized fragments cannot be seen by visible-light telescopes because they do not have enough surface area to make them stand out in reflected sunlight.

Comets are leftover debris from the creation of the solar system 4.6 billion years ago. They’re made up of a combination of solid rock and frozen gases held together by gravity.

If the midsized cometary fragments exist, then the fundamental building blocks that comprised LINEAR’s nucleus may be somewhat smaller than what current “rubble pile” theories of the solar system’s formation suggest.

These theories generally favor football-field-sized fragments, like the ones observed by the VLT and the Hubble telescope. The analysis of LINEAR’s fragments indicates that the “rubble” comprising cometary nuclei may be somewhat smaller than previously thought.

Another puzzling question is why the comet broke apart between June and July of last year as it made its closest approach to the Sun.

“We still don’t know what triggered the comet’s demise,” Weaver says. “But we do know that carbon monoxide (CO) ice probably did not contribute to the breakup.”

Hubble’s Space Telescope Imaging Spectrograph detected low levels of this volatile material, about 50 times less than was observed in comets Hale-Bopp and Hyakutake. Carbon monoxide ice sublimates [changes directly from a solid to a vapor] vigorously, even at the cold temperatures in a comet’s interior. This activity could lead to a buildup of pressure within the core that might cause the nucleus to fragment.

“The scarcity of carbon monoxide in LINEAR’s nucleus is problematic for any theory that attempts to invoke it as the trigger for the comet’s demise,” Weaver says.

An armada of observatories, including the Hubble telescope, watched the dazzling end to the transitory comet. Hubble was the first observatory to witness LINEAR breaking apart, spying in early July a small piece of the nucleus flowing down the doomed comet’s tail. LINEAR completely disintegrated in late July as it made its closest approach to the Sun, at a cozy 71 million miles.

Again, the Hubble telescope tracked the comet, finding at least 16 fragments that resembled “mini-comets” with tails. Now LINEAR is little more than a trail of debris orbiting the Sun. The comet is believed to have wandered into the inner solar system from its home in the Oort Cloud, a reservoir of space debris on the outskirts of the solar system.

“We were witnessing a rare view of a comet falling to pieces,” Weaver says. “These observations are important because, by watching comet LINEAR unravel, we are essentially seeing its formation in reverse. The nucleus was put together 4.6 billion years ago when the Earth and other planets were forming, so by watching the breakup we are looking backwards in time and learning about conditions during the birth of the solar system.”

Weaver notes, however, that astronomers may have witnessed an “oddball” comet break apart.

“I’ve never seen anything like this,” he says. “I know of no other example of a comet falling to pieces like this. Comet Shoemaker-Levy 9 fell apart, but tidal forces from Jupiter caused that disintegration. LINEAR didn’t come close to any other large object. Comet Tabur (C/1996 Q1) also seemed to vanish without a trace, but it already was the fragment of another comet nucleus [C/1988 A1 (Liller)]. Some investigators concluded that Tabur did not even break up but rather, became ‘invisible’ only because the icy area on its surface was no longer in sunlight, and its activity shut down as a result.”

Comet LINEAR was named for the observatory that first spotted it, the Lincoln Near Earth Asteroid Research (LINEAR) program.

Release Date: 2:00PM (EDT) May 17, 2001

Release Number: STScI-2001-14

On the Net:

Hubble Space Telescope

NASA


Abused Kids More Likely to Turn to Crime

By JONATHAN D. SALANT

WASHINGTON (AP) — Children who are abused or neglected are far more likely to become criminals as adults, according to a study released Monday by an organization of police chiefs, prosecutors and crime victims.

The report by Fight Crime: Invest in Kids recommends more money for pre-kindergarten programs and parenting classes, saying the cost will be offset later when children who might have been burdens on society grow up to be upstanding citizens.

“Children who survive abuse and neglect can be significantly injured,” said one of the report’s authors, Dr. Randell Alexander, director of the Center for Child Abuse at the Morehouse School of Medicine in Atlanta. “Many go on to hurt others. If you are born into a world of violence, you wire yourself for violence, not for peace.”

Using various federal data and academic and advocacy group studies, researchers said child abuse and neglect is vastly underreported. The 900,000 cases reported annually by the Health and Human Services Department may be only one-third of the actual total, the report said.

The report cited a study published in 2000 by Dr. Cathy Spatz Widom, a professor of criminal justice and psychology at the State University of New York at Albany, that found individuals who had been abused or neglected as youngsters were 29 percent more likely to become violent criminals than other children.

Using that estimate, researchers said 36,000 of the 900,000 children cited in the HHS report will become violent criminals when they reach adulthood, including 250 who will become murderers.

The report’s authors include four local prosecutors and two sheriffs. They said the findings illustrate the need for more federal funds for pre-kindergarten programs and parenting classes for families considered high-risk for child abuse, primarily those on welfare or headed by high school dropouts.

The 1996 welfare overhaul bill earmarked $2.8 billion for the states under a social services block grant, but congressional Republicans cut funding to $1.7 billion in the current budget year.

David Landefeld, the Republican district attorney for Fairfield County, Ohio, said crime connected to child abuse costs Americans $50 billion a year – 50 times the amount of money cut from the social services block grant.

HHS officials said it was up to Congress to decide whether to provide the money.

In Elmira, N.Y., a parenting program for single, poor mothers reduced incidents of child abuse or neglect to one-fifth of what they had been. In Chicago, a combination of parenting classes and pre-kindergarten cut cases of abuse and neglect in half, according to the report.

“It is possible to prevent child abuse and neglect instead of waiting for the next horror story to occur,” said Brooklyn, N.Y., District Attorney Charles Hynes.

Brendina Tobias of Newport News, Va., is a social worker whose son was killed in New York in 1993 while walking to a restaurant to get food for his elderly grandmother. The murderers had been neglected as children and learned to take whatever they wanted to survive, Tobias said.

“Abuse and neglect can be prevented,” Tobias said. “Maybe my son would still be alive.”

—–

On the Net:

Fight Crime: Invest in Kids

Widom study


Copyright © 2003 The Associated Press. All rights reserved. The information contained in the AP News report may not be published, broadcast, rewritten or redistributed without the prior written authority of The Associated Press.

Eye Shingles No Laughing Matter

Source: HealthScoutNews

David Letterman knows his eye condition is no joke.

So, too, do many of his fans, now that the painful ailment called eye shingles has temporarily knocked the “Late Show” host off the air. That alone is telling, because Letterman is famous for almost never calling in sick.

Symptoms of eye shingles include a blistering rash, painful inflammation, fever and sluggishness. Or as Letterman, his right eye visibly puffy, described the condition on the show after symptoms began a few weeks ago: “I look like somebody gave me a beating. It’s either an irritation, inflammation or infection … For the love of God, does it hurt!”

It turns out Letterman, who’s not scheduled to return to work until next week at the earliest, can blame chickenpox for his troubles.

The virus causing chickenpox, varicella zoster, can remain dormant in the body for years, even decades, before reactivating in nerve cells and causing shingles. Stress, medications, illness or aging can trigger shingles, formally known as herpes zoster.

Shingles can lead to serious eye damage and even loss of vision, especially if not treated quickly and properly, says a new Mayo Clinic study. Treating eye shingles with antiviral drugs, Mayo researchers say, can significantly reduce chances of serious consequences that could require surgery or even cause legal blindness in the affected eye.

“Time is of the essence with diagnosis and treatment,” says study author Dr. Keith Baratz. “I tell you, if there’s any suggestion of shingles, you treat hard and treat early.”

The Mayo study, reported in the March issue of the Archives of Ophthalmology, tracked 323 cases of eye shingles in Minnesota between 1976 and 1998. Of these patients, 202 received oral antiviral medications, while 121 went untreated.

Within five years of getting eye shingles, almost 9 percent of those untreated suffered serious eye conditions, compared with about 2 percent of those who received antiviral drugs.
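
To put those percentages in concrete terms, the sketch below converts the reported group sizes and rates into approximate patient counts and a rough relative risk; the rounding and the calculation are illustrative and not taken from the Mayo report.

```python
# Figures quoted above: 202 treated and 121 untreated patients; ~2% vs ~9% developed
# serious eye conditions within five years.
n_treated, n_untreated = 202, 121
p_treated, p_untreated = 0.02, 0.09

events_treated   = round(p_treated * n_treated)      # roughly 4 treated patients
events_untreated = round(p_untreated * n_untreated)  # roughly 11 untreated patients

relative_risk = p_untreated / p_treated              # untreated risk is ~4.5x the treated risk
print(events_treated, events_untreated, f"relative risk ~ {relative_risk:.1f}")
```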

The Mayo researchers defined these serious conditions as vision loss and damage that requires surgery to repair: scarring of the eyelids, which can prevent them from closing properly, and trichiasis, in which the eyelids and eyelashes turn inward.

Among those who received the treatment, even one day made a big difference.

Researchers found those afflicted with glaucoma, eye inflammation, swelling of the cornea and scarring of the cornea, which can cause blindness, received treatment an average of 4.8 days after getting eye shingles. Those who had none of these conditions got treatment in an average of 3.8 days, the study says.

“These medications are doing more than just making the patient feel better,” says Baratz, a Mayo ophthalmologist. “They’re reducing the risk of something very serious happening down the road.”

And the risk doesn’t necessarily go away when the symptoms do.

“Inflammation effects in the eye could linger literally for decades, so patients who run into problems don’t know they’re going to run into problems when it happens,” Baratz says.

Eye shingles patients normally feel pretty sick, he adds. But for the first day or two, symptoms can be subtle ones such as a red, inflamed eye and minor rash, he says, so shingles can be tough to diagnose immediately.

Of the 121 untreated patients in his study, six had loss of vision, compared with one among the patients who received treatment. All vision loss occurred as a result of scarring of the cornea, the study found.

The study’s findings underscore the importance of early diagnosis and treatment, says Dr. Richard L. Abbott, a professor of ophthalmology at the University of California, San Francisco.

“Later, we can treat a lot of the problems, but certainly if you can minimize them or eliminate them by early treatment, that is extremely important,” Abbott says. “Patients who come down with shingles or symptoms of this need to seek treatment as soon as possible.”

Like the Mayo researchers, Abbott stressed that even after symptoms disappear, eye shingles can still cause serious conditions, including glaucoma and even loss of vision.

An estimated 600,000 to 1 million Americans are diagnosed with shingles annually, the National Institutes of Health reports. Of those who get the disease, about 20 percent will suffer eye shingles, which occurs only in one eye, according to Mayo.

As for Letterman, the prognosis looks good, according to a statement by the physician treating him, Dr. Louis J. Aronne of New York Presbyterian Hospital.

“Dave’s condition continues to improve, and his overall health is excellent,” Aronne said, “but a complete recovery will require some additional time.”

Aronne did not return a reporter’s phone calls. A spokesman for Letterman, Thomas M. Keaney, would not comment on the shingles or the treatment but did issue a statement saying the talk show host, who is still recovering, would not return to the show this week.

More information

For more on eye shingles, visit The Shingles Center at Massachusetts General Hospital. To read more about shingles and its treatment, check the Mayo Clinic.

—-


US Air Force plans nuclear drones

Unmanned aircraft have proven their worth in recent conflicts. But is a nuclear version a step too far?

NEW SCIENTIST — The US Air Force is examining the feasibility of a nuclear-powered version of an unmanned aircraft. The USAF hopes that such a vehicle will be able to “loiter” in the air for months without refuelling, striking at will when a target comes into its sights.

But the idea is bound to raise serious concerns about the wisdom of flying radioactive material in a combat aircraft. If shot down, for instance, would an anti-aircraft gunner in effect be detonating a dirty bomb? It raises political questions, too.

To have Unmanned Aerial Vehicles (UAVs) almost constantly flying over a region would amount to a new form of military intimidation, especially if they were armed, says Ian Bellamy, an arms control expert at Lancaster University in Britain.

But right now, there seems no stopping the proliferation of UAVs, fuelled by their runaway success in the Kosovo and Afghanistan conflicts. The big attraction of UAVs is that they don’t put pilots’ lives at risk, and they are now the norm for many reconnaissance and even attack missions.

The endurance of a future nuclear-powered UAV would offer military planners an option they might find hard to turn down. Last week, the Pentagon allocated $1 billion of its 2004 budget for further development of both armed and unarmed UAVs.

The US Air Force Research Laboratory (AFRL) has funded at least two feasibility studies on nuclear-powered versions of the Northrop-Grumman Global Hawk UAV. The latest study, revealed earlier this month at an aerospace technology conference in Albuquerque, New Mexico, concluded that a nuclear engine could extend the UAV’s flight time from hours to months.

But nuclear-powered planes are not a new idea. In the 1950s, both the US and the USSR tried to develop nuclear propulsion systems for piloted aircraft. The plans were eventually scrapped because it would have cost too much to protect the crew from the on-board nuclear reactor, as well as making the aircraft too heavy.

The AFRL now has other ideas, though. Instead of a conventional fission reactor, it is focusing on a type of power generator called a quantum nucleonic reactor. This obtains energy by using X-rays to encourage particles in the nuclei of radioactive hafnium-178 to jump down several energy levels, liberating energy in the form of gamma rays. A nuclear UAV would generate thrust by using the energy of these gamma rays to produce a jet of heated air.

The military interest was triggered by research published in 1999 by Carl Collins and colleagues at the University of Texas at Dallas. They found that by shining X-rays onto certain types of hafnium they could get it to release 60 times as much energy as they put in (New Scientist, 3 July 1999, p 42).

The reaction works because a proportion of the hafnium nuclei are “isomers” in which some neutrons and protons sit in higher energy levels than normal. X-ray bombardment makes them release this energy and drop down to a more stable energy level.

So the AFRL has since been looking at ways in which quantum nucleonics could be used for propulsion. “Our directorate is being cautious about it. Right now they want to understand the physics,” says Christopher Hamilton at the Wright Patterson Air Force Base in Ohio, who conducted the latest nuclear UAV study.

The AFRL says the quantum nucleonic reactor is considered safer than a fission one because the reaction is very tightly controlled. “It’s radioactive, but as soon as you take away the X-ray power source its gamma ray production is reduced dramatically, so it’s not as dangerous [as when it’s active],” says Hamilton.

Paul Stares, an analyst with the US Institute of Peace in Washington DC, wonders what would happen if a nuclear UAV crashed. But Hamilton insists that although hafnium has a half-life of 31 years (comparable, according to Britain’s National Radiological Protection Board, to that of the highly radioactive caesium-137), the structural composition of hafnium hinders the release of this radiation. “It’s probably something you would want to stay away from but it’s not going to kill you,” claims Hamilton.

New Scientist


Earthshine — A Marvel of Nature

By: Astrobiology News staff writer

NASA Astrobiology Institute — When the crescent moon is just a sliver each month, the phrase “old moon in the young moon’s arms” poetically describes a marvel of nature: the night side of the Moon glowing faintly with the largely blue light reflected from the Earth, known as earthshine.

As recently presented at the 199th national meeting of the American Astronomical Society in Washington, D.C., astronomers from the University of Arizona Steward Observatory and the Harvard-Smithsonian Center for Astrophysics have benchmarked earthshine. Their findings provide clues as to how best to recognize distant planets that may harbor elements needed for life.

Those elements–mainly, water and oxygen–show up distinctly when looking at “earthshine”, or Earth’s reflection from the moon. “As a result,” says Nick Woolf of Arizona, “it is possible to use the moon to integrate light from the Earth and to determine what the spectrum of the Earth would be like if it were seen from far away as a planet. We need this information to prepare to observe Earth-like planets around other stars.”

A simulated image, using the Steward Observatory 90-inch telescope at Kitt Peak, Ariz., shows how the moon “saw” the Earth at the time.

The Earth, as seen from a distant vantage point, has long captivated the imagination of planet finders. And in 1993, a team of researchers inspired by Carl Sagan used an Earth fly-by of the Galileo spacecraft, on its way to Jupiter, to catch a glimpse of how the Earth might appear from afar. For astrobiologists, Sagan’s results were surprising.

Pale Blue Dot?

Rather than revealing the Earth as an obvious candidate for life, the Galileo pictures gave surprisingly few clues to the biological potential of our own planet. What was learned, however, was to look close to home first before drawing conclusions about the biological potential of far-off candidate planets. How did Galileo miss the obvious signs of life we would have expected to see?

One answer may lie in the fact that the spacecraft made its observations while still quite close to the Earth. “The spectrograph was designed to look at small areas of Jupiter, so the field of view of the spectrograph was quite small,” says Woolf.

“Also, since the surface brightness of Jupiter is far less than the Earth, the spectrograph detectors saturated except when the spectrograph was pointed at the darkest area of Earth – a cloud-free section of sea.”

The cloud-free sea is very dark relative to the bright clouds that dominate a global picture of Earth. Thus it should come as no surprise that Galileo imaged only a relatively dark and seemingly lifeless planet, mainly because it was designed not to look at Earth but to probe Jupiter.

Indeed, getting a complete picture of Earth has become a priority for the recent Earthshine study. What are the universal signatures of life that could set apart a candidate planet as biologically active?

As it turns out, the planet’s color, or more precisely its light spectrum, gives the most interesting insights. Or, as Carl Sagan aptly described Earth-like planets, the scientific teams comb distant stars looking for a ‘pale blue dot’, a planet having the basic elements of water, oxygen and perhaps vegetation.

A Complex Matter

The Earth as seen by the Moon shows just such a signature. “The visible ozone band is a weak band, which is also rather hard to measure,” says Woolf. “The oxygen A band is the strong band which best indicates the large amount of oxygen that has been produced by photosynthesis. Any planet with an oxygen band like this would have so much oxygen on it that no reasonable alternative explanation would exist to it having been formed by living processes.”

Of course, even to the human eye, an Earth-like planet shows up as more than just a blue dot. Central to its astrobiological potential are the spectral features tied to photosynthesis and vegetation.

“The vegetation feature in the infrared is caused by the scattering of light at plant cell walls and organelles,” says Woolf. “This scattering is eliminated at wavelengths shortward of ~7200A by the absorption of chlorophyll A. So the effect of vegetation being present shows as an escarpment in the spectrum. There is a minor return of scattering in the mid green, where the dye absorption is reduced, and this is what produces the vegetation green color. If our eyes had a slightly greater red sensitivity, we would talk about vegetation as being red rather than green!”
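As a rough illustration of the “escarpment” Woolf describes, one could compare reflectance just shortward and longward of about 7200 Å in a spectrum. The sketch below uses entirely made-up reflectance values and a simple band comparison; it is not the team’s analysis, only a way to picture the vegetation red edge.

```python
import numpy as np

# Hypothetical reflectance spectrum on a 5000-9400 angstrom grid; the values
# are invented to mimic a chlorophyll "red edge": dark shortward of ~7200 A,
# with a sharp rise longward of it.
wavelengths = np.arange(5000, 9400, 100)
reflectance = np.where(wavelengths < 7200, 0.05, 0.30)
reflectance = reflectance + np.random.normal(0.0, 0.005, wavelengths.size)  # a little noise

# Mean reflectance just shortward and longward of the edge.
short_side = reflectance[(wavelengths >= 6600) & (wavelengths < 7100)].mean()
long_side = reflectance[(wavelengths >= 7300) & (wavelengths < 7800)].mean()

print(f"Reflectance jump across ~7200 A: {long_side / short_side:.1f}x")
```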

These results indicate that the Earth’s spectral pattern is complex, particularly in the short-wavelength (ultraviolet) and long-wavelength (infrared) ranges, where the dynamic effects of our own atmosphere on such lunar reflections must also be taken into account. “The Kitt Peak observations were made over the wavelength range 5000-9400A,” says Woolf, with the longer wavelengths in the red and near-infrared and the shorter in the blue-green.

“The main concerns,” says Woolf, “are both for correcting for regular moonlight scattered over the image, and correcting for the slightly different color that the moon has when light is reflected straight back and when it is reflected at a substantial angle. For the first of these we made observations of the sky near the dark limb of the moon, and observed earthshine just inside the limb. For the second we used observations made in 1973 which were not fully adequate for our needs, but were the best available measures.”
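In outline, the two corrections amount to subtracting the scattered moonlight measured on the sky just off the dark limb from the earthshine measured just inside it, and then dividing out the moon’s own phase-dependent reflectance color. The sketch below shows only that bookkeeping, with hypothetical numbers; it is not the team’s actual data reduction.

```python
import numpy as np

# All arrays are hypothetical spectra on the same wavelength grid,
# invented purely to illustrate the order of operations.
earthshine_inside_limb = np.array([0.110, 0.125, 0.150, 0.210, 0.230])  # measured just inside the dark limb
scattered_moonlight = np.array([0.050, 0.052, 0.055, 0.060, 0.062])     # sky measured just outside the dark limb
lunar_color_correction = np.array([1.00, 1.02, 1.05, 1.08, 1.10])       # phase-dependent lunar reflectance color

# Step 1: remove the regular moonlight scattered over the image.
earthshine = earthshine_inside_limb - scattered_moonlight

# Step 2: divide out the moon's own reflectance color, leaving an
# approximation of the Earth's reflection spectrum.
earth_spectrum = earthshine / lunar_color_correction

print(earth_spectrum)
```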

“Peaking” Their Interest

Despite these challenging corrections to what a ground-based telescope can record of Earth, the team has offered three tell-tale signatures for future planet finders to look for. The water and oxygen bands are strong and comparatively easy to detect, while the weaker sub-peaks for chlorophyll and ozone (a form of oxygen, O3), likely byproducts of vegetation and photosynthesis, have also piqued their interest for further study.

“There is an interesting question whether a peak near 7500A is a universal feature of complex photosynthetic life forms on land, or whether it is unique to Earth’s biochemistry. There are some reasons for suspecting that it is universal, and relates to the absorption spectrum of seawater, and processes when vegetation migrated onto land.”

Thus it came as a surprise to see such a sharp rise in the far-red portion of the Earth’s spectrum, because that region is particularly sensitive to land vegetation, while sea made up roughly 83% of the reflecting surface in the earthshine observations.

“An extraterrestrial observing Earth would have noticed that about 400 – 500 million years ago, vegetation took root on land,” says Woolf. “And since land vegetation requires different parts — roots and leaves, for example — it would indicate that life had taken hold strongly on our planet. Spectral features of oxygen and ozone would indicate photosynthesis by living organisms, further confirming the evidence. Any advanced intelligence that cared to inquire would know that life has been present on Earth for a very long time,” Woolf concluded.

What’s Next

For most planet finders, the real challenge is to identify faint planets in the glare of their much brighter parent stars. To overcome the additional blurring imposed by our own atmosphere, large ground-based telescopes and space missions will likely be combined in the future to complete the picture.

“The giant telescopes planned for land would be somewhat like the giant single dish radio telescopes,” says Woolf, “but working at optical wavelengths, and employing adaptive optics to make very sharp images. There are a number of organizations around the world currently investigating the ways of making these telescopes, but the number of telescopes is likely to be small.”

Both NASA and the European Space Agency (ESA) propose space missions to look for Earth-like planets in the infrared. NASA is developing the Terrestrial Planet Finder (TPF) project, part of the Jet Propulsion Laboratory Navigator Program, and ESA is developing its DARWIN project. The European Southern Observatory is exploring the possibility of ground-based searches using the future Overwhelmingly Large Telescope (OWL) project.

Nick Woolf of the University of Arizona Steward Observatory collaborated with Wes Traub of the Harvard-Smithsonian Center for Astrophysics, Paul Smith, also at Arizona, and Ken Jucks, a colleague working with Traub.

On the Net:

Galileo Mission

Darwin Mission

NASA

NASA Astrobiology Institute

University of Arizona

Harvard-Smithsonian Center for Astrophysics
