
Chapter 11: Presenting Your Research

Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report, an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title Page and Abstract

An APA-style research report begins with a title page. The title is centred in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioural Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the journal Psychological Science.

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract. The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The  introduction  begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening, which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003 [1]). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191).

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review, which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The  closing  of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

Method

The method section is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends, with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left-justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1. Three ways of organizing an APA-style method section. (A long description is provided at the end of this section.)

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

Results

The results section is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from a study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.
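
To make these preliminary computations concrete, here is a minimal Python sketch (not part of the original chapter) of how a researcher might derive primary variables and check internal consistency along the lines described above. The data, sample sizes, and variable names are hypothetical and chosen only for illustration.

```python
import numpy as np

# Hypothetical data: 5 participants each rated 20 stimulus people on a 1-7 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(5, 20))

# Primary variable: each participant's mean attractiveness rating.
mean_rating = ratings.mean(axis=1)

# Recall example: number and percentage of 20 list items correctly recalled.
recalled = np.array([12, 15, 9, 18, 11])
pct_recalled = 100 * recalled / 20

def cronbach_alpha(items):
    """Internal consistency for a participants-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print("Mean rating per participant:", mean_rating)
print("Percent recalled:", pct_recalled)
# Note: random ratings will give a low alpha; real multi-item measures
# would typically show much higher internal consistency.
print("Cronbach's alpha:", round(cronbach_alpha(ratings), 2))
```

In a real report, the values produced by steps like these (for example, the per-participant means and the obtained α) are what would be summarized in the preliminary part of the results section.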

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

Discussion

The discussion is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

References

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An  appendix  is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

""

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
Exercises

  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science). Read the opening of the first five articles and rate the effectiveness of each one from 1 (very ineffective) to 5 (very effective). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Long Descriptions

Figure 11.1 long description: Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

1. Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association.
2. Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377–383.

Glossary

  • Empirical research report: A type of research article that describes one or more new empirical studies conducted by the authors.
  • Title page: The page at the beginning of an APA-style research report containing the title of the article, the authors’ names, and their institutional affiliation.
  • Abstract: A summary of a research study.
  • Introduction: The third page of a manuscript, containing the research question, the literature review, and comments about how to answer the research question.
  • Opening: An introduction to the research question and an explanation of why this question is interesting.
  • Literature review: A description of relevant previous research on the topic being discussed and an argument for why the research question is worth addressing.
  • Closing: The end of the introduction, where the research question is reiterated and the method is commented upon.
  • Method section: The section of a research report where the method used to conduct the study is described.
  • Results section: The section of a research report where the main results of the study, including the results of the statistical analyses, are presented.
  • Discussion: The section of a research report that summarizes the study’s results and interprets them by referring back to the study’s theoretical background.
  • Appendix: A part of a research report that contains supplemental material.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

RMIT University Library - Learning Lab


Method/Methodology

The method section of a report details how the research was conducted, the research methods used and the reasons for choosing those methods. It should:

  • outline the participants and research methods used, e.g. surveys/questionnaires, interviews
  • refer to other relevant studies.

The methodology is a step-by-step explanation of the research process. It should be factual and is mainly written in the past tense.

Sample Methodology

The research used a quantitative methodology based on the approach advocated by Williams (2009). This study was conducted by questionnaire and investigated university teaching staff attitudes to the use of mobile phones in tutorials (see Appendix 1). The questionnaire used Likert scales to assess social attitudes (Jones 2007) to student mobile phone use and provided open-ended responses for additional comments. The survey was voluntary and anonymous. A total of 412 questionnaires were distributed online to randomly selected staff from each of the three colleges within the university. The completed questionnaires were returned by email.

  • 'Describe' marks text that describes how the research was done.
  • 'Refer' marks text that refers to relevant reading/literature.

[Describe: The research used a quantitative methodology based on the approach advocated by Williams (2009).] [Refer: This study was conducted by questionnaire and investigated university teaching staff attitudes to the use of mobile phones in tutorials (see Appendix 1). The questionnaire used Likert scales to assess social attitudes (Jones 2007) to student mobile phone use and provided open-ended responses for additional comments.] [Describe: The survey was voluntary and anonymous. A total of 412 questionnaires were distributed online to randomly selected staff from each of the three colleges within the university. The completed questionnaires were returned by email.]



Research Methodology Example

Detailed Walkthrough + Free Methodology Chapter Template

If you’re working on a dissertation or thesis and are looking for an example of a research methodology chapter, you’ve come to the right place.

In this video, we walk you through a research methodology from a dissertation that earned full distinction, step by step. We start off by discussing the core components of a research methodology by unpacking our free methodology chapter template. We then progress to the sample research methodology to show how these concepts are applied in an actual dissertation, thesis or research project.

If you’re currently working on your research methodology chapter, you may also find the following resources useful:

  • Research methodology 101: an introductory video discussing what a methodology is and the role it plays within a dissertation
  • Research design 101: an overview of the most common research designs for both qualitative and quantitative studies
  • Variables 101: an introductory video covering the different types of variables that exist within research
  • Sampling 101: an overview of the main sampling methods
  • Methodology tips: a video discussion covering various tips to help you write a high-quality methodology chapter
  • Private coaching: get hands-on help with your research methodology


PS – If you’re working on a dissertation, be sure to also check out our collection of dissertation and thesis examples here.

FAQ: Research Methodology Example

Is the sample research methodology real?

Yes. The chapter example is an extract from a Master’s-level dissertation for an MBA program. A few minor edits have been made to protect the privacy of the sponsoring organisation, but these have no material impact on the research methodology.

Can I replicate this methodology for my dissertation?

As we discuss in the video, every research methodology will be different, depending on the research aims, objectives and research questions. Therefore, you’ll need to tailor your research methodology to suit your specific context.

You can learn more about the basics of writing a research methodology chapter here.

Where can I find more examples of research methodologies?

The best place to find more examples of methodology chapters would be within dissertation/thesis databases. These databases include dissertations, theses and research projects that have successfully passed the assessment criteria for the respective university, meaning that you have at least some sort of quality assurance.

The Open Access Thesis Database (OATD) is a good starting point.

How do I get the research methodology chapter template?

You can access our free methodology chapter template here.

Is the methodology template really free?

Yes. There is no cost for the template and you are free to use it as you wish.


Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

Writing the Experimental Report: Methods, Results, and Discussion


Method section

Your method section provides a detailed overview of how you conducted your research. Because your study methods form a large part of your credibility as a researcher and writer, it is imperative that you be clear about what you did to gather information from participants in your study.

With your methods section, as with the sections above, you want to walk your readers through your study almost as if they were a participant. What happened first? What happened next?

The method section includes the following sub-sections.

I. Participants: Discuss who was enrolled in your experiment. Include major demographics that have an impact on the results of the experiment (e.g., if race is a factor, you should provide a breakdown by race). The accepted term for describing a person who participates in research studies is a participant, not a subject.

II. Apparatus and materials: The apparatus is any equipment used during data collection (such as computers or eye-tracking devices). Materials include scripts, surveys, or software used for data collection (not data analysis). It is sometimes necessary to provide specific examples of materials or prompts, depending on the nature of your study.

III. Procedure: The procedure includes the step-by-step how of your experiment. The procedure should include:

  • A description of the experimental design and how participants were assigned to conditions.
  • Identification of your independent variable(s) (IV), dependent variable(s) (DV), and control variables. Give your variables clear, meaningful names so that your readers are not confused.
  • Important instructions to participants.
  • A step-by-step listing in chronological order of what participants did during the experiment.

Results section

The results section is where you present the results of your research, both narrated for the readers in plain English and accompanied by statistics.

Note : Depending on the requirements or the projected length of your paper, sometimes the results are combined with the discussion section.

Organizing Results

Continue with your story in the results section. How do your results fit with the overall story you are telling? What results are the most compelling? You want to begin your discussion by reminding your readers once again what your hypotheses were and what your overall story is. Then provide each result as it relates to that story. The most important results should go first.

Preliminary discussion: Sometimes it is necessary to provide a preliminary discussion in your results section about your participant groups. In order to convince your readers that your results are meaningful, you must first demonstrate that the conditions of the study were met. For example, if you randomly assigned subjects into groups, are these two groups comparable? You can't discuss the differences in the two groups until you establish that the two groups can be compared.
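
As a hedged illustration of this kind of preliminary check (this example is not from the OWL page), the sketch below compares a background variable, age, across two randomly assigned groups using an independent-samples t-test from SciPy. The scores and group labels are invented for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical ages for participants randomly assigned to two conditions.
group_a_age = np.array([19, 22, 21, 20, 23, 19, 24, 22])
group_b_age = np.array([20, 21, 22, 19, 25, 20, 23, 21])

# A non-significant difference on a background variable is one piece of
# evidence that the randomly assigned groups are comparable at baseline.
t_stat, p_value = stats.ttest_ind(group_a_age, group_b_age)
print(f"Baseline age check: t = {t_stat:.2f}, p = {p_value:.3f}")
```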

Provide information on your data analysis: Be sure to describe the analysis you did. If you are using a non-conventional analysis, you also need to provide justification for why you are doing so.

Presenting Results : Bem (2006) recommends the following pattern for presenting findings:

  • Remind readers of the conceptual hypotheses or questions you are asking
  • Remind readers of behaviors measured or operations performed
  • Provide the answer/result in plain English
  • Provide the statistic that supports your plain English answer
  • Elaborate or qualify the overall conclusion if necessary

Writers new to psychology and writing with statistics often dump numbers at their readers without providing a clear narration of what those numbers mean. Please see our Writing with Statistics handout for more information on how to write with statistics.
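
To show how a statistic can support, rather than replace, a plain-English statement (again, a hypothetical sketch that is not part of the original handout), the following Python snippet computes an independent-samples t-test and embeds it in a narrated sentence along the lines of Bem's pattern. The condition names and data are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical word-recall scores for two conditions (names are invented).
quiet = np.array([14, 16, 15, 13, 17, 15, 16, 14])
noisy = np.array([11, 12, 13, 10, 12, 14, 11, 13])

t_stat, p_value = stats.ttest_ind(quiet, noisy)
df = len(quiet) + len(noisy) - 2  # degrees of freedom for the pooled test

# Plain-English answer first, then the statistic that supports it.
print(f"Participants recalled more words in the quiet condition "
      f"(M = {quiet.mean():.1f}) than in the noisy condition "
      f"(M = {noisy.mean():.1f}), t({df}) = {t_stat:.2f}, p = {p_value:.3f}.")
```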

Discussion section

Your discussion section is where you talk about what your results mean and where you wrap up the overall story you are telling. This is where you interpret your findings, evaluate your hypotheses or research questions, discuss unexpected results, and tie your findings to the previous literature (discussed first in your literature review). Your discussion section should move from specific to general.

Here are some tips for writing your discussion section.

  • Begin by providing an interpretation of your results: what is it that you have learned from your research?
  • Discuss each hypothesis or research question in more depth.
  • Do not repeat what you have already said in your results—instead, focus on adding new information and broadening the perspective of your results for your reader.
  • Discuss how your results compare to previous findings in the literature. If there are differences, discuss why you think these differences exist and what they could mean.
  • Briefly consider your study's limitations, but do not dwell on its flaws.
  • Consider also what new questions your study raises, what questions your study was not able to answer, and what avenues future research could take in this area.


References section

References should be in standard APA format. Please see our APA Formatting guide for specific instructions.

The Writing Center • University of North Carolina at Chapel Hill

Scientific Reports

What this handout is about

This handout provides a general guide to writing reports about scientific research you’ve performed. In addition to describing the conventional rules about the format and content of a lab report, we’ll also attempt to convey why these rules exist, so you’ll get a clearer, more dependable idea of how to approach this writing situation. Readers of this handout may also find our handout on writing in the sciences useful.

Background and pre-writing

Why do we write research reports?

You did an experiment or study for your science class, and now you have to write it up for your teacher to review. You feel that you understood the background sufficiently, designed and completed the study effectively, obtained useful data, and can use those data to draw conclusions about a scientific process or principle. But how exactly do you write all that? What is your teacher expecting to see?

To take some of the guesswork out of answering these questions, try to think beyond the classroom setting. In fact, you and your teacher are both part of a scientific community, and the people who participate in this community tend to share the same values. As long as you understand and respect these values, your writing will likely meet the expectations of your audience—including your teacher.

So why are you writing this research report? The practical answer is “Because the teacher assigned it,” but that’s classroom thinking. Generally speaking, people investigating some scientific hypothesis have a responsibility to the rest of the scientific world to report their findings, particularly if these findings add to or contradict previous ideas. The people reading such reports have two primary goals:

  • They want to gather the information presented.
  • They want to know that the findings are legitimate.

Your job as a writer, then, is to fulfill these two goals.

How do I do that?

Good question. Here is the basic format scientists have designed for research reports:

  • Introduction
  • Methods and Materials
  • Results
  • Discussion

This format, sometimes called “IMRAD,” may take slightly different shapes depending on the discipline or audience; some ask you to include an abstract or separate section for the hypothesis, or call the Discussion section “Conclusions,” or change the order of the sections (some professional and academic journals require the Methods section to appear last). Overall, however, the IMRAD format was devised to represent a textual version of the scientific method.

The scientific method, you’ll probably recall, involves developing a hypothesis, testing it, and deciding whether your findings support the hypothesis. In essence, the format for a research report in the sciences mirrors the scientific method but fleshes out the process a little. Below, you’ll find a table that shows how each written section fits into the scientific method and what additional information it offers the reader.

  • Introduction: states your hypothesis; explains how you derived that hypothesis and how it connects to previous research; gives the purpose of the experiment/study.
  • Methods: details how you tested your hypothesis; clarifies why you performed your study in that particular way.
  • Results: provides raw (i.e., uninterpreted) data collected; (perhaps) expresses the data in table form, as an easy-to-read figure, or as percentages/ratios.
  • Discussion: considers whether the data you obtained support the hypothesis; explores the implications of your finding and judges the potential limitations of your experimental design.

Thinking of your research report as based on the scientific method, but elaborated in the ways described above, may help you to meet your audience’s expectations successfully. We’re going to proceed by explicitly connecting each section of the lab report to the scientific method, then explaining why and how you need to elaborate that section.

Although this handout takes each section in the order in which it should be presented in the final report, you may for practical reasons decide to compose sections in another order. For example, many writers find that composing their Methods and Results before the other sections helps to clarify their idea of the experiment or study as a whole. You might consider using each assignment to practice different approaches to drafting the report, to find the order that works best for you.

What should I do before drafting the lab report?

The best way to prepare to write the lab report is to make sure that you fully understand everything you need to about the experiment. Obviously, if you don’t quite know what went on during the lab, you’re going to find it difficult to explain the lab satisfactorily to someone else. To make sure you know enough to write the report, start by answering the following questions and then work through the steps listed after them:

  • What are we going to do in this lab? (That is, what’s the procedure?)
  • Why are we going to do it that way?
  • What are we hoping to learn from this experiment?
  • Why would we benefit from this knowledge?
  • Consult your lab supervisor as you perform the lab. If you don’t know how to answer one of the questions above, for example, your lab supervisor will probably be able to explain it to you (or, at least, help you figure it out).
  • Plan the steps of the experiment carefully with your lab partners. The less you rush, the more likely it is that you’ll perform the experiment correctly and record your findings accurately. Also, take some time to think about the best way to organize the data before you have to start putting numbers down. If you can design a table to account for the data, that will tend to work much better than jotting results down hurriedly on a scrap piece of paper.
  • Record the data carefully so you get them right. You won’t be able to trust your conclusions if you have the wrong data, and your readers will know you messed up if the other three people in your group have “97 degrees” and you have “87.”
  • Consult with your lab partners about everything you do. Lab groups often make one of two mistakes: two people do all the work while two have a nice chat, or everybody works together until the group finishes gathering the raw data, then scrams outta there. Collaborate with your partners, even when the experiment is “over.” What trends did you observe? Was the hypothesis supported? Did you all get the same results? What kind of figure should you use to represent your findings? The whole group can work together to answer these questions.
  • Consider your audience. You may believe that audience is a non-issue: it’s your lab TA, right? Well, yes—but again, think beyond the classroom. If you write with only your lab instructor in mind, you may omit material that is crucial to a complete understanding of your experiment, because you assume the instructor knows all that stuff already. As a result, you may receive a lower grade, since your TA won’t be sure that you understand all the principles at work. Try to write towards a student in the same course but a different lab section. That student will have a fair degree of scientific expertise but won’t know much about your experiment particularly. Alternatively, you could envision yourself five years from now, after the reading and lectures for this course have faded a bit. What would you remember, and what would you need explained more clearly (as a refresher)?

Once you’ve completed these steps as you perform the experiment, you’ll be in a good position to draft an effective lab report.

Introductions

How do I write a strong introduction?

For the purposes of this handout, we’ll consider the Introduction to contain four basic elements: the purpose, the scientific literature relevant to the subject, the hypothesis, and the reasons you believed your hypothesis viable. Let’s start by going through each element of the Introduction to clarify what it covers and why it’s important. Then we can formulate a logical organizational strategy for the section.

The inclusion of the purpose (sometimes called the objective) of the experiment often confuses writers. The biggest misconception is that the purpose is the same as the hypothesis. Not quite. We’ll get to hypotheses in a minute, but basically they provide some indication of what you expect the experiment to show. The purpose is broader, and deals more with what you expect to gain through the experiment. In a professional setting, the hypothesis might have something to do with how cells react to a certain kind of genetic manipulation, but the purpose of the experiment is to learn more about potential cancer treatments. Undergraduate reports don’t often have this wide-ranging a goal, but you should still try to maintain the distinction between your hypothesis and your purpose. In a solubility experiment, for example, your hypothesis might talk about the relationship between temperature and the rate of solubility, but the purpose is probably to learn more about some specific scientific principle underlying the process of solubility.

For starters, most people say that you should write out your working hypothesis before you perform the experiment or study. Many beginning science students neglect to do so and find themselves struggling to remember precisely which variables were involved in the process or in what way the researchers felt that they were related. Write your hypothesis down as you develop it—you’ll be glad you did.

As for the form a hypothesis should take, it’s best not to be too fancy or complicated; an inventive style isn’t nearly so important as clarity here. There’s nothing wrong with beginning your hypothesis with the phrase, “It was hypothesized that . . .” Be as specific as you can about the relationship between the different objects of your study. In other words, explain that when term A changes, term B changes in this particular way. Readers of scientific writing are rarely content with the idea that a relationship between two terms exists—they want to know what that relationship entails.

Not a hypothesis:

“It was hypothesized that there is a significant relationship between the temperature of a solvent and the rate at which a solute dissolves.”

Hypothesis:

“It was hypothesized that as the temperature of a solvent increases, the rate at which a solute will dissolve in that solvent increases.”

Put more technically, most hypotheses contain both an independent and a dependent variable. The independent variable is what you manipulate to test the reaction; the dependent variable is what changes as a result of your manipulation. In the example above, the independent variable is the temperature of the solvent, and the dependent variable is the rate of solubility. Be sure that your hypothesis includes both variables.

Justify your hypothesis

You need to do more than tell your readers what your hypothesis is; you also need to assure them that this hypothesis was reasonable, given the circumstances. In other words, use the Introduction to explain that you didn’t just pluck your hypothesis out of thin air. (If you did pluck it out of thin air, your problems with your report will probably extend beyond using the appropriate format.) If you posit that a particular relationship exists between the independent and the dependent variable, what led you to believe your “guess” might be supported by evidence?

Scientists often refer to this type of justification as “motivating” the hypothesis, in the sense that something propelled them to make that prediction. Often, motivation includes what we already know—or rather, what scientists generally accept as true (see “Background/previous research” below). But you can also motivate your hypothesis by relying on logic or on your own observations. If you’re trying to decide which solutes will dissolve more rapidly in a solvent at increased temperatures, you might remember that some solids are meant to dissolve in hot water (e.g., bouillon cubes) and some are used for a function precisely because they withstand higher temperatures (they make saucepans out of something). Or you can think about whether you’ve noticed sugar dissolving more rapidly in your glass of iced tea or in your cup of coffee. Even such basic, outside-the-lab observations can help you justify your hypothesis as reasonable.

Background/previous research

This part of the Introduction demonstrates to the reader your awareness of how you’re building on other scientists’ work. If you think of the scientific community as engaging in a series of conversations about various topics, then you’ll recognize that the relevant background material will alert the reader to which conversation you want to enter.

Generally speaking, authors writing journal articles use the background for slightly different purposes than do students completing assignments. Because readers of academic journals tend to be professionals in the field, authors explain the background in order to permit readers to evaluate the study’s pertinence for their own work. You, on the other hand, write toward a much narrower audience—your peers in the course or your lab instructor—and so you must demonstrate that you understand the context for the (presumably assigned) experiment or study you’ve completed. For example, if your professor has been talking about polarity during lectures, and you’re doing a solubility experiment, you might try to connect the polarity of a solid to its relative solubility in certain solvents. In any event, both professional researchers and undergraduates need to connect the background material overtly to their own work.

Organization of this section

Most of the time, writers begin by stating the purpose or objectives of their own work, which establishes for the reader’s benefit the “nature and scope of the problem investigated” (Day 1994). Once you have expressed your purpose, you should then find it easier to move from the general purpose, to relevant material on the subject, to your hypothesis. In abbreviated form, an Introduction section might look like this:

“The purpose of the experiment was to test conventional ideas about solubility in the laboratory [purpose] . . . According to Whitecoat and Labrat (1999), at higher temperatures the molecules of solvents move more quickly . . . We know from the class lecture that molecules moving at higher rates of speed collide with one another more often and thus break down more easily [background material/motivation] . . . Thus, it was hypothesized that as the temperature of a solvent increases, the rate at which a solute will dissolve in that solvent increases [hypothesis].”

Again—these are guidelines, not commandments. Some writers and readers prefer different structures for the Introduction. The one above merely illustrates a common approach to organizing material.

How do I write a strong Materials and Methods section?

As with any piece of writing, your Methods section will succeed only if it fulfills its readers’ expectations, so you need to be clear in your own mind about the purpose of this section. Let’s review the purpose as we described it above: in this section, you want to describe in detail how you tested the hypothesis you developed and also to clarify the rationale for your procedure. In science, it’s not sufficient merely to design and carry out an experiment. Ultimately, others must be able to verify your findings, so your experiment must be reproducible, to the extent that other researchers can follow the same procedure and obtain the same (or similar) results.

Here’s a real-world example of the importance of reproducibility. In 1989, electrochemists Stanley Pons and Martin Fleischmann announced that they had discovered “cold fusion,” a way of producing excess heat and power without the nuclear radiation that accompanies “hot fusion.” Such a discovery could have had great ramifications for the industrial production of energy, so these findings created a great deal of interest. When other scientists tried to duplicate the experiment, however, they didn’t achieve the same results, and as a result many wrote off the conclusions as unjustified (or worse, a hoax). To this day, cold fusion has not been accepted by the mainstream scientific community. So when you write your Methods section, keep in mind that you need to describe your experiment well enough to allow others to replicate it exactly.

With these goals in mind, let’s consider how to write an effective Methods section in terms of content, structure, and style.

Sometimes the hardest thing about writing this section isn’t what you should talk about, but what you shouldn’t talk about. Writers often want to include the results of their experiment, because they measured and recorded the results during the course of the experiment. But such data should be reserved for the Results section. In the Methods section, you can write that you recorded the results, or how you recorded the results (e.g., in a table), but you shouldn’t write what the results were—not yet. Here, you’re merely stating exactly how you went about testing your hypothesis. As you draft your Methods section, ask yourself the following questions:

  • How much detail? Be precise in providing details, but stay relevant. Ask yourself, “Would it make any difference if this piece were a different size or made from a different material?” If not, you probably don’t need to get too specific. If so, you should give as many details as necessary to prevent this experiment from going awry if someone else tries to carry it out. Probably the most crucial detail is measurement; you should always quantify anything you can, such as time elapsed, temperature, mass, volume, etc.
  • Rationale: Be sure that as you’re relating your actions during the experiment, you explain your rationale for the protocol you developed. If you capped a test tube immediately after adding a solute to a solvent, why did you do that? (That’s really two questions: why did you cap it, and why did you cap it immediately?) In a professional setting, writers provide their rationale as a way to explain their thinking to potential critics. On one hand, of course, that’s your motivation for talking about protocol, too. On the other hand, since in practical terms you’re also writing to your teacher (who’s seeking to evaluate how well you comprehend the principles of the experiment), explaining the rationale indicates that you understand the reasons for conducting the experiment in that way, and that you’re not just following orders. Critical thinking is crucial—robots don’t make good scientists.
  • Control: Most experiments will include a control, which is a means of comparing experimental results. (Sometimes you’ll need to have more than one control, depending on the number of hypotheses you want to test.) The control is exactly the same as the other items you’re testing, except that you don’t manipulate the independent variable, the condition you’re altering to check its effect on the dependent variable. For example, if you’re testing solubility rates at increased temperatures, your control would be a solution that you didn’t heat at all; that way, you’ll see how quickly the solute dissolves “naturally” (i.e., without manipulation), and you’ll have a point of reference against which to compare the solutions you did heat.

Describe the control in the Methods section. Two things are especially important in writing about the control: identify the control as a control, and explain what you’re controlling for. Here is an example:

“As a control for the temperature change, we placed the same amount of solute in the same amount of solvent, and let the solution stand for five minutes without heating it.”

Structure and style

Organization is especially important in the Methods section of a lab report because readers must understand your experimental procedure completely. Many writers are surprised by the difficulty of conveying what they did during the experiment, since after all they’re only reporting an event, but it’s often tricky to present this information in a coherent way. There’s a fairly standard structure you can use to guide you, and following the conventions for style can help clarify your points.

  • Subsections: Occasionally, researchers use subsections to report their procedure when the following circumstances apply: 1) if they’ve used a great many materials; 2) if the procedure is unusually complicated; 3) if they’ve developed a procedure that won’t be familiar to many of their readers. Because these conditions rarely apply to the experiments you’ll perform in class, most undergraduate lab reports won’t require you to use subsections. In fact, many guides to writing lab reports suggest that you try to limit your Methods section to a single paragraph.
  • Narrative structure: Think of this section as telling a story about a group of people and the experiment they performed. Describe what you did in the order in which you did it. You may have heard the old joke centered on the line, “Disconnect the red wire, but only after disconnecting the green wire,” where the person reading the directions blows everything to kingdom come because the directions weren’t in order. We’re used to reading about events chronologically, and so your readers will generally understand what you did if you present that information in the same way. Also, since the Methods section does generally appear as a narrative (story), you want to avoid the “recipe” approach: “First, take a clean, dry 100 ml test tube from the rack. Next, add 50 ml of distilled water.” You should be reporting what did happen, not telling the reader how to perform the experiment: “50 ml of distilled water was poured into a clean, dry 100 ml test tube.” Hint: most of the time, the recipe approach comes from copying down the steps of the procedure from your lab manual, so you may want to draft the Methods section initially without consulting your manual. Later, of course, you can go back and fill in any part of the procedure you inadvertently overlooked.
  • Past tense: Remember that you’re describing what happened, so you should use past tense to refer to everything you did during the experiment. Writers are often tempted to use the imperative (“Add 5 g of the solid to the solution”) because that’s how their lab manuals are worded; less frequently, they use present tense (“5 g of the solid are added to the solution”). Instead, remember that you’re talking about an event which happened at a particular time in the past, and which has already ended by the time you start writing, so simple past tense will be appropriate in this section (“5 g of the solid were added to the solution” or “We added 5 g of the solid to the solution”).
  • Active vs. passive voice: Closely related to tense is the choice between the active and the passive voice. Compare the following two sentences:
  • Active: We heated the solution to 80°C. (The subject, “we,” performs the action, heating.)
  • Passive: The solution was heated to 80°C. (The subject, “solution,” doesn’t do the heating; it is acted upon, not acting.)

Increasingly, especially in the social sciences, using first person and active voice is acceptable in scientific reports. Most readers find that this style of writing conveys information more clearly and concisely. This rhetorical choice thus brings two scientific values into conflict: objectivity versus clarity. Since the scientific community hasn’t reached a consensus about which style it prefers, you may want to ask your lab instructor.

How do I write a strong Results section?

Here’s a paradox for you. The Results section is often both the shortest (yay!) and most important (uh-oh!) part of your report. Your Materials and Methods section shows how you obtained the results, and your Discussion section explores the significance of the results, so clearly the Results section forms the backbone of the lab report. This section provides the most critical information about your experiment: the data that allow you to discuss how your hypothesis was or wasn’t supported. But it doesn’t provide anything else, which explains why this section is generally shorter than the others.

Before you write this section, look at all the data you collected to figure out what relates significantly to your hypothesis. You’ll want to highlight this material in your Results section. Resist the urge to include every bit of data you collected, since perhaps not all are relevant. Also, don’t try to draw conclusions about the results—save them for the Discussion section. In this section, you’re reporting facts. Nothing your readers can dispute should appear in the Results section.

Most Results sections feature three distinct parts: text, tables, and figures. Let’s consider each part one at a time.

Text

This should be a short paragraph, generally just a few lines, that describes the results you obtained from your experiment. In a relatively simple experiment, one that doesn’t produce a lot of data for you to report, the text can represent the entire Results section. Don’t feel that you need to include lots of extraneous detail to compensate for a short (but effective) text; your readers appreciate discrimination more than your ability to recite facts. In a more complex experiment, you may want to use tables and/or figures to help guide your readers toward the most important information you gathered. In that event, you’ll need to refer to each table or figure directly, where appropriate:

“Table 1 lists the rates of solubility for each substance.”

“Solubility increased as the temperature of the solution increased (see Figure 1).”

If you do use tables or figures, make sure that you don’t present the same material in both the text and the tables/figures, since in essence you’ll just repeat yourself, probably annoying your readers with the redundancy of your statements.

Feel free to describe trends that emerge as you examine the data. Although identifying trends requires some judgment on your part and so may not feel like factual reporting, no one can deny that these trends do exist, and so they properly belong in the Results section. Example:

“Heating the solution increased the rate of solubility of polar solids by 45% but had no effect on the rate of solubility in solutions containing non-polar solids.”

This point isn’t debatable—you’re just pointing out what the data show.

As in the Materials and Methods section, you want to refer to your data in the past tense, because the events you recorded have already occurred and have finished occurring. In the example above, note the use of “increased” and “had,” rather than “increases” and “has.” (You don’t know from your experiment that heating always increases the solubility of polar solids, but it did that time.)

Tables

You shouldn’t put information in the table that also appears in the text. You also shouldn’t use a table to present irrelevant data, just to show you did collect these data during the experiment. Tables are good for some purposes and situations, but not others, so whether and how you’ll use tables depends upon what you need them to accomplish.

Tables are useful ways to show variation in data, but not to present a great deal of unchanging measurements. If you’re dealing with a scientific phenomenon that occurs only within a certain range of temperatures, for example, you don’t need to use a table to show that the phenomenon didn’t occur at any of the other temperatures. How useful is this table?

[Table: “Effect of Temperature on Rate of Solubility,” listing solvent temperatures in 10-degree increments from −20°C to 80°C, with no corresponding rate-of-solubility values until 50°C.]

As you can probably see, no solubility was observed until the trial temperature reached 50°C, a fact that the text part of the Results section could easily convey. The table could then be limited to what happened at 50°C and higher, thus better illustrating the differences in solubility rates when solubility did occur.

As a rule, try not to use a table to describe any experimental event you can cover in one sentence of text. Here’s an example of an unnecessary table from How to Write and Publish a Scientific Paper, by Robert A. Day:

[Table: “Oxygen requirements of various species of Streptomyces,” listing each organism with a plus or minus sign for growth under aerobic conditions and growth under anaerobic conditions.]

As Day notes, all the information in this table can be summarized in one sentence: “S. griseus, S. coelicolor, S. everycolor, and S. rainbowenski grew under aerobic conditions, whereas S. nocolor and S. greenicus required anaerobic conditions.” Most readers won’t find the table clearer than that one sentence.

When you do have reason to tabulate material, pay attention to the clarity and readability of the format you use. Here are a few tips (a short sketch after this list pulls several of them together):

  • Number your table. Then, when you refer to the table in the text, use that number to tell your readers which table they can review to clarify the material.
  • Give your table a title. This title should be descriptive enough to communicate the contents of the table, but not so long that it becomes difficult to follow. The titles in the sample tables above are acceptable.
  • Arrange your table so that readers read vertically, not horizontally. For the most part, this rule means that you should construct your table so that like elements read down, not across. Think about what you want your readers to compare, and put that information in the column (up and down) rather than in the row (across). Usually, the point of comparison will be the numerical data you collect, so especially make sure you have columns of numbers, not rows. Here’s an example of how drastically this decision affects the readability of your table (from A Short Guide to Writing about Chemistry, by Herbert Beall and John Trimbur). Look at this table, which presents the relevant data in horizontal rows:

[Table: “Boyle’s Law Experiment: Measuring Volume as a Function of Pressure,” with trial number, length of air sample (mm), and height difference (in. Hg) arranged in horizontal rows.]

It’s a little tough to see the trends that the author presumably wants to present in this table. Compare this table, in which the data appear vertically:

[Table: the same Boyle’s Law data rearranged so that trial number, length of air sample (mm), and height difference (in. Hg) run down vertical columns.]

The second table shows how putting like elements in a vertical column makes for easier reading. In this case, the like elements are the measurements of length and height, over five trials–not, as in the first table, the length and height measurements for each trial.

  • Make sure to include units of measurement in the tables. Readers might be able to guess that you measured something in millimeters, but don’t make them try.
  • Don’t use vertical lines as part of the format for your table. This convention exists because vertical lines make tables more expensive for journals to reproduce in print, so editors prefer to avoid them. Even though it’s fairly unlikely that you’ll be sending your Biology 11 lab report to Science for publication, your readers still have this expectation. Consequently, if you use the table-drawing option in your word-processing software, choose the option that doesn’t rely on a “grid” format (which includes vertical lines).
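To pull several of these tips together, here is a minimal sketch in Python (one convenient option among many; a spreadsheet or word processor serves the same purpose) that prints a small results table with a number and title, like elements arranged in columns, units in the column headings, and no vertical lines. The column headings echo the Boyle’s Law example above; the numerical values are invented purely to illustrate the layout and are not real measurements.

# Hypothetical values, for layout illustration only: one tuple per trial,
# giving (trial number, length of air sample in mm, height difference in in. Hg).
rows = [
    (1, 48.2, 2.1),
    (2, 44.6, 4.9),
    (3, 41.3, 7.8),
]

# A numbered, descriptive title, followed by column headings that include units.
print("Table 1. Boyle's Law Experiment: Measuring Volume as a Function of Pressure")
print(f"{'Trial':>5}  {'Length of air sample (mm)':>27}  {'Height difference (in. Hg)':>27}")
for trial, length_mm, height_in_hg in rows:
    print(f"{trial:>5}  {length_mm:>27.1f}  {height_in_hg:>27.1f}")

Reading down each column, the like elements (all the length measurements, then all the height measurements) line up for easy comparison, which is exactly the vertical arrangement recommended above.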

How do I include figures in my report?

Although tables can be useful ways of showing trends in the results you obtained, figures (i.e., illustrations) can do an even better job of emphasizing such trends. Lab report writers often use graphic representations of the data they collected to provide their readers with a literal picture of how the experiment went.

When should you use a figure?

Remember the circumstances under which you don’t need a table: when you don’t have a great deal of data or when the data you have don’t vary a lot. Under the same conditions, you would probably forgo the figure as well, since the figure would be unlikely to provide your readers with an additional perspective. Scientists really don’t like their time wasted, so they tend not to respond favorably to redundancy.

If you’re trying to decide between using a table and creating a figure to present your material, consider the following a rule of thumb. The strength of a table lies in its ability to supply large amounts of exact data, whereas the strength of a figure is its dramatic illustration of important trends within the experiment. If you feel that your readers won’t get the full impact of the results you obtained just by looking at the numbers, then a figure might be appropriate.

Of course, an undergraduate class may expect you to create a figure for your lab experiment, if only to make sure that you can do so effectively. If this is the case, then don’t worry about whether to use figures or not—concentrate instead on how best to accomplish your task.

Figures can include maps, photographs, pen-and-ink drawings, flow charts, bar graphs, and section graphs (“pie charts”). But the most common figure by far, especially for undergraduates, is the line graph, so we’ll focus on that type in this handout.

At the undergraduate level, you can often draw and label your graphs by hand, provided that the result is clear, legible, and drawn to scale. Computer technology has, however, made creating line graphs a lot easier. Most word-processing software has a number of functions for transferring data into graph form; many scientists have found Microsoft Excel, for example, a helpful tool in graphing results. If you plan on pursuing a career in the sciences, it may be well worth your while to learn to use a similar program.

Computers can’t, however, decide for you how your graph really works; you have to know how to design your graph to meet your readers’ expectations. Here are some of these expectations (a short plotting sketch after this list illustrates several of them):

  • Keep it as simple as possible. You may be tempted to signal the complexity of the information you gathered by trying to design a graph that accounts for that complexity. But remember the purpose of your graph: to dramatize your results in a manner that’s easy to see and grasp. Try not to make the reader stare at the graph for a half hour to find the important line among the mass of other lines. For maximum effectiveness, limit yourself to three to five lines per graph; if you have more data to demonstrate, use a set of graphs to account for it, rather than trying to cram it all into a single figure.
  • Plot the independent variable on the horizontal (x) axis and the dependent variable on the vertical (y) axis. Remember that the independent variable is the condition that you manipulated during the experiment and the dependent variable is the condition that you measured to see if it changed along with the independent variable. Placing the variables along their respective axes is mostly just a convention, but since your readers are accustomed to viewing graphs in this way, you’re better off not challenging the convention in your report.
  • Label each axis carefully, and be especially careful to include units of measure. You need to make sure that your readers understand perfectly well what your graph indicates.
  • Number and title your graphs. As with tables, the title of the graph should be informative but concise, and you should refer to your graph by number in the text (e.g., “Figure 1 shows the increase in the solubility rate as a function of temperature”).
  • Many editors of professional scientific journals prefer that writers distinguish the lines in their graphs by attaching a symbol to them, usually a geometric shape (triangle, square, etc.), and using that symbol throughout the curve of the line. Generally, readers have a hard time telling dotted lines, dot-dash lines, and solid lines apart, so you should avoid relying on line style alone to distinguish your data series. Editors don’t usually like different-colored lines within a graph because colors are difficult and expensive to reproduce; colors may, however, be great for your purposes, as long as you’re not planning to submit your paper to Nature. Use your discretion—try to employ whichever technique dramatizes the results most effectively.
  • Try to gather data at regular intervals, so the plot points on your graph aren’t too far apart. You can’t be sure of the arc you should draw between the plot points if the points are located at the far corners of the graph; over a fifteen-minute interval, perhaps the change occurred in the first or last thirty seconds of that period (in which case your straight-line connection between the points is misleading).
  • If you’re worried that you didn’t collect data at sufficiently regular intervals during your experiment, go ahead and connect the points with a straight line, but you may want to examine this problem as part of your Discussion section.
  • Make your graph large enough so that everything is legible and clearly demarcated, but not so large that it either overwhelms the rest of the Results section or provides a far greater range than you need to illustrate your point. If, for example, the seedlings of your plant grew only 15 mm during the trial, you don’t need to construct a graph that accounts for 100 mm of growth. The lines in your graph should more or less fill the space created by the axes; if you see that your data is confined to the lower left portion of the graph, you should probably re-adjust your scale.
  • If you create a set of graphs, make them the same size and format, including all the verbal and visual codes (captions, symbols, scale, etc.). You want to be as consistent as possible in your illustrations, so that your readers can easily make the comparisons you’re trying to get them to see.
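As with tables, a short sketch can make these conventions concrete. Here is a minimal example using Python’s matplotlib library (just one of many graphing tools; Excel, mentioned earlier, follows the same logic). The solubility numbers are invented solely to illustrate the conventions: independent variable on the x-axis, dependent variable on the y-axis, labeled axes with units, a distinct marker for each line, and a numbered, descriptive title.

import matplotlib.pyplot as plt

# Hypothetical data for illustration only: temperature is the independent
# variable (what was manipulated); rate of solubility is the dependent
# variable (what was measured).
temperature_c = [20, 30, 40, 50, 60, 70, 80]
polar_rate = [0.8, 1.1, 1.5, 2.0, 2.6, 3.3, 4.1]      # g/min
nonpolar_rate = [0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.5]   # g/min

plt.plot(temperature_c, polar_rate, marker="o", label="Polar solute")
plt.plot(temperature_c, nonpolar_rate, marker="s", label="Non-polar solute")
plt.xlabel("Temperature of solvent (°C)")   # independent variable on the x-axis
plt.ylabel("Rate of solubility (g/min)")    # dependent variable on the y-axis
plt.title("Figure 1. Rate of solubility as a function of temperature")
plt.legend()
plt.tight_layout()
plt.savefig("figure1_solubility.png")       # or plt.show() to view it on screen

Only two lines appear on the graph, each marked with its own symbol, and the axes are scaled to the data themselves so that the plotted lines fill the space created by the axes.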

How do I write a strong Discussion section?

The Discussion section is probably the least formalized part of the report, in that you can’t really apply the same structure to every type of experiment. In simple terms, here you tell your readers what to make of the Results you obtained. If you have done the Results part well, your readers should already recognize the trends in the data and have a fairly clear idea of whether your hypothesis was supported. Because the Results can seem so self-explanatory, many students find it difficult to know what material to add in this last section.

Basically, the Discussion contains several parts, in no particular order, but roughly moving from specific (i.e., related to your experiment only) to general (how your findings fit in the larger scientific community). In this section, you will, as a rule, need to:

  • Explain whether the data support your hypothesis
  • Acknowledge any anomalous data or deviations from what you expected
  • Derive conclusions, based on your findings, about the process you’re studying
  • Relate your findings to earlier work in the same area (if you can)
  • Explore the theoretical and/or practical implications of your findings

Let’s look at some dos and don’ts for each of these objectives.

Explain whether the data support your hypothesis

This statement is usually a good way to begin the Discussion, since you can’t effectively speak about the larger scientific value of your study until you’ve figured out the particulars of this experiment. You might begin this part of the Discussion by explicitly stating the relationships or correlations your data indicate between the independent and dependent variables. Then you can show more clearly why you believe your hypothesis was or was not supported. For example, if you tested solubility at various temperatures, you could start this section by noting that the rates of solubility increased as the temperature increased. If your initial hypothesis surmised that temperature change would not affect solubility, you would then say something like,

“The hypothesis that temperature change would not affect solubility was not supported by the data.”

Note: Students tend to view labs as practical tests of undeniable scientific truths. As a result, you may want to say that the hypothesis was “proved” or “disproved” or that it was “correct” or “incorrect.” These terms, however, reflect a degree of certainty that you as a scientist aren’t supposed to have. Remember, you’re testing a theory with a procedure that lasts only a few hours and relies on only a few trials, which severely compromises your ability to be sure about the “truth” you see. Words like “supported,” “indicated,” and “suggested” are more acceptable ways to evaluate your hypothesis.

Also, recognize that saying whether the data supported your hypothesis or not involves making a claim to be defended. As such, you need to show the readers that this claim is warranted by the evidence. Make sure that you’re very explicit about the relationship between the evidence and the conclusions you draw from it. This process is difficult for many writers because we don’t often justify conclusions in our regular lives. For example, you might nudge your friend at a party and whisper, “That guy’s drunk,” and once your friend lays eyes on the person in question, she might readily agree. In a scientific paper, by contrast, you would need to defend your claim more thoroughly by pointing to data such as slurred words, unsteady gait, and the lampshade-as-hat. In addition to pointing out these details, you would also need to show how (according to previous studies) these signs are consistent with inebriation, especially if they occur in conjunction with one another. To put it another way, tell your readers exactly how you got from point A (was the hypothesis supported?) to point B (yes/no).

Acknowledge any anomalous data, or deviations from what you expected

You need to take these exceptions and divergences into account, so that you qualify your conclusions sufficiently. For obvious reasons, your readers will doubt your authority if you (deliberately or inadvertently) overlook a key piece of data that doesn’t square with your perspective on what occurred. In a more philosophical sense, once you’ve ignored evidence that contradicts your claims, you’ve departed from the scientific method. The urge to “tidy up” the experiment is often strong, but if you give in to it you’re no longer performing good science.

Sometimes after you’ve performed a study or experiment, you realize that some part of the methods you used to test your hypothesis was flawed. In that case, it’s OK to suggest that if you had the chance to conduct your test again, you might change the design in this or that specific way in order to avoid such and such a problem. The key to making this approach work, though, is to be very precise about the weakness in your experiment, why and how you think that weakness might have affected your data, and how you would alter your protocol to eliminate—or limit the effects of—that weakness. Often, inexperienced researchers and writers feel the need to account for “wrong” data (remember, there’s no such animal), and so they speculate wildly about what might have screwed things up. These speculations include such factors as the unusually hot temperature in the room, or the possibility that their lab partners read the meters wrong, or the potentially defective equipment. These explanations are what scientists call “cop-outs,” or “lame”; don’t indicate that the experiment had a weakness unless you’re fairly certain that a) it really occurred and b) you can explain reasonably well how that weakness affected your results.

Derive conclusions, based on your findings, about the process you’re studying

Once you’ve dealt with how the data bear on your hypothesis, broaden the discussion to what the results suggest about the process you were studying. If, for example, your hypothesis dealt with the changes in solubility at different temperatures, then try to figure out what you can rationally say about the process of solubility more generally. If you’re doing an undergraduate lab, chances are that the lab will connect in some way to the material you’ve been covering either in lecture or in your reading, so you might choose to return to these resources as a way to help you think clearly about the process as a whole.

This part of the Discussion section is another place where you need to make sure that you’re not overreaching. Again, nothing you’ve found in one study would remotely allow you to claim that you now “know” something, or that something isn’t “true,” or that your experiment “confirmed” some principle or other. Hesitate before you go out on a limb—it’s dangerous! Use less absolutely conclusive language, including such words as “suggest,” “indicate,” “correspond,” “possibly,” “challenge,” etc.

Relate your findings to previous work in the field (if possible)

We’ve been talking about how to show that you belong in a particular community (such as biologists or anthropologists) by writing within conventions that they recognize and accept. Another way to do this is to identify a conversation going on among members of that community and to use your work to contribute to that conversation. In a larger philosophical sense, scientists can’t fully understand the value of their research unless they have some sense of the context that provoked and nourished it. That is, you have to recognize what’s new about your project (potentially, anyway) and how it benefits the wider body of scientific knowledge. On a more pragmatic level, especially for undergraduates, connecting your lab work to previous research will demonstrate to the TA that you see the big picture. You have an opportunity, in the Discussion section, to distinguish yourself from the students in your class who aren’t thinking beyond the barest facts of the study. Capitalize on this opportunity by putting your own work in context.

If you’re just beginning to work in the natural sciences (as a first-year biology or chemistry student, say), most likely the work you’ll be doing has already been performed and re-performed to a satisfactory degree. Hence, you could probably point to a similar experiment or study and compare/contrast your results and conclusions. More advanced work may deal with an issue that is somewhat less “resolved,” and so previous research may take the form of an ongoing debate, and you can use your own work to weigh in on that debate. If, for example, researchers are hotly disputing the value of herbal remedies for the common cold, and the results of your study suggest that Echinacea diminishes the symptoms but not the actual presence of the cold, then you might want to take some time in the Discussion section to recapitulate the specifics of the dispute as it relates to Echinacea as an herbal remedy. (Consider that you have probably already written in the Introduction about this debate as background research.)

Explore the theoretical and/or practical implications of your findings

This information is often the best way to end your Discussion (and, for all intents and purposes, the report). In argumentative writing generally, you want to use your closing words to convey the main point of your writing. This main point can be primarily theoretical (“Now that you understand this information, you’re in a better position to understand this larger issue”) or primarily practical (“You can use this information to take such and such an action”). In either case, the concluding statements help the reader to comprehend the significance of your project and your decision to write about it.

Since a lab report is argumentative—after all, you’re investigating a claim, and judging the legitimacy of that claim by generating and collecting evidence—it’s often a good idea to end your report with the same technique for establishing your main point. If you want to go the theoretical route, you might talk about the consequences your study has for the field or phenomenon you’re investigating. To return to the examples regarding solubility, you could end by reflecting on what your work on solubility as a function of temperature tells us (potentially) about solubility in general. (Some folks consider this type of exploration “pure” as opposed to “applied” science, although these labels can be problematic.) If you want to go the practical route, you could end by speculating about the medical, institutional, or commercial implications of your findings—in other words, answer the question, “What can this study help people to do?” In either case, you’re going to make your readers’ experience more satisfying, by helping them see why they spent their time learning what you had to teach them.

Works consulted

We consulted these works while writing this handout. This is not a comprehensive list of resources on the handout’s topic, and we encourage you to do your own research to find additional publications. Please do not use this list as a model for the format of your own reference list, as it may not match the citation style you are using. For guidance on formatting citations, please see the UNC Libraries citation tutorial. We revise these tips periodically and welcome feedback.

American Psychological Association. 2010. Publication Manual of the American Psychological Association, 6th ed. Washington, DC: American Psychological Association.

Beall, Herbert, and John Trimbur. 2001. A Short Guide to Writing About Chemistry, 2nd ed. New York: Longman.

Blum, Deborah, and Mary Knudson. 1997. A Field Guide for Science Writers: The Official Guide of the National Association of Science Writers. New York: Oxford University Press.

Booth, Wayne C., Gregory G. Colomb, Joseph M. Williams, Joseph Bizup, and William T. FitzGerald. 2016. The Craft of Research, 4th ed. Chicago: University of Chicago Press.

Briscoe, Mary Helen. 1996. Preparing Scientific Illustrations: A Guide to Better Posters, Presentations, and Publications, 2nd ed. New York: Springer-Verlag.

Council of Science Editors. 2014. Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers, 8th ed. Chicago & London: University of Chicago Press.

Davis, Martha. 2012. Scientific Papers and Presentations, 3rd ed. London: Academic Press.

Day, Robert A. 1994. How to Write and Publish a Scientific Paper, 4th ed. Phoenix: Oryx Press.

Porush, David. 1995. A Short Guide to Writing About Science. New York: Longman.

Williams, Joseph, and Joseph Bizup. 2017. Style: Lessons in Clarity and Grace, 12th ed. Boston: Pearson.

You may reproduce it for non-commercial use if you use the entire handout and attribute the source: The Writing Center, University of North Carolina at Chapel Hill



Top 10 Qualitative Research Report Templates with Samples and Examples

“Research is to see what everybody else has seen, and to think what nobody else has thought,” said Hungarian biochemist and Nobel laureate Albert Szent-Gyorgyi, who discovered Vitamin C. This statement on research as a human endeavor reminds us that execution matters, of course, but the solid pillar of research behind it is just as valuable.

Here’s an example to illustrate this in action.

Have you ever wondered what makes Oprah Winfrey a successful businesswoman? It's her research abilities. Oprah might not have been as successful as a news anchor and television show host if she hadn't done her exploratory research on key topics and public figures. Additionally, without the research and development that went into the internet, there was no way that you could be reading this post right now. Research is an essential tool for understanding the intricacies of many topics and advancing knowledge.

Businesses in the modern world are, increasingly, based on research. Within research, too, business owners are most interested in the qualitative world of non-numerical observations, data, and impactful insights. This is not to say that numbers or empirical research are not important; they are, of course, among the building blocks of business.

In this blog, however, we focus on qualitative research PPT templates that help you move forward, stay on the path to profitability, and make the best decisions for your business.

These presentation templates are 100% customizable and editable. Use them to leave a lasting impact on your audience and to build recall for your business value offering.

Top 10 Qualitative Research Report Templates

The goal of qualitative research methods is to monitor market trends and attitudes through surveys, analyses, historical research, and open-ended interviews. It helps interpret and comprehend human behavior using data. With the use of qualitative market research services, you may get access to the appropriate data that could help you make decisions.

After finishing the research portion of your assignment effectively, you'll need a captivating way to present your findings to your audience. Here, SlideTeam's qualitative research report templates come in handy. Our top ten qualitative research templates will help you effectively communicate your message. Let’s start a tour of this universe.

Template 1: Qualitative Research Proposal Template PowerPoint Presentation Slides

For the reader to understand your research proposal, you must have well-structured PPT slides. Don’t worry, SlideTeam has you covered. Our pre-made research proposal template presentation slides have no learning curve. This means that any user can quickly create a powerful, professional research proposal presentation using our PPT slides. Download these PowerPoint slides to convince your reviewers to accept your strategy.

Template 2: Qualitative Research PowerPoint PPT Template Bundles

You may have observed that some brands have taken the place of generic words for comparable products in our language. Even though we are aware that Band-Aid is a brand, we always ask for a Band-Aid whenever we require a plastic bandage. The power of branding is quite astounding. This is the benefit that our next PPT template bundles will provide for your business. Potential customers will find it simpler to recognize your brand and correctly associate it with a certain good or service because of our platform-independent PowerPoint slides. Download now!

Template 3: Qualitative Research Interviewing Presentation Deck

Do you find it hard to handle challenging conversations at work? This PowerPoint presentation can help you conduct effective interviews. Our presentation on qualitative research interviewing aims to “give voice” to the subjects, providing details on interviews, information, research, participants, and study methodologies. Download this PowerPoint presentation if you need to introduce yourself effectively in a quick visual communication.

Template 4: Thematic Analysis Qualitative Research PPT PowerPoint Presentation Outline Rules CPB

Thematic analysis is a technique used in qualitative research to uncover hidden patterns and other inferences based on a theme. Any research project can employ our thematic analysis qualitative research PPT. By using all the features of this adaptable PPT, you can convey information well. With the proper icons and symbols added, this presentation can serve as an instructional tool and can be opened on any platform. Download now!

Template 5: Comparative Analysis of Qualitative Research Methods

Conducting a successful comparative analysis is essential if you or your company wants to make sure that your decision-making process is efficient. With the help of our comparative analysis of qualitative research techniques, you can make choices that work for both your company and your clients. Focus Group Interviews, Cognitive Mapping, Critical Incident Technique, Verbal Protocol, Data Collection, Data Analysis, Research Scope, and Objective are covered in this extensive series of slides. Download today to carry out efficient business operations.

Template 6: Five Types of Qualitative Research Designs

Your business can achieve significant results with the help of our five qualitative research design types. Because it incorporates layers of case studies, phenomenology, historical studies, and action research, it qualifies as a full-fledged presentation. Download this presentation template to apply an objective, open-ended approach and to consider probable sources of error carefully.

Template 7: Key Phases for the Qualitative Research Process

Any attempt at qualitative research, no matter how small, must follow the prescribed procedures. The key stages of the qualitative research method are combined in this pre-made PPT template. This set of slides covers data analysis, research approach, research design, research aim, issue description, research questions, philosophical assumptions, data collection, and result interpretation. Get it now.

Template 8: Thematic Analysis of Qualitative Research Data

Thematic analysis is performed on the raw data acquired through focus groups, interviews, surveys, and similar methods. We go over every critical step in our slides on thematic analysis of qualitative research data, including how to uncover codes, identify themes in the data, finalize topics, explore each theme, and analyze documents. This completely editable PowerPoint presentation is available for instant download.

Template 9: SWOT Analysis of Qualitative Research Approach

Use this PowerPoint set to determine the strengths, weaknesses, opportunities, and threats facing your company. Each slide comes with a unique tool that may be utilized to strengthen your areas of weakness, grasp opportunities, and lessen risks. This template can be used to collect statistics, add your own information, and then begin considering how you might improve. Download now!

Template 10: Qualitative Research through Graph Showing Revenue Growth

A picture truly is worth a thousand words, even when it comes to summarizing your research’s findings. Researchers encounter an unavoidable issue when presenting qualitative study data; to address this challenge, SlideTeam has created a user-responsive Graph Showing Revenue Growth template. This slideshow graph could help you make informed decisions and encourage your company’s growth.

Template 11: Qualitative Research Data Collection Approaches and Implications

Like blood moving through the circulatory system, data moves through an organization. Businesses cannot run without data, and the first step in making better decisions is gathering it. This presentation template includes all the elements necessary to create a successful business plan, from data collection to analysis of the best method to comprehend concepts, opinions, or experiences. Get it now.

Template 12: Qualitative Research Analysis of Comments with Magnifying Glass

The first step in performing a qualitative analysis of your data is gathering all the comments and feedback you want to look at. Our templates help you document those comments. These slides are fully editable and include a visual accessibility function. The organization and formatting of the sections are excellent. Download it now.

PS: For more information on qualitative and quantitative data analysis, as well as to determine which type of market research is best for your company, check out this blog.

FAQs on Qualitative Research 

How do you write a qualitative research report?

A qualitative report is a summary of an experience, activity, event, or observation. It requires extensive detail and is typically divided into several sections. The format of a qualitative report includes an abstract, introduction, background information on the issue, the researcher’s role, theoretical viewpoint, methodology, ethical considerations, results, data analysis, limitations, discussion, conclusions, implications, references, and an appendix. The title, table of contents, and abstract form the beginning; the meat of the report comprises the introduction, literature review, account of the investigation, findings, discussion, and conclusions; and the final section is the references.

How do you Report Data in Qualitative Research?

A qualitative research report is frequently built around themes. You should be aware that it can be difficult to express qualitative findings as thoroughly as they deserve. It is customary to use direct quotes from sources such as interviews to support the viewpoint. To develop a precise description or explanation of the primary theme being studied, it is also crucial to clarify concepts and connect them. You also need to describe the design, including how the subjects were chosen, and then work through the remaining steps, ending with how the researcher verified the research’s findings.

What is an Example of a Report of Qualitative Data?

Qualitative data are categorical by nature. Reports that use qualitative data make it easier to present complex information. The semi-structured interview is one of the best illustrations of a qualitative data collection technique that provides open-ended responses from informants while allowing researchers to ask questions based on a set of predetermined themes. Since they enable both inductive and deductive evaluative reasoning, these are crucial tools for qualitative research.

How do you write an Introduction for a Qualitative Report?

A qualitative report must have a strong introduction. In this section, the researcher emphasizes the aims and objectives of the methodical study. It also addresses the problem that the systematic study aims to solve. In this section, it's imperative to state whether the research's goals were met. The researcher goes into further depth about the research problem in the introduction part and discusses the need for a methodical enquiry. The researcher must define any technical words or phrases used.



15 Research Methodology Examples

Research methodologies can roughly be categorized into three groups: quantitative, qualitative, and mixed-methods.

  • Qualitative Research: This methodology is based on obtaining deep, contextualized, non-numerical data. It can occur, for example, through open-ended questioning of research participants in order to understand human behavior. It’s all about describing and analyzing subjective phenomena such as emotions or experiences.
  • Quantitative Research: This methodology is rationally based and relies heavily on numerical analysis of empirical data. With quantitative research, you aim for objectivity by creating hypotheses and testing them through experiments or surveys, which allow for statistical analyses.
  • Mixed-Methods Research: Mixed-methods research combines both previous types into one project. We have more flexibility when designing our research study with mixed methods, since we can use multiple approaches depending on our needs at each stage. Using mixed methods can help us validate our results and offer greater predictability than either type of methodology alone could provide.

Below are research methodologies that fit into each category.


Qualitative Research Methodologies

1. Case Study

Conducts an in-depth examination of a specific case, individual, or event to understand a phenomenon.

Instead of examining a whole population for numerical trend data, case study researchers seek in-depth explanations of one event.

The benefit of case study research is its ability to elucidate overlooked details of interesting cases of a phenomenon (Busetto, Wick & Gumbinger, 2020). It offers deep insights for empathetic, reflective, and thoughtful understandings of that phenomenon.

However, case study findings aren’t transferrable to new contexts or for population-wide predictions. Instead, they inform practitioner understandings for nuanced, deep approaches to future instances (Liamputtong, 2020).

2. Grounded Theory

Grounded theory involves generating hypotheses and theories through the collection and interpretation of data (Faggiolani, n.d.). Its distinguishing feature is that it doesn’t test a hypothesis generated prior to analysis, but rather generates a hypothesis or ‘theory’ that emerges from the data.

It also involves the application of inductive reasoning and is often contrasted with the hypothetico-deductive model of scientific research. This research methodology was developed by Barney Glaser and Anselm Strauss in the 1960s (Glaser & Strauss, 2009). 

The basic difference between traditional scientific approaches to research and grounded theory is that the latter begins with a question, then collects data, and the theoretical framework is said to emerge later from this data.

By contrast, scientists usually begin with an existing theoretical framework , develop hypotheses, and only then start collecting data to verify or falsify the hypotheses.

3. Ethnography

In ethnographic research , the researcher immerses themselves within the group they are studying, often for long periods of time.

This type of research aims to understand the shared beliefs, practices, and values of a particular community by immersing the researcher within the cultural group.

Although ethnographic research cannot predict or identify trends in an entire population, it can create detailed explanations of cultural practices and comparisons between social and cultural groups.

When a person conducts an ethnographic study of themselves or their own culture, it can be considered autoethnography .

Its strength lies in producing comprehensive accounts of groups of people and their interactions.

Common methods researchers use during an ethnographic study include participant observation , thick description, unstructured interviews, and field notes vignettes. These methods can provide detailed and contextualized descriptions of their subjects.

Example Study

Liquidated: An Ethnography of Wall Street by Karen Ho involves an anthropologist who embeds herself with Wall Street firms to study the culture of Wall Street bankers and how this culture affects the broader economy and world.

4. Phenomenology

Phenomenology seeks to understand and describe individuals’ lived experiences concerning a specific phenomenon.

As a research methodology typically used in the social sciences , phenomenology involves the study of social reality as a product of intersubjectivity (the intersection of people’s cognitive perspectives) (Zahavi & Overgaard, n.d.).

This philosophical approach was first developed by Edmund Husserl.

5. Narrative Research

Narrative research explores personal stories and experiences to understand their meanings and interpretations.

It is also known as narrative inquiry and narrative analysis (Riessman, 1993).

This approach to research uses qualitative material like journals, field notes, letters, interviews, texts, photos, etc., as its data.

It is aimed at understanding the way people create meaning through narratives (Clandinin & Connelly, 2004).

6. Discourse Analysis

A discourse analysis examines the structure, patterns, and functions of language in context to understand how the text produces social constructs.

This methodology is common in critical theory , poststructuralism , and postmodernism. Its aim is to understand how language constructs discourses (roughly interpreted as “ways of thinking and constructing knowledge”).

As a qualitative methodology , its focus is on developing themes through close textual analysis rather than using numerical methods. Common methods for extracting data include semiotics and linguistic analysis.

7. Action Research

Action research involves researchers working collaboratively with stakeholders to address problems, develop interventions, and evaluate effectiveness.

Action research is a methodology and philosophy of research that is common in the social sciences.

The term was first coined in 1944 by Kurt Lewin, a German-American psychologist who also introduced applied research and group communication (Altrichter & Gstettner, 1993).

Lewin originally defined action research as involving two primary processes: taking action and doing research (Lewin, 1946).

Action research involves planning, action, and information-seeking about the result of the action.

Since Lewin’s original formulation, many different theoretical approaches to action research have been developed. These include action science, participatory action research, cooperative inquiry, and living educational theory among others.

Using Digital Sandbox Gaming to Improve Creativity Within Boys’ Writing (Ellison & Drew, 2019) is a study conducted by a school teacher who used video games to help teach his students English. It involved action research, where he interviewed his students to see if the use of games as stimuli for storytelling helped draw them into the learning experience, and iterated on his teaching style based on their feedback (disclaimer: I am the second author of this study).

See More: Examples of Qualitative Research

Quantitative Research Methodologies

8. Experimental Design

As the name suggests, this type of research is based on testing hypotheses in experimental settings by manipulating variables and observing their effects on other variables.

Its main benefit lies in the ability to manipulate specific variables and determine their effect on outcomes, which makes it well suited to researchers looking for causal links.

This is common, for example, in high-school science labs, where students are asked to introduce a variable into a setting in order to examine its effect.
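To make the analysis step concrete, below is a minimal sketch in Python of how an experimenter might compare an outcome between a treatment group and a control group. The scores and the 0.05 threshold are illustrative assumptions, not data from any real study.

```python
# Minimal sketch: analyzing a simple two-group experiment.
# All scores are hypothetical and used purely for illustration.
from scipy import stats

control = [72, 68, 75, 70, 66, 74, 69, 71]      # outcome without the manipulation
treatment = [78, 80, 74, 77, 82, 76, 79, 75]    # outcome with the manipulated variable

# An independent-samples t-test asks whether the difference between the
# group means is larger than chance variation alone would suggest.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the manipulation appears to have an effect.")
else:
    print("Fail to reject the null hypothesis: no reliable effect was detected.")
```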

9. Non-Experimental Design

Non-experimental design observes and measures associations between variables without manipulating them.

It can take, for example, the form of a ‘fly on the wall’ observation of a phenomenon, allowing researchers to examine authentic settings and changes that occur naturally in the environment.

10. Cross-Sectional Design

Cross-sectional design involves analyzing variables measured at a single, specific point in time.

This approach allows for an extensive examination and comparison of many distinct, independent subjects at once, offering a breadth that intensive methodologies such as case studies cannot match.

While cross-sectional design can be extremely useful in taking a ‘snapshot in time’, as a standalone method, it is not useful for examining changes in subjects after an intervention. The next methodology addresses this issue.

The prime example of this type of study is a census. A population census is mailed out to every house in the country, and each household must complete the census on the same evening. This allows the government to gather a snapshot of the nation’s demographics, beliefs, religion, and so on.
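As a rough sketch of how such a snapshot might be summarised, the hypothetical single-wave survey below (invented column names and values) is simply grouped and compared across categories collected at the same moment:

```python
# Minimal sketch: summarising a cross-sectional (single-wave) survey.
# The columns and values are hypothetical and for illustration only.
import pandas as pd

survey = pd.DataFrame({
    "age_group":    ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "owns_home":    [0, 1, 1, 1, 1, 0],
    "weekly_hours": [41, 38, 44, 40, 35, 37],
})

# All respondents were measured once, on the same occasion, so the analysis
# compares groups at that moment rather than tracking change over time.
snapshot = survey.groupby("age_group").agg(
    home_ownership_rate=("owns_home", "mean"),
    avg_weekly_hours=("weekly_hours", "mean"),
)
print(snapshot)
```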

11. Longitudinal Design

Longitudinal research gathers data from the same subjects over an extended period to analyze changes and development.

In contrast to cross-sectional tactics, longitudinal designs examine variables more than once, over a pre-determined time span, allowing for multiple data points to be taken at different times.

Longitudinal designs are also useful for examining cohort effects, by comparing differences or changes in the beliefs of several generations over time.

With multiple data points collected over extended periods, it is possible to examine continuous change in things like population dynamics or consumer behavior, which makes detailed analysis of change possible.
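As a small illustration, longitudinal data are often stored in "long" format, one row per subject per measurement occasion, which makes within-subject change easy to compute. The subjects, waves, and scores below are invented for the sketch:

```python
# Minimal sketch: longitudinal ("long"-format) data, with the same subjects
# measured at several time points. All values are hypothetical.
import pandas as pd

panel = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "wave":    [2020, 2021, 2022] * 3,
    "score":   [54, 58, 63, 47, 49, 55, 60, 61, 66],
})

# Change within each subject between the first and last wave of the study.
change = (
    panel.sort_values("wave")
         .groupby("subject")["score"]
         .agg(lambda s: s.iloc[-1] - s.iloc[0])
)
print(change)         # per-subject change over the study period
print(change.mean())  # average change across the sample
```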

12. Quasi-Experimental Design

Quasi-experimental design involves manipulating variables for analysis, but uses pre-existing groups of subjects rather than random groups.

Because the groups of research participants already exist, they cannot be randomly assigned to a cohort as with a true experimental design study. This makes inferring a causal relationship more difficult, but is nonetheless often more feasible in real-life settings.

Quasi-experimental designs are generally considered inferior to true experimental designs.

13. Correlational Research

Correlational research examines the relationships between two or more variables, determining the strength and direction of their association.

Similar to quasi-experimental methods, this type of research focuses on the relationships between variables rather than on manipulating them.

This approach provides a fast and easy way to form initial hypotheses based on the positive or negative correlation trends observed within a dataset.

Methods used for data analysis may include statistical correlation coefficients such as Pearson’s r or Spearman’s rho.
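For instance, a minimal Python sketch of computing both coefficients might look like the following; the two variables and their values are invented solely to illustrate the calculation.

```python
# Minimal sketch: correlational analysis of two hypothetical variables,
# e.g. weekly study hours and exam scores for ten students.
from scipy import stats

study_hours = [2, 4, 5, 6, 7, 8, 9, 10, 12, 14]
exam_score = [51, 55, 60, 58, 66, 70, 69, 75, 80, 84]

pearson_r, pearson_p = stats.pearsonr(study_hours, exam_score)       # linear association
spearman_rho, spearman_p = stats.spearmanr(study_hours, exam_score)  # rank-order association

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.4f})")
print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.4f})")
```

Note that a strong correlation under either coefficient still says nothing about causation, which is why correlational findings are usually treated as hypothesis-generating rather than confirmatory.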

Mixed-Methods Research Methodologies

14. Sequential Explanatory Design (QUAN→QUAL)

This methodology involves conducting quantitative analysis first, then supplementing it with a qualitative study.

It begins by collecting quantitative data that is then analyzed to determine any significant patterns or trends.

Qualitative methods are then employed to help interpret and expand on the quantitative results.

This offers greater depth of understanding across both the broad patterns and the finer-grained aspects of the research questions being addressed.

The rationale behind this approach is that the combined data collection generates richer context and insight into the issue at different levels, integrating qualitative exploration and statistical procedures within a single study.

15. Sequential Exploratory Design (QUAL→QUAN)

This methodology goes in the other direction, starting with qualitative analysis and ending with quantitative analysis.

It starts with qualitative research that delves deeply into complex areas, gathering rich information by interviewing or observing participants.

After this stage of exploration comes to an end, quantitative techniques are used to analyze the collected data through inferential statistics.

The idea is that the qualitative study arms the researchers with a strong hypothesis-testing framework, which they can then apply to a larger sample using quantitative methods.

When I first took research classes, I had a lot of trouble distinguishing between methodologies and methods.

The key is to remember that the methodology sets the direction, while the methods are the specific tools to be used. A good analogy is transport: first you need to choose a mode (public transport, private transport, motorized transit, non-motorized transit), then you can choose a tool (bus, car, bike, on foot).

While research methodologies can be split into three types, each type has many different nuanced methodologies that can be chosen, before you then choose the methods – or tools – to use in the study. Each has its own strengths and weaknesses, so choose wisely!

References

Altrichter, H., & Gstettner, P. (1993). Action research: A closed chapter in the history of German social science? Educational Action Research, 1(3), 329–360. https://doi.org/10.1080/0965079930010302

Audi, R. (1999). The Cambridge dictionary of philosophy. Cambridge; New York: Cambridge University Press. http://archive.org/details/cambridgediction00audi

Clandinin, D. J., & Connelly, F. M. (2004). Narrative inquiry: Experience and story in qualitative research. John Wiley & Sons.

Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Pearson/Merrill Prentice Hall.

Faggiolani, C. (n.d.). Perceived identity: Applying grounded theory in libraries. https://doi.org/10.4403/jlis.it-4592

Gauch, H. G. (2002). Scientific method in practice. Cambridge University Press.

Glaser, B. G., & Strauss, A. L. (2009). The discovery of grounded theory: Strategies for qualitative research. Transaction Publishers.

Kothari, C. R. (2004). Research methodology: Methods and techniques. New Age International.

Kuada, J. (2012). Research methodology: A project guide for university students. Samfundslitteratur.

Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–46. https://doi.org/10.1111/j.1540-4560.1946.tb02295.x

Mills, J., Bonner, A., & Francis, K. (2006). The development of constructivist grounded theory. International Journal of Qualitative Methods, 5(1), 25–35. https://doi.org/10.1177/160940690600500103

Mingers, J., & Willcocks, L. (2017). An integrative semiotic methodology for IS research. Information and Organization, 27(1), 17–36. https://doi.org/10.1016/j.infoandorg.2016.12.001

OECD. (2015). Frascati Manual 2015: Guidelines for collecting and reporting data on research and experimental development. Organisation for Economic Co-operation and Development. https://www.oecd-ilibrary.org/science-and-technology/frascati-manual-2015_9789264239012-en

Peirce, C. S. (1992). The essential Peirce, Volume 1: Selected philosophical writings (1867–1893). Indiana University Press.

Reese, W. L. (1980). Dictionary of philosophy and religion: Eastern and Western thought. Humanities Press.

Riessman, C. K. (1993). Narrative analysis. Sage Publications, Inc.

Saussure, F. de, & Riedlinger, A. (1959). Course in general linguistics. Philosophical Library.

Thomas, C. G. (2021). Research methodology and scientific writing. Springer Nature.

Zahavi, D., & Overgaard, S. (n.d.). Phenomenological sociology: The subjectivity of everyday life.


A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, they are framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written at length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) supported by evidence-based logical reasoning 10 ; and 6) able to be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning from specific observations or findings is used to form more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Table 1. Types of research questions and hypotheses

Quantitative research questions: descriptive research questions; comparative research questions; relationship research questions.

Quantitative research hypotheses: simple hypothesis; complex hypothesis; directional hypothesis; non-directional hypothesis; associative hypothesis; causal hypothesis; null hypothesis; alternative hypothesis; working hypothesis; statistical hypothesis; logical hypothesis; hypothesis-testing.

Qualitative research questions: contextual research questions; descriptive research questions; evaluation research questions; explanatory research questions; exploratory research questions; generative research questions; ideological research questions; ethnographic research questions; phenomenological research questions; grounded theory questions; qualitative case study questions.

Qualitative research hypotheses: hypothesis-generating.

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Quantitative research questions
Descriptive research question
- Measures responses of subjects to variables
- Presents variables to measure, analyze, or assess
What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?
Comparative research question
- Clarifies difference between one group with outcome variable and another group without outcome variable
Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?
- Compares the effects of variables
How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?
Relationship research question
- Defines trends, association, relationships, or interactions between dependent variable and independent variable
Is there a relationship between the number of medical student suicide (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable (simple hypothesis) or 2) between two or more independent and dependent variables (complex hypothesis). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome (directional hypothesis). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies (non-directional hypothesis). 4 In addition, hypotheses can 1) define interdependency between variables (associative hypothesis), 4 2) propose an effect on the dependent variable from manipulation of the independent variable (causal hypothesis), 4 3) state a negative relationship between two variables (null hypothesis), 4 , 11 , 15 4) replace the working hypothesis if rejected (alternative hypothesis), 15 5) explain the relationship of phenomena to possibly generate a theory (working hypothesis), 11 6) involve quantifiable variables that can be tested statistically (statistical hypothesis), 11 or 7) express a relationship whose interlinks can be verified logically (logical hypothesis). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3.

Quantitative research hypotheses
Simple hypothesis
- Predicts relationship between single dependent variable and single independent variable
If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered.
Complex hypothesis
- Foretells relationship between two or more independent and dependent variables
The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher would be the survival rate (1 dependent variable).
Directional hypothesis
- Identifies study direction based on theory towards particular outcome to clarify relationship between variables
Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects.
Non-directional hypothesis
- Nature of relationship between two variables or exact study direction is not identified
- Does not involve a theory
Women and men are different in terms of helpfulness. (Exact study direction is not identified)
Associative hypothesis
- Describes variable interdependency
- Change in one variable causes change in another variable
A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable).
Causal hypothesis
- An effect on dependent variable is predicted from manipulation of independent variable
A change into a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient.
Null hypothesis
- A negative statement indicating no relationship or difference between 2 variables
There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2).
Alternative hypothesis
- Following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables
The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2).
Working hypothesis
- A hypothesis that is initially accepted for further research to produce a feasible theory
Dairy cows fed with concentrates of different formulations will produce different amounts of milk.
Statistical hypothesis
- Assumption about the value of population parameter or relationship among several population characteristics
- Validity tested by a statistical experiment or analysis
The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2.
There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan.
Logical hypothesis
- Offers or proposes an explanation with limited or no extensive evidence
If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be less.
Hypothesis-testing (Quantitative hypothesis-testing research)
- Quantitative research uses deductive reasoning.
- This involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses.
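As a loose illustration of the deductive workflow described in this last entry of Table 3 (the counts below are invented and not taken from any cited study), the steps of stating hypotheses, collecting data, analysing them, and drawing a conclusion could look like this in Python:

```python
# Minimal sketch: quantitative hypothesis testing via deductive reasoning.
# Step 1: H0 - patient improvement is independent of which drug was received.
#         H1 - improvement rates differ between the two drugs.
# Step 2: collect data (the 2x2 counts below are entirely hypothetical).
# Step 3: analyse the data with an appropriate statistical test.
# Step 4: draw a conclusion against a pre-specified significance level.
from scipy import stats

#            improved  not improved
observed = [[45, 15],   # new drug
            [30, 30]]   # current drug

chi2, p_value, dof, expected = stats.chi2_contingency(observed)

alpha = 0.05
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the data are inconsistent with 'no difference between drugs'.")
else:
    print("Fail to reject H0: no reliable difference was detected.")
```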

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. The central question and associated subquestions are stated more than the hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.

Qualitative research questions
Contextual research question
- Ask the nature of what already exists
- Individuals or groups function to further clarify and understand the natural context of real-world problems
What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems)
Descriptive research question
- Aims to describe a phenomenon
What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities?
Evaluation research question
- Examines the effectiveness of existing practice or accepted frameworks
How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility?
Explanatory research question
- Clarifies a previously studied phenomenon and explains why it occurs
Why is there an increase in teenage pregnancy (phenomenon) in Tanzania?
Exploratory research question
- Explores areas that have not been fully investigated to have a deeper understanding of the research problem
What factors affect the mental health of medical students (areas that have not yet been fully investigated) during the COVID-19 pandemic?
Generative research question
- Develops an in-depth understanding of people’s behavior by asking ‘how would’ or ‘what if’ to identify problems and find solutions
How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative?
Ideological research question
- Aims to advance specific ideas or ideologies of a position
Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care?
Ethnographic research question
- Clarifies peoples’ nature, activities, their interactions, and the outcomes of their actions in specific settings
What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, their interactions, and the outcomes) of people in China who are suffering from pneumoconiosis?
Phenomenological research question
- Knows more about the phenomena that have impacted an individual
What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual)
Grounded theory question
- Focuses on social processes asking about what happens and how people interact, or uncovering social relationships and behaviors of groups
What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed?
Qualitative case study question
- Assesses a phenomenon using different sources of data to answer “why” and “how” questions
- Considers how the phenomenon is influenced by its contextual situation.
How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan?
Qualitative research hypotheses
Hypothesis-generating (Qualitative hypothesis-generating research)
- Qualitative research uses inductive reasoning.
- This involves data collection from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and using the formal hypothesis as a framework for testing the hypothesis.
- Qualitative exploratory studies explore areas deeper, clarifying subjective experience and allowing formulation of a formal hypothesis potentially testable in a future quantitative approach.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research question and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and how to transform these ambiguous research question(s) and hypothesis(es) into clear and good statements.

Table 6. Ambiguous versus clear statements in quantitative research

Research question
- Unclear and weak statement (Statement 1): Which is more effective between smoke moxibustion and smokeless moxibustion?
- Clear and good statement (Statement 2): “Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated.”
- Points to avoid: 1) vague and unfocused questions; 2) closed questions simply answerable by yes or no; 3) questions requiring a simple choice.

Hypothesis
- Unclear and weak statement (Statement 1): The smoke moxibustion group will have higher cephalic presentation.
- Clear and good statement (Statement 2): “Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death.”
- Points to avoid: 1) unverifiable hypotheses; 2) incompletely stated groups of comparison; 3) insufficiently described variables or outcomes.

Research objective
- Unclear and weak statement (Statement 1): To determine which is more effective between smoke moxibustion and smokeless moxibustion.
- Clear and good statement (Statement 2): “The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child.”
- Points to avoid: 1) poor understanding of the research question and hypotheses; 2) insufficient description of population, variables, or study outcomes.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

Table 7. Ambiguous versus clear statements in qualitative research

Research question
- Unclear and weak statement (Statement 1): Does disrespect and abuse (D&A) occur in childbirth in Tanzania?
- Clear and good statement (Statement 2): How does disrespect and abuse (D&A) occur and what are the types of physical and psychological abuses observed in midwives’ actual care during facility-based childbirth in urban Tanzania?
- Points to avoid: 1) ambiguous or oversimplistic questions; 2) questions unverifiable by data collection and analysis.

Hypothesis
- Unclear and weak statement (Statement 1): Disrespect and abuse (D&A) occur in childbirth in Tanzania.
- Clear and good statement (Statement 2): Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania.
- Points to avoid: 1) statements simply expressing facts; 2) insufficiently described concepts or variables.

Research objective
- Unclear and weak statement (Statement 1): To describe disrespect and abuse (D&A) in childbirth in Tanzania.
- Clear and good statement (Statement 2): “This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania.”
- Points to avoid: 1) statements unrelated to the research question and hypotheses; 2) unattainable or unexplorable objectives.

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .

Fig. 1. General flow for constructing effective research questions and hypotheses prior to conducting research.

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore, or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions are used more frequently in survey projects, whereas hypotheses are used more frequently in experiments, to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

Fig. 2. Algorithm for building research questions and hypotheses in quantitative research.

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student’s t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.


Case Study – Methods, Examples and Guide

Case Study Research

A case study is a research method that involves an in-depth examination and analysis of a particular phenomenon or case, such as an individual, organization, community, event, or situation.

It is a qualitative research approach that aims to provide a detailed and comprehensive understanding of the case being studied. Case studies typically involve multiple sources of data, including interviews, observations, documents, and artifacts, which are analyzed using various techniques, such as content analysis, thematic analysis, and grounded theory. The findings of a case study are often used to develop theories, inform policy or practice, or generate new research questions.
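To give a feel for one small, mechanical step in such analysis, the sketch below tallies how often predefined codes were applied to interview excerpts. The excerpts and code labels are invented, and real content or thematic analysis involves far more interpretive work than this counting step.

```python
# Minimal sketch: tallying predefined codes across coded interview excerpts,
# one small step that can appear within a content analysis. All excerpts and
# code labels are hypothetical.
from collections import Counter

coded_excerpts = [
    {"excerpt": "I never know my shifts until the last minute.", "codes": ["uncertainty", "scheduling"]},
    {"excerpt": "My supervisor checks in on us every morning.", "codes": ["support"]},
    {"excerpt": "We are always short-staffed on weekends.", "codes": ["workload", "scheduling"]},
]

code_counts = Counter(code for item in coded_excerpts for code in item["codes"])
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```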

Types of Case Study

Types and Methods of Case Study are as follows:

Single-Case Study

A single-case study is an in-depth analysis of a single case. This type of case study is useful when the researcher wants to understand a specific phenomenon in detail.

For Example , A researcher might conduct a single-case study on a particular individual to understand their experiences with a particular health condition or a specific organization to explore their management practices. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a single-case study are often used to generate new research questions, develop theories, or inform policy or practice.

Multiple-Case Study

A multiple-case study involves the analysis of several cases that are similar in nature. This type of case study is useful when the researcher wants to identify similarities and differences between the cases.

For Example, a researcher might conduct a multiple-case study on several companies to explore the factors that contribute to their success or failure. The researcher collects data from each case, compares and contrasts the findings, and uses various techniques to analyze the data, such as comparative analysis or pattern-matching. The findings of a multiple-case study can be used to develop theories, inform policy or practice, or generate new research questions.

Exploratory Case Study

An exploratory case study is used to explore a new or understudied phenomenon. This type of case study is useful when the researcher wants to generate hypotheses or theories about the phenomenon.

For Example, a researcher might conduct an exploratory case study on a new technology to understand its potential impact on society. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as grounded theory or content analysis. The findings of an exploratory case study can be used to generate new research questions, develop theories, or inform policy or practice.

Descriptive Case Study

A descriptive case study is used to describe a particular phenomenon in detail. This type of case study is useful when the researcher wants to provide a comprehensive account of the phenomenon.

For Example, a researcher might conduct a descriptive case study on a particular community to understand its social and economic characteristics. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a descriptive case study can be used to inform policy or practice or generate new research questions.

Instrumental Case Study

An instrumental case study is used to understand a particular phenomenon that is instrumental in achieving a particular goal. This type of case study is useful when the researcher wants to understand the role of the phenomenon in achieving the goal.

For Example, a researcher might conduct an instrumental case study on a particular policy to understand its impact on achieving a particular goal, such as reducing poverty. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of an instrumental case study can be used to inform policy or practice or generate new research questions.

Case Study Data Collection Methods

Here are some common data collection methods for case studies:

Interviews

Interviews involve asking questions to individuals who have knowledge or experience relevant to the case study. Interviews can be structured (where the same questions are asked to all participants) or unstructured (where the interviewer follows up on the responses with further questions). Interviews can be conducted in person, over the phone, or through video conferencing.

Observations

Observations involve watching and recording the behavior and activities of individuals or groups relevant to the case study. Observations can be participant (where the researcher actively participates in the activities) or non-participant (where the researcher observes from a distance). Observations can be recorded using notes, audio or video recordings, or photographs.

Documents

Documents can be used as a source of information for case studies. Documents can include reports, memos, emails, letters, and other written materials related to the case study. Documents can be collected from the case study participants or from public sources.

Surveys

Surveys involve asking a set of questions to a sample of individuals relevant to the case study. Surveys can be administered in person, over the phone, through mail or email, or online. Surveys can be used to gather information on attitudes, opinions, or behaviors related to the case study.

Artifacts

Artifacts are physical objects relevant to the case study. Artifacts can include tools, equipment, products, or other objects that provide insights into the case study phenomenon.

How to conduct Case Study Research

Conducting case study research involves several steps that need to be followed to ensure the quality and rigor of the study. Here are the steps to conduct case study research:

  • Define the research questions: The first step in case study research is to define the research questions. The research questions should be specific, measurable, and relevant to the case study phenomenon under investigation.
  • Select the case: The next step is to select the case or cases to be studied. The case should be relevant to the research questions and should provide rich and diverse data that can be used to answer the research questions.
  • Collect data: Data can be collected using various methods, such as interviews, observations, documents, surveys, and artifacts. The data collection method should be selected based on the research questions and the nature of the case study phenomenon.
  • Analyze the data: The data collected from the case study should be analyzed using various techniques, such as content analysis, thematic analysis, or grounded theory. The analysis should be guided by the research questions and should aim to provide insights and conclusions relevant to the research questions.
  • Draw conclusions: The conclusions drawn from the case study should be based on the data analysis and should be relevant to the research questions. The conclusions should be supported by evidence and should be clearly stated.
  • Validate the findings: The findings of the case study should be validated by reviewing the data and the analysis with participants or other experts in the field. This helps to ensure the validity and reliability of the findings.
  • Write the report: The final step is to write the report of the case study research. The report should provide a clear description of the case study phenomenon, the research questions, the data collection methods, the data analysis, the findings, and the conclusions. The report should be written in a clear and concise manner and should follow the guidelines for academic writing.

Examples of Case Study

Here are some examples of case study research:

  • The Hawthorne Studies : Conducted between 1924 and 1932, the Hawthorne Studies were a series of case studies conducted by Elton Mayo and his colleagues to examine the impact of work environment on employee productivity. The studies were conducted at the Hawthorne Works plant of the Western Electric Company in Chicago and included interviews, observations, and experiments.
  • The Stanford Prison Experiment: Conducted in 1971, the Stanford Prison Experiment was a case study conducted by Philip Zimbardo to examine the psychological effects of power and authority. The study involved simulating a prison environment and assigning participants to the role of guards or prisoners. The study was controversial due to the ethical issues it raised.
  • The Challenger Disaster: The Challenger Disaster was a case study conducted to examine the causes of the Space Shuttle Challenger explosion in 1986. The study included interviews, observations, and analysis of data to identify the technical, organizational, and cultural factors that contributed to the disaster.
  • The Enron Scandal: The Enron Scandal was a case study conducted to examine the causes of the Enron Corporation’s bankruptcy in 2001. The study included interviews, analysis of financial data, and review of documents to identify the accounting practices, corporate culture, and ethical issues that led to the company’s downfall.
  • The Fukushima Nuclear Disaster : The Fukushima Nuclear Disaster was a case study conducted to examine the causes of the nuclear accident that occurred at the Fukushima Daiichi Nuclear Power Plant in Japan in 2011. The study included interviews, analysis of data, and review of documents to identify the technical, organizational, and cultural factors that contributed to the disaster.

Application of Case Study

Case studies have a wide range of applications across various fields and industries. Here are some examples:

Business and Management

Case studies are widely used in business and management to examine real-life situations and develop problem-solving skills. Case studies can help students and professionals to develop a deep understanding of business concepts, theories, and best practices.

Healthcare

Case studies are used in healthcare to examine patient care, treatment options, and outcomes. Case studies can help healthcare professionals to develop critical thinking skills, diagnose complex medical conditions, and develop effective treatment plans.

Education

Case studies are used in education to examine teaching and learning practices. Case studies can help educators to develop effective teaching strategies, evaluate student progress, and identify areas for improvement.

Social Sciences

Case studies are widely used in social sciences to examine human behavior, social phenomena, and cultural practices. Case studies can help researchers to develop theories, test hypotheses, and gain insights into complex social issues.

Law and Ethics

Case studies are used in law and ethics to examine legal and ethical dilemmas. Case studies can help lawyers, policymakers, and ethical professionals to develop critical thinking skills, analyze complex cases, and make informed decisions.

Purpose of Case Study

The purpose of a case study is to provide a detailed analysis of a specific phenomenon, issue, or problem in its real-life context. A case study is a qualitative research method that involves the in-depth exploration and analysis of a particular case, which can be an individual, group, organization, event, or community.

The primary purpose of a case study is to generate a comprehensive and nuanced understanding of the case, including its history, context, and dynamics. Case studies can help researchers to identify and examine the underlying factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and detailed understanding of the case, which can inform future research, practice, or policy.

Case studies can also serve other purposes, including:

  • Illustrating a theory or concept: Case studies can be used to illustrate and explain theoretical concepts and frameworks, providing concrete examples of how they can be applied in real-life situations.
  • Developing hypotheses: Case studies can help to generate hypotheses about the causal relationships between different factors and outcomes, which can be tested through further research.
  • Providing insight into complex issues: Case studies can provide insights into complex and multifaceted issues, which may be difficult to understand through other research methods.
  • Informing practice or policy: Case studies can be used to inform practice or policy by identifying best practices, lessons learned, or areas for improvement.

Advantages of Case Study Research

There are several advantages of case study research, including:

  • In-depth exploration: Case study research allows for a detailed exploration and analysis of a specific phenomenon, issue, or problem in its real-life context. This can provide a comprehensive understanding of the case and its dynamics, which may not be possible through other research methods.
  • Rich data: Case study research can generate rich and detailed data, including qualitative data such as interviews, observations, and documents. This can provide a nuanced understanding of the case and its complexity.
  • Holistic perspective: Case study research allows for a holistic perspective of the case, taking into account the various factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and comprehensive understanding of the case.
  • Theory development: Case study research can help to develop and refine theories and concepts by providing empirical evidence and concrete examples of how they can be applied in real-life situations.
  • Practical application: Case study research can inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
  • Contextualization: Case study research takes into account the specific context in which the case is situated, which can help to understand how the case is influenced by the social, cultural, and historical factors of its environment.

Limitations of Case Study Research

There are several limitations of case study research, including:

  • Limited generalizability : Case studies are typically focused on a single case or a small number of cases, which limits the generalizability of the findings. The unique characteristics of the case may not be applicable to other contexts or populations, which may limit the external validity of the research.
  • Biased sampling: Case studies may rely on purposive or convenience sampling, which can introduce bias into the sample selection process. This may limit the representativeness of the sample and the generalizability of the findings.
  • Subjectivity: Case studies rely on the interpretation of the researcher, which can introduce subjectivity into the analysis. The researcher’s own biases, assumptions, and perspectives may influence the findings, which may limit the objectivity of the research.
  • Limited control: Case studies are typically conducted in naturalistic settings, which limits the control that the researcher has over the environment and the variables being studied. This may limit the ability to establish causal relationships between variables.
  • Time-consuming: Case studies can be time-consuming to conduct, as they typically involve a detailed exploration and analysis of a specific case. This may limit the feasibility of conducting multiple case studies or conducting case studies in a timely manner.
  • Resource-intensive: Case studies may require significant resources, including time, funding, and expertise. This may limit the ability of researchers to conduct case studies in resource-constrained settings.

Sample Paper

This paper should be used only as an example of a research paper write-up. Horizontal rules signify the top and bottom edges of pages. For sample references which are not included with this paper, you should consult the Publication Manual of the American Psychological Association, 4th Edition .

This paper is provided only to give you an idea of what a research paper might look like. You are not allowed to copy any of the text of this paper in writing your own report.

Because word processor copies of papers don’t translate well into web pages, an actual paper should be formatted according to the formatting rules for your context rather than copied from this web version. Note especially three respects in which this web rendering differs from a properly formatted paper. First, except for the title page, the running header should appear in the upper right corner of every page with the page number below it. Second, paragraphs and text should be double spaced and the start of each paragraph should be indented. Third, the horizontal lines used here merely indicate mandatory page breaks and should not be used in your paper.

The Effects of a Supported Employment Program on Psychosocial Indicators for Persons with Severe Mental Illness

William M.K. Trochim

Cornell University

Running Head: SUPPORTED EMPLOYMENT

This paper describes the psychosocial effects of a program of supported employment (SE) for persons with severe mental illness. The SE program involves extended individualized supported employment for clients through a Mobile Job Support Worker (MJSW) who maintains contact with the client after job placement and supports the client in a variety of ways. A 50% simple random sample was taken of all persons who entered the Thresholds Agency between 3/1/93 and 2/28/95 and who met study criteria. The resulting 484 cases were randomly assigned to either the SE condition (treatment group) or the usual protocol (control group) which consisted of life skills training and employment in an in-house sheltered workshop setting. All participants were measured at intake and at 3 months after beginning employment, on two measures of psychological functioning (the BPRS and GAS) and two measures of self esteem (RSE and ESE). Significant treatment effects were found on all four measures, but they were in the opposite direction from what was hypothesized. Instead of functioning better and having more self esteem, persons in SE had lower functioning levels and lower self esteem. The most likely explanation is that people who work in low-paying service jobs in real world settings generally do not like them and experience significant job stress, whether they have severe mental illness or not. The implications for theory in psychosocial rehabilitation are considered.

The Effects of a Supported Employment Program on Psychosocial Indicators for Persons with Severe Mental Illness

Over the past quarter century a shift has occurred from traditional institution-based models of care for persons with severe mental illness (SMI) to more individualized community-based treatments. Along with this, there has been a significant shift in thought about the potential for persons with SMI to be “rehabilitated” toward lifestyles that more closely approximate those of persons without such illness. A central issue is the ability of a person to hold a regular full-time job for a sustained period of time. There have been several attempts to develop novel and radical models for program interventions designed to assist persons with SMI to sustain full-time employment while living in the community. The most promising of these have emerged from the tradition of psychiatric rehabilitation with its emphases on individual consumer goal setting, skills training, job preparation and employment support (Cook, Jonikas and Solomon, 1992). These are relatively new and field evaluations are rare or have only recently been initiated (Cook and Razzano, 1992; Cook, 1992). Most of the early attempts to evaluate such programs have naturally focused almost exclusively on employment outcomes. However, theory suggests that sustained employment and living in the community may have important therapeutic benefits in addition to the obvious economic ones. To date, there have been no formal studies of the effects of psychiatric rehabilitation programs on key illness-related outcomes. To address this issue, this study seeks to examine the effects of a new program of supported employment on psychosocial outcomes for persons with SMI.

Over the past several decades, the theory of vocational rehabilitation has experienced two major stages of evolution. Original models of vocational rehabilitation were based on the idea of sheltered workshop employment. Clients were paid a piece rate and worked only with other individuals who were disabled. Sheltered workshops tended to be “end points” for persons with severe and profound mental retardation since few ever moved from sheltered to competitive employment (Woest, Klein & Atkins, 1986). Controlled studies of sheltered workshop performance of persons with mental illness suggested only minimal success (Griffiths, 1974) and other research indicated that persons with mental illness earned lower wages, presented more behavior problems, and showed poorer workshop attendance than workers with other disabilities (Whitehead, 1977; Ciardiello, 1981).

In the 1980s, a new model of services called Supported Employment (SE) was proposed as less expensive and more normalizing for persons undergoing rehabilitation (Wehman, 1985). The SE model emphasizes first locating a job in an integrated setting for minimum wage or above, and then placing the person on the job and providing the training and support services needed to remain employed (Wehman, 1985). Services such as individualized job development, one-on-one job coaching, advocacy with co-workers and employers, and “fading” support were found to be effective in maintaining employment for individuals with severe and profound mental retardation (Revell, Wehman & Arnold, 1984). The idea that this model could be generalized to persons with all types of severe disabilities, including severe mental illness, became commonly accepted (Chadsey-Rusch & Rusch, 1986).

One of the more notable SE programs was developed at Thresholds, the site for the present study, which created a new staff position called the mobile job support worker (MJSW) and removed the common six month time limit for many placements. MJSWs provide ongoing, mobile support and intervention at or near the work site, even for jobs with high degrees of independence (Cook & Hoffschmidt, 1993). Time limits for many placements were removed so that clients could stay on as permanent employees if they and their employers wished. The suspension of time limits on job placements, along with MJSW support, became the basis of SE services delivered at Thresholds.

There are two key psychosocial outcome constructs of interest in this study. The first is the overall psychological functioning of the person with SMI. This would include the specification of severity of cognitive and affective symptomatology as well as the overall level of psychological functioning. The second is the level of self-reported self esteem of the person. This was measured both generally and with specific reference to employment.

The key hypothesis of this study is:

  • HO: A program of supported employment will result in either no change or negative effects on psychological functioning and self esteem.

which will be tested against the alternative:

  • HA: A program of supported employment will lead to positive effects on psychological functioning and self esteem.

The population of interest for this study is all adults with SMI residing in the U.S. in the early 1990s. The population that is accessible to this study consists of all persons who were clients of the Thresholds Agency in Chicago, Illinois between the dates of March 1, 1993 and February 28, 1995 who met the following criteria: 1) a history of severe mental illness (e.g. either schizophrenia, severe depression or manic-depression); 2) a willingness to achieve paid employment; 3) their primary diagnosis must not include chronic alcoholism or hard drug use; and 4) they must be 18 years of age or older. The sampling frame was obtained from records of the agency. Because of the large number of clients who pass through the agency each year (e.g. approximately 500 who meet the criteria) a simple random sample of 50% was chosen for inclusion in the study. This resulted in a sample size of 484 persons over the two-year course of the study.

On average, study participants were 30 years old and high school graduates (average education level = 13 years). The majority of participants (70%) were male. Most had never married (85%), few (2%) were currently married, and the remainder had been formerly married (13%). Just over half (51%) were African American, with the remainder Caucasian (43%) or other minority groups (6%). In terms of illness history, the members in the sample averaged 4 prior psychiatric hospitalizations and spent a lifetime average of 9 months as patients in psychiatric hospitals. The primary diagnoses were schizophrenia (42%) and severe chronic depression (37%). Participants had spent an average of almost two and one-half years (29 months) at the longest job they ever held.

While the study sample cannot be considered representative of the original population of interest, generalizability was not a primary goal – the major purpose of this study was to determine whether a specific SE program could work in an accessible context. Any effects of SE evident in this study can be generalized to urban psychiatric agencies that are similar to Thresholds, have a similar clientele, and implement a similar program.

All but one of the measures used in this study are well-known instruments in the research literature on psychosocial functioning. All of the instruments were administered as part of a structured interview that an evaluation social worker had with study participants at regular intervals.

Two measures of psychological functioning were used. The Brief Psychiatric Rating Scale (BPRS)(Overall and Gorham, 1962) is an 18-item scale that measures perceived severity of symptoms ranging from “somatic concern” and “anxiety” to “depressive mood” and “disorientation.” Ratings are given on a 0-to-6 Likert-type response scale where 0=“not present” and 6=“extremely severe” and the scale score is simply the sum of the 18 items. The Global Assessment Scale (GAS)(Endicott et al, 1976) is a single 1-to-100 rating on a scale where each ten-point increment has a detailed description of functioning (higher scores indicate better functioning). For instance, one would give a rating between 91-100 if the person showed “no symptoms, superior functioning…” and a value between 1-10 if the person “needs constant supervision…”

Two measures of self esteem were used. The first is the Rosenberg Self Esteem (RSE) Scale (Rosenberg, 1965), a 10-item scale rated on a 6-point response format where 1=“strongly disagree” and 6=“strongly agree” and there is no neutral point. The total score is simply the sum across the ten items, with five of the items being reversals. The second measure was developed explicitly for this study and was designed to measure the Employment Self Esteem (ESE) of a person with SMI. This is a 10-item scale that uses a 4-point response format where 1=“strongly disagree” and 4=“strongly agree” and there is no neutral point. The final ten items were selected from a pool of 97 original candidate items, based upon high item-total score correlations and a judgment of face validity by a panel of three psychologists. This instrument was deliberately kept simple – a shorter response scale and no reversal items – because of the difficulties associated with measuring a population with SMI. The entire instrument is provided in Appendix A.

All four of the measures evidenced strong reliability and validity. Internal consistency reliability estimates using Cronbach’s alpha ranged from .76 for ESE to .88 for RSE. Test-retest reliabilities were nearly as high, ranging from .72 for ESE to .83 for the BPRS. Convergent validity was evidenced by the correlations within construct. For the two psychological functioning scales the correlation was .68 while for the self esteem measures it was somewhat lower at .57. Discriminant validity was examined by looking at the cross-construct correlations which ranged from .18 (BPRS-ESE) to .41 (GAS-RSE).
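
To make the scale scoring and reliability figures above concrete, here is a minimal sketch, in Python, of how a summed Likert scale with reversal items might be scored and how Cronbach's alpha could be computed. The responses, the number of participants, and the choice of reversed items are hypothetical; only the general procedure follows the description of the RSE given above.

    import numpy as np

    # Hypothetical responses: 5 participants x 10 items rated 1-6 (an RSE-style format).
    responses = np.array([
        [6, 5, 2, 6, 5, 1, 6, 5, 2, 6],
        [4, 4, 3, 5, 4, 3, 4, 4, 3, 5],
        [2, 3, 5, 2, 3, 6, 2, 2, 5, 1],
        [5, 5, 1, 6, 6, 2, 5, 6, 1, 6],
        [3, 4, 4, 3, 3, 4, 3, 3, 4, 3],
    ], dtype=float)

    reversed_items = [2, 5, 8]                                   # hypothetical reversal items
    scored = responses.copy()
    scored[:, reversed_items] = 7 - scored[:, reversed_items]    # reverse-score a 1-6 item

    total_scores = scored.sum(axis=1)                            # scale score = sum of items

    # Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    k = scored.shape[1]
    alpha = (k / (k - 1)) * (1 - scored.var(axis=0, ddof=1).sum() / total_scores.var(ddof=1))

    print(total_scores, round(alpha, 2))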

A pretest-posttest two-group randomized experimental design was used in this study. In notational form, the design can be depicted as:

R O X O
R O O

where:

  • R = the groups were randomly assigned
  • O = the four measures (i.e. BPRS, GAS, RSE, and ESE)
  • X = supported employment

The comparison group received the standard Thresholds protocol which emphasized in-house training in life skills and employment in an in-house sheltered workshop. All participants were measured at intake (pretest) and at three months after intake (posttest).

This type of randomized experimental design is generally strong in internal validity. It rules out threats of history, maturation, testing, instrumentation, mortality and selection interactions. Its primary weaknesses are in the potential for treatment-related mortality (i.e. a type of selection-mortality) and for problems that result from the reactions of participants and administrators to knowledge of the varying experimental conditions. In this study, the drop-out rate was 4% (N=9) for the control group and 5% (N=13) in the treatment group. Because these rates are low and are approximately equal in each group, it is not plausible that there is differential mortality. There is a possibility that there were some deleterious effects due to participant knowledge of the other group’s existence (e.g. compensatory rivalry, resentful demoralization). Staff were debriefed at several points throughout the study and were explicitly asked about such issues. There were no reports of any apparent negative feelings from the participants in this regard. Nor is it plausible that staff might have equalized conditions between the two groups. Staff were given extensive training and were monitored throughout the course of the study. Overall, this study can be considered strong with respect to internal validity.

Between 3/1/93 and 2/28/95 each person admitted to Thresholds who met the study inclusion criteria was immediately assigned a random number that gave them a 50/50 chance of being selected into the study sample. For those selected, the purpose of the study was explained, including the nature of the two treatments, and the need for and use of random assignment. Participants were assured confidentiality and were given an opportunity to decline to participate in the study. Only 7 people (out of 491) refused to participate. At intake, each selected sample member was assigned a random number giving them a 50/50 chance of being assigned to either the Supported Employment condition or the standard in-agency sheltered workshop. In addition, all study participants were given the four measures at intake.
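
As a rough illustration of the selection and assignment procedure just described, the sketch below gives each eligible client a 50/50 chance of entering the study sample and, for those selected, a 50/50 chance of being assigned to the Supported Employment or control condition. The client list, the number of clients, and the random seed are all hypothetical; the actual study used agency records and its own randomization procedure.

    import random

    random.seed(42)  # hypothetical seed, used only so the illustration is reproducible

    eligible_clients = [f"client_{i:03d}" for i in range(1, 969)]   # hypothetical ID list

    # Step 1: simple random selection into the study sample (~50% of eligible clients).
    study_sample = [c for c in eligible_clients if random.random() < 0.5]

    # Step 2: 50/50 random assignment to Supported Employment vs. sheltered-workshop control.
    assignments = {c: ("SE" if random.random() < 0.5 else "control") for c in study_sample}

    n_se = sum(1 for cond in assignments.values() if cond == "SE")
    print(len(study_sample), n_se, len(study_sample) - n_se)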

All participants spent the initial two weeks in the program in training and orientation. This consisted of life skill training (e.g. handling money, getting around, cooking and nutrition) and job preparation (employee roles, coping strategies). At the end of that period, each participant was assigned to a job site – at the agency sheltered workshop for those in the control condition, and to an outside employer if in the Supported Employment group. Control participants were expected to work full-time at the sheltered workshop for a three-month period, at which point they were posttested and given an opportunity to obtain outside employment (either Supported Employment or not). The Supported Employment participants were each assigned a case worker – called a Mobile Job Support Worker (MJSW) – who met with the person at the job site two times per week for an hour each time. The MJSW could provide any support or assistance deemed necessary to help the person cope with job stress, including counseling or working beside the person for short periods of time. In addition, the MJSW was always accessible by cellular telephone, and could be called by the participant or the employer at any time. At the end of three months, each participant was post-tested and given the option of staying with their current job (with or without Supported Employment) or moving to the sheltered workshop.

There were 484 participants in the final sample for this study, 242 in each treatment. There were 9 drop-outs from the control group and 13 from the treatment group, leaving a total of 233 and 229 in each group respectively from whom both pretest and posttest were obtained. Due to unexpected difficulties in coping with job stress, 19 Supported Employment participants had to be transferred into the sheltered workshop prior to the posttest. In all 19 cases, no one was transferred prior to week 6 of employment, and 15 were transferred after week 8. In all analyses, these cases were included with the Supported Employment group (intent-to-treat analysis) yielding treatment effect estimates that are likely to be conservative.

Results

The major results for the four outcome measures are shown in Figure 1.

Insert Figure 1 about here

It is immediately apparent that in all four cases the null hypothesis has to be accepted – contrary to expectations, Supported Employment cases did significantly worse on all four outcomes than did control participants.

The mean gains, standard deviations, sample sizes and t-values (t-test for differences in average gain) are shown for the four outcome measures in Table 1.

Insert Table 1 about here

The results in the table confirm the impressions in the figures. Note that all t-values are negative except for the BPRS where high scores indicate greater severity of illness. For all four outcomes, the t-values were statistically significant (p<.05).
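
The gain-score analysis reported above can be illustrated with a short sketch: compute each participant's pretest-to-posttest change and compare the average change across groups with an independent-samples t-test. The scores below are simulated stand-ins, not the study data; only the group sizes mirror those reported in the text.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulated pretest and posttest scores for one outcome (e.g., a GAS-like 1-100 rating).
    pre_se = rng.normal(50, 10, 229)
    post_se = pre_se + rng.normal(-3, 8, 229)        # Supported Employment group
    pre_ctrl = rng.normal(50, 10, 233)
    post_ctrl = pre_ctrl + rng.normal(2, 8, 233)     # control group

    gain_se = post_se - pre_se
    gain_ctrl = post_ctrl - pre_ctrl

    # Independent-samples t-test on the average gain (two-tailed).
    t_stat, p_value = stats.ttest_ind(gain_se, gain_ctrl)
    print(round(t_stat, 2), round(p_value, 4))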

Conclusions

The results of this study were clearly contrary to initial expectations. The alternative hypothesis suggested that SE participants would show improved psychological functioning and self esteem after three months of employment. Exactly the reverse happened – SE participants showed significantly worse psychological functioning and self esteem.

There are two major possible explanations for this outcome pattern. First, it seems reasonable that there might be a delayed positive or “boomerang” effect of employment outside of a sheltered setting. SE cases may have to go through an initial difficult period of adjustment (longer than three months) before positive effects become apparent. This “you have to get worse before you get better” theory is commonly held in other treatment-contexts like drug addiction and alcoholism. But a second explanation seems more plausible – that people working full-time jobs in real-world settings are almost certainly going to be under greater stress and experience more negative outcomes than those who work in the relatively safe confines of an in-agency sheltered workshop. Put more succinctly, the lesson here might very well be that work is hard. Sheltered workshops are generally very nurturing work environments where virtually all employees share similar illness histories and where expectations about productivity are relatively low. In contrast, getting a job at a local hamburger shop or as a shipping clerk puts the person in contact with co-workers who may not be sympathetic to their histories or forgiving with respect to low productivity. This second explanation seems even more plausible in the wake of informal debriefing sessions held as focus groups with the staff and selected research participants. It was clear in the discussion that SE persons experienced significantly higher job stress levels and more negative consequences. However, most of them also felt that the experience was a good one overall and that even their “normal” co-workers “hated their jobs” most of the time.

One lesson we might take from this study is that much of our contemporary theory in psychiatric rehabilitation is naive at best and, in some cases, may be seriously misleading. Theory led us to believe that outside work was a “good” thing that would naturally lead to “good” outcomes like increased psychological functioning and self esteem. But for most people (SMI or not) work is at best tolerable, especially for the types of low-paying service jobs available to study participants. While people with SMI may not function as well or have high self esteem, we should balance this with the desire they may have to “be like other people” including struggling with the vagaries of life and work that others struggle with.

Future research in this area needs to address the theoretical assumptions about employment outcomes for persons with SMI. It is especially important that attempts to replicate this study also try to measure how SE participants feel about the decision to work, even if traditional outcome indicators suffer. It may very well be that negative outcomes on traditional indicators can be associated with a “positive” impact for the participants and for the society as a whole.

References

Chadsey-Rusch, J. and Rusch, F.R. (1986). The ecology of the workplace. In J. Chadsey-Rusch, C. Haney-Maxwell, L. A. Phelps and F. R. Rusch (Eds.), School-to-Work Transition Issues and Models. (pp. 59-94), Champaign IL: Transition Institute at Illinois.

Ciardiello, J.A. (1981). Job placement success of schizophrenic clients in sheltered workshop programs. Vocational Evaluation and Work Adjustment Bulletin, 14, 125-128, 140.

Cook, J.A. (1992). Job ending among youth and adults with severe mental illness. Journal of Mental Health Administration, 19(2), 158-169.

Cook, J.A. & Hoffschmidt, S. (1993). Psychosocial rehabilitation programming: A comprehensive model for the 1990’s. In R.W. Flexer and P. Solomon (Eds.), Social and Community Support for People with Severe Mental Disabilities: Service Integration in Rehabilitation and Mental Health. Andover, MA: Andover Publishing.

Cook, J.A., Jonikas, J., & Solomon, M. (1992). Models of vocational rehabilitation for youth and adults with severe mental illness. American Rehabilitation, 18, 3, 6-32.

Cook, J.A. & Razzano, L. (1992). Natural vocational supports for persons with severe mental illness: Thresholds Supported Competitive Employment Program, in L. Stein (ed.), New Directions for Mental Health Services, San Francisco: Jossey-Bass, 56, 23-41.

Endicott, J., Spitzer, R.L., Fleiss, J.L. and Cohen, J. (1976). The Global Assessment Scale: A procedure for measuring overall severity of psychiatric disturbance. Archives of General Psychiatry, 33, 766-771.

Griffiths, R.D. (1974). Rehabilitation of chronic psychotic patients. Psychological Medicine, 4, 316-325.

Overall, J. E. and Gorham, D. R. (1962). The Brief Psychiatric Rating Scale. Psychological Reports, 10, 799-812.

Rosenberg, M. (1965). Society and Adolescent Self Image. Princeton, NJ, Princeton University Press.

Wehman, P. (1985). Supported competitive employment for persons with severe disabilities. In P. McCarthy, J. Everson, S. Monn & M. Barcus (Eds.), School-to-Work Transition for Youth with Severe Disabilities, (pp. 167-182), Richmond VA: Virginia Commonwealth University.

Whitehead, C.W. (1977). Sheltered Workshop Study: A Nationwide Report on Sheltered Workshops and their Employment of Handicapped Individuals. (Workshop Survey, Volume 1), U.S. Department of Labor Service Publication. Washington, DC: U.S. Government Printing Office.

Woest, J., Klein, M. and Atkins, B.J. (1986). An overview of supported employment strategies. Journal of Rehabilitation Administration, 10(4), 130-135.

Figure 1. Pretest and posttest means for treatment (SE) and control groups for the four outcome measures.

Appendix A

The Employment Self Esteem Scale

Please rate how strongly you agree or disagree with each of the following statements.

How to Write a Research Methodology

Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you did. It’s also the place to show that your research was rigorously conducted and can be replicated.

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate?

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research ? How did you prevent bias from affecting your data?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods.

Quantitative methods

For your findings to be considered generalizable, you should describe your quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalized your concepts and measured your variables. Discuss your sampling method or inclusion and exclusion criteria , as well as any tools, procedures, and materials you used to gather your data.

Surveys

Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments

Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data

Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on July 4–8, 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.
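
As a minimal sketch of the screening step described in this example (keeping only fully completed surveys before analysis), the snippet below drops incomplete responses. The column names and values are invented purely for illustration.

    import pandas as pd

    # Hypothetical raw survey responses; None marks an unanswered question.
    raw = pd.DataFrame({
        "q1_choice": ["A", "B", None, "C", "A"],
        "q2_likert": [5, 7, 4, None, 6],
        "q3_likert": [3, 6, 2, 5, 7],
    })

    complete = raw.dropna()   # keep only fully completed surveys
    print(len(raw), "responses received;", len(complete), "included in the analysis")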

Common sources of bias to watch for in quantitative research include:

  • Information bias
  • Omitted variable bias
  • Regression to the mean
  • Survivorship bias
  • Undercoverage bias
  • Sampling bias

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups

Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation

Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data

Explain how you selected case study materials for your analysis.

  • What type of materials did you analyze?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness store’s product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Common sources of bias to watch for in qualitative research include:

  • The Hawthorne effect
  • Observer bias
  • The placebo effect
  • Response bias and Nonresponse bias
  • The Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Self-selection bias

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods.

Step 3: Describe your analysis method

Next, you should indicate how you processed and analyzed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research, your analysis will be based on numbers. In your methods section, you can include the following (a brief illustrative sketch appears after this list):

  • How you prepared the data before analyzing it (e.g., checking for missing data , removing outliers , transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test , simple linear regression )
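
The items above can be made concrete with a short, hedged sketch in Python: prepare the data (drop missing values, screen an implausible outlier), then run a two-tailed t test and a simple linear regression. The dataset and variable names are invented; SPSS, Stata, or R would follow the same logical sequence.

    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical dataset: hours of sleep and a test score for two groups.
    df = pd.DataFrame({
        "group": ["A"] * 50 + ["B"] * 50,
        "sleep_hours": rng.normal(7, 1, 100),
        "score": rng.normal(70, 10, 100),
    })
    df.loc[3, "score"] = np.nan        # a missing value
    df.loc[10, "sleep_hours"] = 25     # an implausible outlier

    # Prepare the data: drop missing values and out-of-range values.
    df = df.dropna()
    df = df[df["sleep_hours"].between(0, 16)]

    # Two-tailed independent-samples t test comparing scores between the two groups.
    t_stat, p_value = stats.ttest_ind(df.loc[df["group"] == "A", "score"],
                                      df.loc[df["group"] == "B", "score"])

    # Simple linear regression: score predicted from hours of sleep.
    slope, intercept, r, p, se = stats.linregress(df["sleep_hours"], df["score"])

    print(round(t_stat, 2), round(p_value, 3), round(slope, 2))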

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).

Specific methods might include the following (a toy illustration of a first counting step follows the list):

  • Content analysis : Categorizing and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context
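
As an extremely simplified illustration of one early step in a content analysis (tallying words before categorizing and interpreting them), the snippet below counts word frequencies in a few invented interview excerpts. Real content, thematic, or discourse analysis involves systematic coding and interpretation, not just counting.

    from collections import Counter
    import re

    # Hypothetical interview excerpts.
    excerpts = [
        "I feel supported by my coworkers, but the schedule is stressful.",
        "The work itself is fine; the stress comes from the schedule.",
        "My coworkers are supportive and the manager listens.",
    ]

    words = []
    for text in excerpts:
        words.extend(re.findall(r"[a-z']+", text.lower()))

    stopwords = {"i", "the", "is", "but", "by", "my", "and", "from", "are", "itself", "comes"}
    counts = Counter(w for w in words if w not in stopwords)
    print(counts.most_common(5))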

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section. For example:

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviors, but they are effective for testing causal relationships between variables .
  • Qualitative: Unstructured interviews usually produce results that cannot be generalized beyond the sample group , but they provide a more in-depth understanding of participants’ perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalizable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

Tips for writing a strong methodology chapter

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives and convince the reader that you chose the best possible approach to answering your problem statement and research questions .

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

In a scientific paper, the methodology always comes after the introduction and before the results , discussion and conclusion . The same basic structure also applies to a thesis, dissertation , or research proposal .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.


Research Methods In Psychology

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

Hypotheses

Hypotheses are statements that predict the outcome of a study and can be verified or disproved by investigation.

There are four types of hypotheses:
  • Null Hypotheses (H0 ) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative Hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference ….’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the Psychologist would accept the alternative hypothesis and reject the null.  The opposite applies if no difference is found.
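
To connect these hypothesis types to an actual statistical test, the sketch below runs the same comparison as a two-tailed (non-directional) test and as a one-tailed (directional) test. The data are simulated for illustration; in practice the direction, if any, is stated before the data are collected, and the alternative parameter requires a reasonably recent version of SciPy.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Simulated memory-test scores for two conditions of an independent variable.
    condition_a = rng.normal(14, 3, 40)   # e.g., recall after spaced practice
    condition_b = rng.normal(12, 3, 40)   # e.g., recall after massed practice

    # Two-tailed (non-directional): is there any difference between the conditions?
    t_two, p_two = stats.ttest_ind(condition_a, condition_b, alternative="two-sided")

    # One-tailed (directional): are scores in condition A higher than in condition B?
    t_one, p_one = stats.ttest_ind(condition_a, condition_b, alternative="greater")

    print(round(p_two, 3), round(p_one, 3))
    # If p is below the chosen significance level (commonly .05), the null hypothesis
    # is rejected and the alternative hypothesis is retained; otherwise the null is retained.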

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.


A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which findings can be applied to the larger population from which the sample was drawn.

  • Volunteer sample: where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling: also known as convenience sampling, uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling: when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling: when a system is used to select participants, such as picking every Nth person from all possible participants, where N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling: when you identify the subgroups and select participants in proportion to their occurrence in the population.
  • Snowball sampling: when researchers find a few participants, and then ask them to find further participants themselves, and so on.
  • Quota sampling: when researchers are told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
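The sketch below illustrates three of these techniques in Python. It is an illustration added here, not part of the original text; the population of 500 students, the strata, and the sample size are all invented.

```python
# A minimal sketch (illustration only, invented population) of random, systematic,
# and stratified sampling using Python's standard library.
import random

population = [f"student_{i}" for i in range(1, 501)]   # 500 hypothetical students
sample_size = 50

# Random sampling: every member has an equal chance of selection
random_sample = random.sample(population, sample_size)

# Systematic sampling: pick every Nth person, N = population size / sample size
n = len(population) // sample_size
systematic_sample = population[::n][:sample_size]

# Stratified sampling: select from each subgroup in proportion to its occurrence
strata = {"first_year": population[:300], "second_year": population[300:]}
stratified_sample = []
for name, group in strata.items():
    k = round(sample_size * len(group) / len(population))
    stratified_sample.extend(random.sample(group, k))
```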

Experiments always have an independent and a dependent variable.

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

They can be natural characteristics of the participant, such as intelligence, gender, or age, or situational features of the environment, such as lighting or noise.

Demand characteristics are a type of extraneous variable that occurs when participants work out the aims of the research study and begin to behave in line with what they think is expected of them.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design (between-groups design): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization.
  • Matched participants design: each participant is selected for only one group, but the participants in the two groups are matched on some relevant factor or factors (e.g. ability, sex, age).
  • Repeated measures design (within-groups design): each participant appears in both groups, so that exactly the same participants are in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment; it involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
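As an illustration added here (not from the original text), the sketch below shows one way random allocation and counterbalancing might be carried out in Python; the participant IDs and group sizes are invented.

```python
# A minimal sketch (illustration only, invented participant IDs): random allocation
# to groups (independent design) and counterbalancing of condition order
# (repeated measures design).
import random

participants = [f"P{i:02d}" for i in range(1, 21)]

# Independent design: random allocation to the experimental or control group
shuffled = random.sample(participants, len(participants))
experimental_group = shuffled[:10]
control_group = shuffled[10:]

# Repeated measures design: counterbalance so each condition order is used equally often
condition_orders = {}
for i, participant in enumerate(participants):
    condition_orders[participant] = "A then B" if i % 2 == 0 else "B then A"

print(experimental_group, control_group, condition_orders, sep="\n")
```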

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

In a laboratory experiment, the researcher decides where the experiment will take place, at what time, with which participants, and in what circumstances, using a standardized procedure.

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology and among the best-known ones carried out were by Sigmund Freud . He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.


  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between -1 and +1, and the closer the value is to either extreme, the stronger the relationship between the variables. The coefficient can be positive, e.g. 0.63, or negative, e.g. -0.63.
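For illustration only (not from the original text), here is a minimal Python sketch that computes Spearman's rho and Pearson's r for two invented measures taken from the same participants.

```python
# A minimal sketch (illustration only, invented data): computing correlation
# coefficients and their p-values with SciPy.
from scipy import stats

hours_revised = [2, 5, 1, 8, 4, 7, 3, 6]
exam_score    = [45, 60, 40, 85, 58, 72, 50, 66]

rho, p_rho = stats.spearmanr(hours_revised, exam_score)   # Spearman's rho
r, p_r = stats.pearsonr(hours_revised, exam_score)        # Pearson's r

print(f"Spearman's rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Pearson's r    = {r:.2f} (p = {p_r:.3f})")
```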


A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; the participant can raise whatever topics they feel are relevant and discuss them in their own way, and the interviewer poses questions in response to the participant's answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods:
  • Covert observation is where the researcher doesn't tell the participants they are being observed until after the study is complete. There can be ethical problems, such as deception and lack of informed consent, with this particular observation method.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka "fly on the wall"): the researcher does not have direct contact with the people being observed. The observation of participants' behavior is from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted in order to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, then it is described as being reliable.

  • Test-retest reliability: assessing the same person on two different occasions, which shows the extent to which the test produces the same answers.
  • Inter-observer reliability: the extent to which there is agreement between two or more observers.
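One common way to quantify inter-observer reliability is Cohen's kappa. The sketch below is an illustration added here, with invented codings, using scikit-learn's implementation.

```python
# A minimal sketch (illustration only, invented codings): inter-observer
# reliability measured with Cohen's kappa for two observers coding the same behaviour.
from sklearn.metrics import cohen_kappa_score

observer_1 = ["aggressive", "passive", "passive", "aggressive", "passive", "passive"]
observer_2 = ["aggressive", "passive", "aggressive", "aggressive", "passive", "passive"]

kappa = cohen_kappa_score(observer_1, observer_2)
print(f"Cohen's kappa = {kappa:.2f}")   # values near 1 indicate strong agreement
```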

Meta-Analysis

Meta-analysis is a statistical procedure used to combine and synthesize findings from multiple independent studies to estimate the average effect size for a particular research question.

Meta-analysis goes beyond traditional narrative reviews by using statistical methods to integrate the results of several studies, leading to a more objective appraisal of the evidence.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

  • Strengths: increases the validity of the conclusions, as they are based on a wider range of studies and participants.
  • Weaknesses: research designs can vary between the included studies, so they are not truly comparable.
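As a rough illustration of the arithmetic behind a fixed-effect meta-analysis (not the procedure of any particular review, and using invented numbers), effect sizes can be pooled by weighting each study by the inverse of its variance:

```python
# A minimal sketch (illustration only, invented values): a fixed-effect meta-analysis
# pooling Cohen's d effect sizes weighted by the inverse of their sampling variances.
effect_sizes = [0.42, 0.31, 0.55, 0.18, 0.47]   # one d per study
variances    = [0.04, 0.09, 0.02, 0.06, 0.05]   # sampling variance of each d

weights = [1 / v for v in variances]            # more precise studies count for more
pooled_d = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size d = {pooled_d:.2f} (SE = {pooled_se:.2f})")
```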

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and designs used, originality of the findings, the validity of the original research findings and its content, structure and language.

Feedback from the reviewer determines whether the article is accepted. The article may be: Accepted as it is, accepted with revisions, sent back to the author to revise and re-submit or rejected without the possibility of submission.

The editor makes the final decision about whether to accept or reject the research report, based on the reviewers' comments and recommendations.

Peer review is important because it prevents faulty data from entering the public domain, provides a way of checking the validity of findings and the quality of the methodology, and is used to assess the research rating of university departments.

Peer reviews may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that more research and academic comment is being published without official peer review than before, though systems are evolving online where everyone has a chance to offer their opinions and police the quality of research.

Types of Data

  • Quantitative data is numerical data, e.g. reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity: whether the test appears, ‘on the face of it’, to measure what it is supposed to measure. This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In psychology, we conventionally use p < 0.05 (as it strikes a balance between the risks of Type I and Type II errors), but p < 0.01 is used in research where an error could cause harm, such as when introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
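The trade-off between the two error types can be seen in a small simulation. The sketch below is an illustration added here: the sample size, number of runs, and true effect size (d = 0.5) are assumptions, not values from any study.

```python
# A minimal sketch (illustration only, invented parameters): estimating Type I and
# Type II error rates for two significance levels by simulating many t-tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, runs = 30, 2000

for alpha in (0.05, 0.01):
    type_1 = type_2 = 0
    for _ in range(runs):
        # Null hypothesis true: both groups drawn from the same distribution
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            type_1 += 1          # false positive
        # Null hypothesis false: real difference of half a standard deviation
        a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
        if stats.ttest_ind(a, b).pvalue >= alpha:
            type_2 += 1          # missed effect
    print(f"alpha = {alpha}: Type I rate ~ {type_1/runs:.2f}, Type II rate ~ {type_2/runs:.2f}")
```

A stricter (smaller) alpha lowers the Type I rate but raises the Type II rate, which is the balance described above.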

Ethical Issues

  • Informed consent is when participants are able to make an informed judgment about whether to take part. However, giving participants full information may cause them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent or ask participants to formally indicate their agreement to participate, but this may invalidate the purpose of the study, and it is not guaranteed that participants will fully understand what they are agreeing to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • Withdrawal can bias the sample, as the participants who stay may be more obedient, and some may not withdraw because they were given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not be achievable, as it is sometimes possible to work out who the participants were.


9 Best Marketing Research Methods to Know Your Buyer Better [+ Examples]

Ramona Sukhraj

Published: August 08, 2024

One of the most underrated skills you can have as a marketer is marketing research — which is great news for this unapologetic cyber sleuth.


From brand design and product development to buyer personas and competitive analysis, I’ve researched a number of initiatives in my decade-long marketing career.

And let me tell you: having the right marketing research methods in your toolbox is a must.

Market research is the secret to crafting a strategy that will truly help you accomplish your goals. The good news is there is no shortage of options.

How to Choose a Marketing Research Method

Thanks to the Internet, we have more marketing research (or market research) methods at our fingertips than ever, but they’re not all created equal. Let’s quickly go over how to choose the right one.


1. Identify your objective.

What are you researching? Do you need to understand your audience better? How about your competition? Or maybe you want to know more about your customer’s feelings about a specific product.

Before starting your research, take some time to identify precisely what you’re looking for. This could be a goal you want to reach, a problem you need to solve, or a question you need to answer.

For example, an objective may be as foundational as understanding your ideal customer better to create new buyer personas for your marketing agency (pause for flashbacks to my former life).

Or if you’re an organic soda company, it could be trying to learn what flavors people are craving.

2. Determine what type of data and research you need.

Next, determine what data type will best answer the problems or questions you identified. There are primarily two types: qualitative and quantitative. (Sounds familiar, right?)

  • Qualitative Data is non-numerical information, like subjective characteristics, opinions, and feelings. It’s pretty open to interpretation and descriptive, but it’s also harder to measure. This type of data can be collected through interviews, observations, and open-ended questions.
  • Quantitative Data , on the other hand, is numerical information, such as quantities, sizes, amounts, or percentages. It’s measurable and usually pretty hard to argue with, coming from a reputable source. It can be derived through surveys, experiments, or statistical analysis.

Understanding the differences between qualitative and quantitative data will help you pinpoint which research methods will yield the desired results.

For instance, thinking of our earlier examples, qualitative data would usually be best suited for buyer personas, while quantitative data is more useful for the soda flavors.

However, truth be told, the two really work together.

Qualitative conclusions are usually drawn from quantitative, numerical data. So, you’ll likely need both to get the complete picture of your subject.

For example, if your quantitative data says 70% of people are Team Black and only 30% are Team Green — Shout out to my fellow House of the Dragon fans — your qualitative data will say people support Black more than Green.

(As they should.)

Primary Research vs Secondary Research

You’ll also want to understand the difference between primary and secondary research.

Primary research involves collecting new, original data directly from the source (say, your target market). In other words, it’s information gathered first-hand that wasn’t found elsewhere.

Some examples include conducting experiments, surveys, interviews, observations, or focus groups.

Meanwhile, secondary research is the analysis and interpretation of existing data collected from others. Think of this like what we used to do for school projects: We would read a book, scour the internet, or pull insights from others to work from.

So, which is better?

Personally, I say any research is good research, but if you have the time and resources, primary research is hard to top. With it, you don’t have to worry about your source's credibility or how relevant it is to your specific objective.

You are in full control and best equipped to get the reliable information you need.

3. Put it all together.

Once you know your objective and what kind of data you want, you’re ready to select your marketing research method.

For instance, let’s say you’re a restaurant trying to see how attendees felt about the Speed Dating event you hosted last week.

You shouldn’t run a field experiment or download a third-party report on speed dating events; those would be useless to you. You need to conduct a survey that allows you to ask pointed questions about the event.

This would yield both qualitative and quantitative data you can use to improve and bring together more love birds next time around.

Best Market Research Methods for 2024

Now that you know what you’re looking for in a marketing research method, let’s dive into the best options.

Note: According to HubSpot’s 2024 State of Marketing report, understanding customers and their needs is one of the biggest challenges facing marketers today. The options we discuss are great consumer research methodologies , but they can also be used for other areas.

Primary Research

1. Interviews

Interviews are a form of primary research where you ask people specific questions about a topic or theme. They typically deliver qualitative information.

I’ve conducted many interviews for marketing purposes, but I’ve also done many for journalistic purposes, like this profile on comedian Zarna Garg . There’s no better way to gather candid, open-ended insights in my book, but that doesn’t mean they’re a cure-all.

What I like: Real-time conversations allow you to ask different questions if you’re not getting the information you need. They also push interviewees to respond quickly, which can result in more authentic answers.

What I dislike: They can be time-consuming and harder to measure (read: get quantitative data) unless you ask pointed yes or no questions.

Best for: Creating buyer personas or getting feedback on customer experience, a product, or content.

2. Focus Groups

Focus groups are similar to conducting interviews but on a larger scale.

In marketing and business, this typically means getting a small group together in a room (or on Zoom) and asking them questions about various topics you are researching. You record and/or observe their responses and then take action.

They are ideal for collecting long-form, open-ended feedback, and subjective opinions.

One well-known focus group you may remember was run by Domino’s Pizza in 2009 .

After poor ratings and dropping over $100 million in revenue, the brand conducted focus groups with real customers to learn where they could have done better.

It was met with comments like “worst excuse for pizza I’ve ever had” and “the crust tastes like cardboard.” But rather than running from the tough love, it took the hit and completely overhauled its recipes.

The team admitted their missteps and returned to the market with better food and a campaign detailing their “Pizza Turn Around.”

The result? The brand won a ton of praise for its willingness to take feedback, efforts to do right by its consumers, and clever campaign. But, most importantly, revenue for Domino’s rose by 14.3% over the previous year.

The brand continues to conduct focus groups and share real footage from them in its promotions.

What I like: Similar to interviewing, you can dig deeper and pivot as needed due to the real-time nature. They’re personal and detailed.

What I dislike: Once again, they can be time-consuming and make it difficult to get quantitative data. There is also a chance some participants may overshadow others.

Best for: Product research or development

Pro tip: Need help planning your focus group? Our free Market Research Kit includes a handy template to start organizing your thoughts in addition to a SWOT Analysis Template, Survey Template, Focus Group Template, Presentation Template, Five Forces Industry Analysis Template, and an instructional guide for all of them. Download yours here now.

3. Surveys or Polls

Surveys are a form of primary research where individuals are asked a collection of questions. It can take many different forms.

They could be conducted in person, over the phone or video call, by email, via an online form, or even on social media. Questions can also be open-ended or closed to deliver qualitative or quantitative information.

A great example of a closed-ended survey is HubSpot’s annual State of Marketing.

In the State of Marketing, HubSpot asks marketing professionals from around the world a series of multiple-choice questions to gather data on the state of the marketing industry and to identify trends.

The survey covers various topics related to marketing strategies, tactics, tools, and challenges that marketers face. It aims to provide benchmarks to help you make informed decisions about your marketing.

It also helps us understand where our customers’ heads are so we can better evolve our products to meet their needs.

Apple is no stranger to surveys, either.

In 2011, the tech giant launched Apple Customer Pulse , which it described as “an online community of Apple product users who provide input on a variety of subjects and issues concerning Apple.”


"For example, we did a large voluntary survey of email subscribers and top readers a few years back."

While these readers gave us a long list of topics, formats, or content types they wanted to see, they sometimes engaged more with content types they didn’t select or favor as much on the surveys when we ran follow-up ‘in the wild’ tests, like A/B testing.”  

Pepsi saw similar results when it ran its iconic field experiment, “The Pepsi Challenge” for the first time in 1975.

The beverage brand set up tables at malls, beaches, and other public locations and ran a blind taste test. Shoppers were given two cups of soda, one containing Pepsi, the other Coca-Cola (Pepsi’s biggest competitor). They were then asked to taste both and report which they preferred.

People overwhelmingly preferred Pepsi, and the brand has repeated the experiment multiple times over the years to the same results.

What I like: It yields qualitative and quantitative data and can make for engaging marketing content, especially in the digital age.

What I dislike: It can be very time-consuming. And, if you’re not careful, there is a high risk for scientific error.

Best for: Product testing and competitive analysis

Pro tip: "Don’t make critical business decisions off of just one data set," advises Pamela Bump. "Use the survey, competitive intelligence, external data, or even a focus group to give you one layer of ideas or a short-list for improvements or solutions to test. Then gather your own fresh data to test in an experiment or trial and better refine your data-backed strategy."

Secondary Research

8. Public Domain or Third-Party Research

While original data is always a plus, there are plenty of external resources you can access online and even at a library when you’re limited on time or resources.

Some reputable resources you can use include:

  • Pew Research Center
  • McKinsey Global Institute
  • Relevant global or government organizations (e.g., the United Nations or NASA)

It’s also smart to turn to reputable organizations that are specific to your industry or field. For instance, if you’re a gardening or landscaping company, you may want to pull statistics from the Environmental Protection Agency (EPA).

If you’re a digital marketing agency, you could look to Google Research or HubSpot Research . (Hey, I know them!)

What I like: You can save time on gathering data and spend more time on analyzing. You can also rest assured the data is from a source you trust.

What I dislike: You may not find data specific to your needs.

Best for: Companies under a time or resource crunch, adding factual support to content

Pro tip: Fellow HubSpotter Iskiev suggests using third-party data to inspire your original research. “Sometimes, I use public third-party data for ideas and inspiration. Once I have written my survey and gotten all my ideas out, I read similar reports from other sources and usually end up with useful additions for my own research.”

9. Buy Research

If the data you need isn’t available publicly and you can’t do your own market research, you can also buy some. There are many reputable analytics companies that offer subscriptions to access their data. Statista is one of my favorites, but there’s also Euromonitor , Mintel , and BCC Research .

What I like: Same as public domain research

What I dislike: You may not find data specific to your needs. It also adds to your expenses.

Best for: Companies under a time or resource crunch or adding factual support to content

Which marketing research method should you use?

You’re not going to like my answer, but “it depends.” The best marketing research method for you will depend on your objective and data needs, but also your budget and timeline.

My advice? Aim for a mix of quantitative and qualitative data. If you can do your own original research, awesome. But if not, don’t beat yourself up. Lean into free or low-cost tools . You could do primary research for qualitative data, then tap public sources for quantitative data. Or perhaps the reverse is best for you.

Whatever your marketing research method mix, take the time to think it through and ensure you’re left with information that will truly help you achieve your goals.


American Psychological Association

Sample Figures

These sample figures illustrate how to set up figures in APA Style. Note that any kind of visual display that is not a table is considered a figure.

There are many ways to make a figure, and the samples shown on this page represent only some of the possibilities. The samples show the following options:

  • The sample bar graph and the sample line graph show how to use color in combination with pattern and shape to make an attractive and accessible figure.
  • The sample line graph shows how to include a copyright attribution in a figure note when you have reprinted or adapted a copyrighted figure from a scholarly work such as a journal article (the format of the copyright attribution will vary depending on the source of the figure).
  • The CONSORT flowchart demonstrates how to describe the flow of participants through a study. Further information and a template for the flowchart are available on the CONSORT website.
  • The sample map shows how to include a copyright attribution in a figure note when you have reprinted or adapted a figure from a work in the public domain (in the example, U.S. Census Bureau data).


These sample figures are also available as a downloadable Word file (DOCX, 37KB) . For more sample figures, see the Publication Manual as well as published articles in your field.

Sample figures are covered in the seventh edition APA Style manuals: the Publication Manual, Section 7.36, and the Concise Guide, Section 7.32.



Framing Scores for Different Reward Sizes

Sample bar graph showing framing scores for three levels of reward and four different age groups.

Note. Framing scores of adolescents and young adults are shown for low and high risks and for small, medium, and large rewards (error bars show standard errors).
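As a rough illustration of how a grouped bar graph with error bars might be produced (this is not the APA sample itself, and all numbers, labels, and groupings below are invented and simplified), the figure could be drawn in Python with Matplotlib:

```python
# A minimal sketch (illustration only, invented data): a grouped bar graph with
# error bars, loosely in the spirit of a framing-scores figure.
import numpy as np
import matplotlib.pyplot as plt

rewards = ["Small", "Medium", "Large"]
adolescents = [0.35, 0.48, 0.60]        # invented framing scores
young_adults = [0.30, 0.42, 0.52]
errors = [0.05, 0.04, 0.06]             # invented standard errors

x = np.arange(len(rewards))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, adolescents, width, yerr=errors, capsize=4, label="Adolescents")
ax.bar(x + width / 2, young_adults, width, yerr=errors, capsize=4, label="Young adults")
ax.set_xticks(x)
ax.set_xticklabels(rewards)
ax.set_xlabel("Reward size")
ax.set_ylabel("Framing score")
ax.legend(frameon=False)
plt.savefig("framing_scores.png", dpi=300)
```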

Mean Regression Slopes in Experiment 1

Sample line graph showing regression slopes for four conditions in the experiment.

Note. Mean regression slopes in Experiment 1 are shown for the stereo motion, biocularly viewed monocular motion, combined, and monocularly viewed monocular motion conditions, plotted by rotation amount. Error bars represent standard errors. From “Large Continuous Perspective Change With Noncoplanar Points Enables Accurate Slant Perception,” by X. M. Wang, M. Lind, and G. P. Bingham, 2018, Journal of Experimental Psychology: Human Perception and Performance, 44(10), p. 1513 (https://doi.org/10.1037/xhp0000553). Copyright 2018 by the American Psychological Association.

CONSORT Flowchart of Participants

Sample CONSORT flowchart describing flow of participants through a study.

Path Analysis Model of Associations Between ASMC and Body-Related Constructs

Sample path model of how appearance-related social media consciousness and time spent on social media are related to body esteem, body comparison, and body surveillance.

Note. The path analysis shows associations between ASMC and endogenous body-related variables (body esteem, body comparison, and body surveillance), controlling for time spent on social media. Coefficients presented are standardized linear regression coefficients. *** p < .001.

Organizational Framework for Racial Microaggressions in the Workplace

Sample flowchart describing racial microaggressions in the workplace, including examples of racial microaggressions, processes, and coping strategies.

A Multistage Paradigm for Integrative Mixed Methods Research

Sample diagram showing the six stages of qualitative textual evidence and the corresponding six stages of quantitative numeric evidence.

Examples of Stimuli Used in Experiment 1

Two computer-generated cartoon bees, one with two legs, a striped body, single wings, and antennae, and the other with six legs, a spotted body, double wings, and no antennae.

Note. Stimuli were computer-generated cartoon bees that varied on four binary dimensions, for a total of 16 unique stimuli. They had two or six legs, a striped or spotted body, single or double wings, and antennae or no antennae. The two stimuli shown here demonstrate the use of opposite values on all four binary dimensions.

Poverty Rate in the United States, 2017

Map of the United States, with color gradients indicating percentage of people living in poverty.

Note. The map does not include data for Puerto Rico. Adapted from 2017 Poverty Rate in the United States , by U.S. Census Bureau, 2017 ( https://www.census.gov/library/visualizations/2018/comm/acs-poverty-map.html ). In the public domain.


Water Research on Per- and Polyfluoroalkyl Substances (PFAS)

Per- and Polyfluoroalkyl Substances (PFAS) are a complex class of chemicals that historically have been used in industry and consumer products and continue to be widely used today. PFAS can be persistent in the environment and the human body. PFAS contamination of water is a significant issue in the United States and a high priority for the EPA. 

EPA research on PFAS in water sources focuses on developing tools for evaluating and managing risks from PFAS, such as the development of analytical methods for measuring occurrence; drinking water and wastewater treatment approaches for removal; residual stream treatment and management; and identifying and characterizing  PFAS sources to design treatment management approaches.

EPA has a range of research on PFAS that is not limited to water resources. For more information about EPA PFAS research, please visit Research on Per- and Polyfluoroalkyl Substances (PFAS) or EPA’s primary PFAS webpage .

Analytical Methods for PFAS in Drinking Water, Wastewater, and Environmental Samples

EPA’s research includes the development of analytical methods for measuring large groups of PFAS in water and water-related samples. Total organic fluorine (TOF) and total oxidizable precursors (TOP) are analytical methods that help researchers determine whether a PFAS might be present by looking at certain components and precursors of PFAS. Traditional targeted methods require researchers to know what chemicals they are looking for, which can be time consuming and labor intensive. Researchers are building on existing analytical methods, as well as developing non-targeted methods that allow researchers to analyze and characterize thousands of unknown and new PFAS in a sample.

Read the Science Matters Story About Non-Targeted Analysis

Today, researchers can rapidly characterize thousands of never-studied chemical compounds in a wide variety of environmental, residential, and biological media. This approach is called non-targeted analysis. Read about how researchers from EPA and North Carolina State University used the non-targeted approach to understand the impact of industrial discharges in one waterbody.

Related Research

  • PFAS Analytical Methods Development and Sampling Research

Treatment Technologies and Processes for Removing PFAS from Community Drinking Water

EPA’s research on drinking water treatment technologies for PFAS focuses on collecting existing treatment data from the literature and combining it with results from additional laboratory studies on PFAS removal. These studies use a wide variety of technologies, including granular activated carbon, ion exchange, and membranes. EPA researchers evaluate, model, and optimize treatment technologies, including applications for treatment at the water treatment plant prior to the point of entry to the drinking water distribution system and at the point of end use.

Read the Science Matters Story About Point-of-Use Filters

To help homeowners make an informed decision when trying to reduce PFAS in their drinking water, EPA researchers conducted studies on several off-the-shelf, commercially available technologies, including granular activated carbon (GAC), reverse osmosis (RO), and ion exchange treatment systems to determine if they were capable of decreasing PFAS levels in drinking water. Read about their research results and how different filtration systems fared.

EPA’s research is conducted at multiple scales, including benchtop studies in the laboratory, pilot-scale studies both in the laboratory and in the field, and full-scale treatment studies at municipal drinking water treatment plants. EPA research covers a range of drinking water system sizes, from systems serving large cities with populations of more than 10,000 people to very small communities with populations of fewer than 100 people that may use point-of-use or point-of-entry systems. Research focuses on issues that impact systems with technical, financial, and managerial capability limitations.

  • Treatment and Control for Contaminants Research
  • Technical Support for Water Infrastructure

PFAS Sources and Occurrence in Water

Possible PFAS sources in water can include industrial wastewater, landfill leachate, washing facilities, or hospital wastewater. Knowing the sources of PFAS in water resources is critical for developing cost effective approaches to managing PFAS contamination. EPA’s research focuses on building and improving models to predict the fate and transport of PFAS in water, assess exposure pathways and risks, and identify and characterize PFAS sources and concentrations. Researchers apply these models and analyses to reduce possible exposures to PFAS and improve treatment technologies.

  • Wastewater Research
  • Wastewater Contaminants Research
  • Stormwater Management Research
  • Alternative Water Sources Research

Disposal and Destruction of PFAS

PFAS are difficult to treat in water, and removing them requires effective and economical solutions. This research focuses on the development and advancement of cost-effective, high-efficiency processes to remove PFAS from wastewater, stormwater, industrial and process wastes, leachates, biosolids, and residual streams.

  • Read the Science Matters story: EPA Researchers Explore Technology to Destroy PFAS

Models, Methods, and Tools

  • EPA Science Models and Research Tools (SMaRT) Search
  • Drinking Water Treatability Database (TDB)
  • Environmental Technologies Design Option Tool (ETDOT)
  • Water Treatment Models
  • Reviewing PFAS Analytical Methods Data for Environmental Samples (Technical Brief)
  • PFAS methods and guidance for sampling and analyzing water and other environmental media (Technical Brief)

Related Resources

  • Working List of PFAS Chemicals with Research Interest and Ongoing Work by EPA
  • Small Drinking Water Systems Research Webinar: PFAS Drinking Water Regulation and Treatment Methods Webinar (Presented on April 30, 2024)
  • Small Drinking Water Systems Research Webinar: Per- and Polyfluoroalkyl Substances (PFAS) Webinar (Presented on August 28, 2018)

Learn More About EPA Research

Training Opportunities

  • Annual Drinking Water Workshop

Webinar Invites

  • Small Drinking Water Systems Webinar Series
  • Water Research Webinar Series
  • Computational Toxicology Communities of Practice Webinar Series

PFAS Research Grants and Funded Projects

  • Water Research Home
  • Watersheds Research
  • Nutrients and Harmful Algal Blooms Research
  • Water Treatment and Infrastructure Research
  • Water Research Grants
  • Research Outputs
  • Training, Outreach, and Engagement

A Blood Test Accurately Diagnosed Alzheimer’s 90% of the Time, Study Finds

It was much more accurate than primary care doctors using cognitive tests and CT scans. The findings could speed the quest for an affordable and accessible way to diagnose patients with memory problems.

A microscopic image in green and orange showing a nerve cell of a person’s brain, with the cytoplasm in orange and the protein tau tangled in a green swirl.

By Pam Belluck

Scientists have made another major stride toward the long-sought goal of diagnosing Alzheimer’s disease with a simple blood test . On Sunday, a team of researchers reported that a blood test was significantly more accurate than doctors’ interpretation of cognitive tests and CT scans in signaling the condition.

The study , published Sunday in the journal JAMA, found that about 90 percent of the time the blood test correctly identified whether patients with memory problems had Alzheimer’s. Dementia specialists using standard methods that did not include expensive PET scans or invasive spinal taps were accurate 73 percent of the time, while primary care doctors using those methods got it right only 61 percent of the time.

“Not too long ago measuring pathology in the brain of a living human was considered just impossible,” said Dr. Jason Karlawish, a co-director of the Penn Memory Center at the University of Pennsylvania who was not involved in the research. “This study adds to the revolution that has occurred in our ability to measure what’s going on in the brain of living humans.”

The results, presented Sunday at the Alzheimer’s Association International Conference in Philadelphia, are the latest milestone in the search for affordable and accessible ways to diagnose Alzheimer’s, a disease that afflicts nearly seven million Americans and over 32 million people worldwide. Medical experts say the findings bring the field closer to a day when people might receive routine blood tests for cognitive impairment as part of primary care checkups, similar to the way they receive cholesterol tests.

“Now, we screen people with mammograms and PSA or prostate exams and other things to look for very early signs of cancer,” said Dr. Adam Boxer, a neurologist at the University of California, San Francisco, who was not involved in the study. “And I think we’re going to be doing the same thing for Alzheimer’s disease and hopefully other forms of neurodegeneration.”

In recent years, several blood tests have been developed for Alzheimer’s. They are currently used mostly to screen participants in clinical trials and by some specialists like Dr. Boxer to help pinpoint if a patient’s dementia is caused by Alzheimer’s or another condition.



Archaeology in space: The Sampling Quadrangle Assemblages Research Experiment (SQuARE) on the International Space Station. Report 1: Squares 03 and 05

Justin St. P. Walsh, Shawn Graham, Alice C. Gorman, Chantal Brousseau, and Salma Abdullah

Affiliations: Chapman University; University of Southern California; Carleton University; Flinders University

Published: August 7, 2024 • https://doi.org/10.1371/journal.pone.0304229

Between January and March 2022, crew aboard the International Space Station (ISS) performed the first archaeological fieldwork in space, the Sampling Quadrangle Assemblages Research Experiment (SQuARE). The experiment aimed to: (1) develop a new understanding of how humans adapt to life in an environmental context for which we are not evolutionarily adapted, using evidence from the observation of material culture; (2) identify disjunctions between planned and actual usage of facilities on a space station; (3) develop and test techniques that enable archaeological research at a distance; and (4) demonstrate the relevance of social science methods and perspectives for improving life in space. In this article, we describe our methodology, which involves a creative re-imagining of a long-standing sampling practice for the characterization of a site, the shovel test pit. The ISS crew marked out six sample locations (“squares”) around the ISS and documented them through daily photography over a 60-day period. Here we present the results from two of the six squares: an equipment maintenance area, and an area near exercise equipment and the latrine. Using the photographs and an innovative webtool, we identified 5,438 instances of items, labeling them by type and function. We then performed chronological analyses to determine how the documented areas were actually used. Our results show differences between intended and actual use, with storage the most common function of the maintenance area, and personal hygiene activities most common in an undesignated area near locations for exercise and waste.

Citation: Walsh JSP, Graham S, Gorman AC, Brousseau C, Abdullah S (2024) Archaeology in space: The Sampling Quadrangle Assemblages Research Experiment (SQuARE) on the International Space Station. Report 1: Squares 03 and 05. PLoS ONE 19(8): e0304229. https://doi.org/10.1371/journal.pone.0304229

Editor: Peter F. Biehl, University of California Santa Cruz, UNITED STATES OF AMERICA

Received: March 9, 2024; Accepted: May 7, 2024; Published: August 7, 2024

Copyright: © 2024 Walsh et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and its Supporting Information files.

Funding: JW was the recipient of funding from Chapman University’s Office of Research and Sponsored Programs to support the activities of Axiom Space as implementation partner for the research presented in this article. There are no associated grant numbers for this financial support. Axiom Space served in the role of a contractor hired by Chapman University for the purpose of overseeing logistics relating to our research. In-kind support in the form of ISS crew time and access to the space station’s facilities, also awarded to JW from the ISS National Laboratory, resulted from an unsolicited proposal, and therefore there is no opportunity title or number associated with our work. No salary was received by any of the investigators as a result of the grant support. No additional external funding was received for this study.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The International Space Station Archaeological Project (ISSAP) aims to fill a gap in social science investigation into the human experience of long-duration spaceflight [ 1 – 3 ]. As the largest, most intensively inhabited space station to date, with over 270 visitors from 23 countries during more than 23 years of continuous habitation, the International Space Station (ISS) is the ideal example of a new kind of spacefaring community—“a microsociety in a miniworld” [ 4 ]. While it is possible to interview crew members about their experiences, the value of an approach focused on material culture is that it allows identification of longer-term patterns of behaviors and associations that interlocutors are unable or even unwilling to articulate. In this respect, we are inspired by previous examples of contemporary archaeology such as the Tucson Garbage Project and the Undocumented Migration Project [ 5 – 7 ]. We also follow previous discussions of material culture in space contexts that highlight the social and cultural features of space technology [ 8 , 9 ].

Our primary goal is to identify how humans adapt to life in a new environment for which our species has not evolved, one characterized by isolation, confinement, and especially microgravity. Microgravity introduces opportunities, such as the ability to move and work in 360 degrees, and to carry out experiments impossible in full Earth gravity, but also limitations, as unrestrained objects float away. The most routine activities carried out on Earth become the focus of intense planning and technological intervention in microgravity. By extension, our project also seeks to develop archaeological techniques that permit the study of other habitats in remote, extreme, or dangerous environments [ 10 , 11 ]. Since it is too costly and difficult to visit our archaeological site in person, we have to creatively re-imagine traditional archaeological methods to answer key questions. To date, our team has studied crew-created visual displays [ 12 , 13 ], meanings and processes associated with items returned to Earth [ 14 ], distribution of different population groups around the various modules [ 15 ], and the development of machine learning (ML) computational techniques to extract data about people and places, all from historic photographs of life on the ISS [ 16 ].

From January to March 2022, we developed a new dataset through the first archaeological work conducted off-Earth. We documented material culture in six locations around the ISS habitat, using daily photography taken by the crew which we then annotated and studied as evidence for changes in archaeological assemblages of material culture over time. This was the first time such data had been captured in a way that allowed statistical analysis. Here, we present the data and results from Squares 03 and 05, the first two sample locations to be completed.

Materials and methods

Square concept and planning.

Gorman proposed the concept behind the investigation, deriving it from one of the most traditional terrestrial archaeological techniques, the shovel test pit. This method is used to understand the overall characteristics of a site quickly through sampling. A site is mapped with a grid of one-meter squares. Some of the squares are selected for initial excavation to understand the likely spatial and chronological distribution of features across the entire site. In effect, the technique is a way to sample a known percentage of the entire site systematically. In the ISS application of this method, we documented a notional stratigraphy through daily photography, rather than excavation.
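As an illustration of the sampling logic (our own sketch, using a hypothetical grid size and sampling interval rather than values from any actual excavation), selecting every nth square of a gridded site yields a systematic sample covering a known percentage of the site:

```python
import itertools

# Hypothetical 10 x 10 m site divided into one-meter squares.
GRID_WIDTH, GRID_HEIGHT = 10, 10
SAMPLING_INTERVAL = 5  # document every 5th square: a known 20% sample

squares = list(itertools.product(range(GRID_WIDTH), range(GRID_HEIGHT)))
test_pits = squares[::SAMPLING_INTERVAL]  # systematic (not random) selection
fraction = len(test_pits) / len(squares)
print(f"Sampling {len(test_pits)} of {len(squares)} squares ({fraction:.0%})")
```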

Historic photography is a key dataset for the International Space Station Archaeological Project. Tens of thousands of images have been made available to us, either through publication [ 17 ], or through an arrangement with the ISS Research Integration Office, which supplied previously unpublished images from the first eight years of the station’s habitation. These photographs are informative about the relationships between people, places, and objects over time in the ISS. However, they were taken randomly (from an archaeological perspective) and released only according to NASA’s priorities and rules. Most significantly, they were not made with the purpose of answering archaeological questions. By contrast, the photographs taken during the present investigation were systematic, representative of a defined proportion of the habitat’s area, and targeted towards capturing archaeology’s primary evidence: material culture. We were interested in how objects move around individual spaces and the station, what these movements revealed about crew adherence to terrestrial planning, and the creative use of material culture to make the laboratory-like interior of the ISS more habitable.

Access to the field site was gained through approval of a proposal submitted to the Center for the Advancement of Science in Space (also known as the ISS National Laboratory [ISS NL]). Upon acceptance, Axiom Space was assigned as the Implementation Partner for carriage of the experiment according to standard procedure. No other permits were required for this work.

Experiment design

Since our work envisioned one-meter sample squares, and recognizing the use of acronyms as a persistent element of spacefaring culture, we named our payload the Sampling Quadrangle Assemblages Research Experiment (SQuARE). Permission from the ISS NL to conduct SQuARE was contingent on using equipment that was already on board the space station. SQuARE required only five items: a camera, a wide-angle lens, adhesive tape (for marking the boundaries of the sample locations), a ruler (for scale), and a color calibration card (for post-processing of the images). All of these were already present on the ISS.

Walsh performed tests on the walls of a terrestrial art gallery to assess the feasibility of creating perfect one-meter squares in microgravity. He worked on a vertical surface, using the Pythagorean theorem to determine where the corners should be located. The only additional items used for these tests were two metric measuring tapes and a pencil for marking the wall (these were also already on the ISS). While it was possible to make a square this way, it also became clear that at least two people were needed to manage holding the tape measures in position while marking the points for the corners. This was not possible in the ISS context.
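For illustration, the geometric check underlying such a layout (our gloss, not a statement of the published procedure) is that a quadrilateral whose four sides each measure 1 m is a true square only if its diagonals both measure

```latex
d = \sqrt{1^2 + 1^2}\,\text{m} = \sqrt{2}\,\text{m} \approx 1.414\,\text{m},
```

so fixing the corner points requires checking diagonal distances as well as side lengths.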

Walsh and Gorman identified seven locations for the placement of squares. Five of these were in the US Orbital Segment (USOS, consisting of American, European, and Japanese modules) and two in the Russian Orbital Segment. Unfortunately, tense relations between the US and Russian governments meant we could only document areas in the USOS. The five locations were (with their SQuARE designations):

  • 01—an experimental rack on the forward wall, starboard end, of the Japanese Experiment Module
  • 02—an experimental rack on the forward wall, port end, of the European laboratory module Columbus
  • 03—the starboard Maintenance Work Area (workstation) in the US Node 2 module
  • 04—the wall area “above” (according to typical crew body orientation) the galley table in the US Node 1 module
  • 05—the aft wall, center location, of the US Node 3 module

Our square selection encompassed different modules and activities, including work and leisure. We also asked the crew to select a sixth sample location based on their understanding of the experiment and what they thought would be interesting to document. They chose a workstation on the port wall of the US laboratory module, at the aft end, which they described in a debriefing following their return to Earth in June 2022 as “our central command post, like our shared office situation in the lab.” Results from the four squares not included here will appear in future publications.

Walsh worked with NASA staff to determine payload procedures, including precise locations for the placement of the tape that would mark the square boundaries. The squares could not obstruct other facilities or experiments, so (unlike in terrestrial excavations, where string is typically used to demarcate trench boundaries) only the corners of each square were marked, not the entire perimeter. We used Kapton tape due to its bright yellow-orange color, which aided visibility for the crew taking photographs and for us when cropping the images. In practice, due to space constraints, the procedures that could actually be performed by crew in the ISS context, and the need to avoid interfering with other ongoing experiments, none of the locations actually measured one square meter or had precise 90° corners like a trench on Earth.

On January 14, 2022, NASA astronaut Kayla Barron set up the sample locations, marking the beginning of archaeological work in space ( S1 Movie ). For 30 days, starting on January 21, a crew member took photos of the sample locations at approximately the same time each day; the process was repeated at a random time each day for a second 30-day period to eliminate biases. Photography ended on March 21, 2022. The crew were instructed not to move any items prior to taking the photographs. Walsh led image management, including color and barrel distortion correction, fixing the alignment of each image, and cropping them to the boundaries of the taped corners.

Data processing—Item tagging, statistics, visualizations

We refer to each day’s photo as a “context” by analogy with chronologically-linked assemblages of artifacts and installations at terrestrial archaeological sites ( S1 and S2 Datasets). As previously noted, each context represented a moment roughly 24 hours distant from the previous one, showing evidence of changes in that time. ISS mission planners attempted to schedule the activity at the same time in the first month, but there were inevitable changes due to contingencies. Remarkably, the average time between contexts in Phase 1 was an almost-perfect 24h 0m 13s. Most of the Phase 1 photos were taken between 1200 and 1300 GMT (the time zone in which life on the ISS is organized). In Phase 2, the times were much more variable, but the average time between contexts during this period was still 23h 31m 45s. The earliest Phase 2 photo was taken at 0815 GMT, and the latest at 2101. We did not identify any meaningful differences between results from the two phases.
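As a small illustration of how such figures can be derived (a sketch with made-up timestamps, not the project's actual processing code), the mean interval is the average of the differences between consecutive photo times:

```python
from datetime import datetime, timedelta

# Hypothetical context timestamps (GMT); the real values come from the
# metadata of the daily context photographs.
timestamps = [
    datetime(2022, 1, 21, 12, 4),
    datetime(2022, 1, 22, 12, 9),
    datetime(2022, 1, 23, 12, 1),
]

# Mean interval between consecutive context photos.
deltas = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
mean_interval = sum(deltas, timedelta(0)) / len(deltas)
print(mean_interval)  # 23:58:30 for these made-up values
```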

Since the “test pits” were formed of images rather than soil matrices, we needed a tool to capture information about the identity, nature, and location of every object. An open-source image annotator platform [ 18 ] mostly suited our needs. Brousseau rebuilt the platform to work within the constraints of our access to the imagery (turning it into a desktop tool with secure access to our private server), to permit a greater range of metadata to be added to each item or be imported, to autosave, and to export the resulting annotations. The tool also had to respect privacy and security limitations required by NASA.

The platform Brousseau developed and iterated was rechristened “Rocket-Anno” ( S1 File ). For each context photograph, the user draws an outline around every object, creating a polygon; each polygon is assigned a unique ID and the user provides the relevant descriptive information, using a controlled vocabulary developed for ISS material culture by Walsh and Gorman. Walsh and Abdullah used Rocket-Anno to tag the items in each context for Squares 03 and 05. Once all the objects were outlined for every context’s photograph, the tool exported a JSON file with all of the metadata for both the images themselves and all of the annotations, including the coordinate points for every polygon ( S3 Dataset ). We then developed Python code using Jupyter “notebooks” (an interactive development environment) that ingests the JSON file and generates dataframes for various facets of the data. Graham created a “core” notebook that exports summary statistics, calculates Brainerd-Robinson coefficients of similarity, and visualizes the changing use of the square over time by indicating use-areas based on artifact types and subtypes ( S2 File ). Walsh and Abdullah also wrote detailed square notes with context-by-context discussions and interpretations of features and patterns.
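The following is a minimal sketch of this kind of workflow (the file name and field names are our assumptions for illustration; the actual export schema is documented in the S3 Dataset and processed by the notebooks in the S2 File):

```python
import json

import pandas as pd

# Load the exported annotations. Field names here are assumptions for
# illustration; the real schema is documented in the S3 Dataset.
with open("square03_annotations.json") as f:
    annotations = json.load(f)

# One row per tagged item instance: the context (day), the item type, and
# the functional category assigned by the annotator.
rows = [
    {
        "context": ann["context"],
        "item_type": ann["type"],
        "function": ann["function"],
    }
    for ann in annotations["items"]
]
df = pd.DataFrame(rows)

# Item instances per context, and overall proportions by function.
counts_per_context = df.groupby("context").size()
function_proportions = df["function"].value_counts(normalize=True)
print(counts_per_context.describe())
print(function_proportions.head())
```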

We asked NASA for access to the ISS Crew Planner, a computer system that shows each astronaut’s tasks in five-minute increments, to aid with our interpretation of contexts, but were denied. As a proxy, we used another, less detailed source: the ISS Daily Summary Reports (DSRs), published on a semi-regular basis by NASA on its website [ 19 ]. Activities mentioned in the DSRs must often be connected with a context by inference. Therefore, our conclusions are likely less precise than if we had seen the Crew Planner, but they also more clearly represent the result of simply observing and interpreting the material culture record.

The crew during our sample period formed ISS Expedition 66 (October 2021-March 2022). They were responsible for the movement of objects in the sample squares as they carried out their daily tasks. The group consisted of two Russians affiliated with Roscosmos (the Russian space agency, 26%), one German belonging to the European Space Agency (ESA, 14%), and four Americans employed by NASA (57%). There were six men (86%) and one woman (14%), approximately equivalent to the historic proportions in the ISS population (84% and 16%, respectively). The Russian crew had their sleeping quarters at the aft end of the station, in the Zvezda module. The ESA astronaut slept in the European Columbus laboratory module. The four NASA crew slept in the US Node 2 module (see below). These arrangements emphasize the national character of discrete spaces around the ISS, also evident in our previous study of population distributions [ 15 ]. Both of the sample areas in this study were located in US modules.

Square 03 was placed in the starboard Maintenance Work Area (MWA, Fig 1 ), one of a pair of workstations located opposite one another in the center of the Node 2 module, with four crew berths towards the aft and a series of five ports for the docking of visiting crew/cargo vehicles and two modules on the forward end ( Fig 2 ). Node 2 (sometimes called “Harmony”) is a connector that links the US, Japanese, and European lab modules. According to prevailing design standards when the workstation was developed, an MWA “shall serve as the primary location for servicing and repair of maximum sized replacement unit/system components” [ 20 ]. Historic images published by NASA showing its use suggested that its primary function was maintenance of equipment and also scientific work that did not require a specific facility such as a centrifuge or furnace.

Fig 1. An open crew berth is visible at right. The yellow dotted line indicates the boundaries of the sample area. Credit: NASA/ISSAP.

https://doi.org/10.1371/journal.pone.0304229.g001

Fig 2. Credit: Tor Finseth, by permission, modified by Justin Walsh.

https://doi.org/10.1371/journal.pone.0304229.g002

Square 03 measured 90.3 cm (top) x 87.8 (left) x 89.4 (bottom) x 87.6 (right), for an area of approximately 0.79 m². Its primary feature was a blue metal panel with 40 square loop-type Velcro patches arranged in four rows of ten. During daily photography, many items were attached to the Velcro patches (or held by a clip or in a resealable bag which had its own hook-type Velcro). Above and below the blue panel were additional Velcro patches placed directly on the white plastic wall surface. These patches were white, in different sizes and shapes and irregularly arranged, indicating that they had been placed on the wall in response to different needs. Some were dirty, indicating long use. The patches below the blue panel were rarely used during the sample period, but the patches above were used frequently to hold packages of wet wipes, as well as resealable bags with electrostatic dispersion kits and other items. Outside the sample area, the primary features were a crew berth to the right, and a blue metal table attached to the wall below. This table, the primary component of the MWA, “provides a rigid surface on which to perform maintenance tasks,” according to NASA [ 21 ]. It is modular and can be oriented in several configurations, from flat against the wall to horizontal (i.e., perpendicular to the wall). A laptop to the left of the square occasionally showed information about work happening in the area.

In the 60 context photos of Square 03, we recorded 3,608 instances of items, an average of 60.1 (median = 60.5) per context. The lowest count was 24 in context 2 (where most of the wall was hidden from view behind an opaque storage bag), and the highest was 75 in both contexts 20 and 21. For comparison between squares, we can also calculate the item densities per m². The average density was 76.1 items/m² (minimum = 30, maximum = 95). The count per context ( Fig 3(A) ) began much lower than average in the first three contexts because of a portable glovebag and a stowage bag that obscured much of the sample square. It rose to an above-average level which was sustained (with the exception of contexts 11 and 12, which involved the appearance of another portable glovebag) until about context 43, when the count dipped again and the area seemed to show less use. Contexts 42–59 showed below-average numbers, as much as 20% lower than previously.
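A short sketch of the arithmetic (our own approximation of the quadrilateral area by averaging opposite sides; the paper's reported figures are used only as a check):

```python
# Approximate the irregular quadrilateral of Square 03 by averaging opposite
# sides (our own simplification, used only to check the published figures).
top, left, bottom, right = 0.903, 0.878, 0.894, 0.876  # side lengths in meters

area = ((top + bottom) / 2) * ((left + right) / 2)
print(f"area ≈ {area:.2f} m²")  # ≈ 0.79 m²

# Item density for a typical context: instances counted divided by area.
instances = 60
print(f"density ≈ {instances / area:.1f} items/m²")  # ≈ 76.1 items/m²
```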

Fig 3. (a) Count of artifacts in Square 03 over time. (b) Proportions of artifacts by function in Square 03. Credit: Rao Hamza Ali.

https://doi.org/10.1371/journal.pone.0304229.g003

Seventy-four types of items appeared at least once here, belonging to six categories: equipment (41%), office supplies (31%), electronic (17%), stowage (9%), media (1%), and food (<1%). To better understand the significance of various items in the archaeological record, we assigned them to functional categories ( Table 1 , Fig 3(B) ). Restraints, or items used for holding other things in place, accounted for 35% of artifacts; tools for 12%; containers for 9%; writing items for 9%; audiovisual items for 6%; experimental items for 6%; lights for 4%; safety items for 4%; body maintenance for 4%; power items for 4%; computing items for 3%; labels for 1%; and drinks for less than 1%. We could not identify a function for 2% of the items.

Table 1. https://doi.org/10.1371/journal.pone.0304229.t001

One of the project goals is understanding cultural adaptations to the microgravity environment. We placed special attention on “gravity surrogates,” pieces of (often simple) technology that are used in space to replicate the terrestrial experience of things staying where they are placed. Gravity surrogates include restraints and containers. It is quite noticeable that gravity surrogates comprise close to half of all items (44%) in Square 03, while the tools category, which might have been expected to be most prominent in an area designated for maintenance, is less than one-third as large (12%). Adding other groups associated with work, such as “experiment” and “light,” only brings the total to 22%.

Square 05 (Figs 2 and 4 ) was placed in a central location on the aft wall of the multipurpose Node 3 (“Tranquility”) module. This module does not include any specific science facilities. Instead, there are two large pieces of exercise equipment, the TVIS (Treadmill with Vibration Isolation Stabilization System, on the forward wall at the starboard end), and the ARED (Advanced Resistive Exercise Device, on the overhead wall at the port end). Use of the machines forms a significant part of crew activities, as they are required to exercise for two hours each day to counteract loss of muscle mass and bone density, and enable readjustment to terrestrial gravity on their return. The Waste and Hygiene Compartment (WHC), which includes the USOS latrine, is also here, on the forward wall in the center of the module, opposite Square 05. Finally, three modules are docked at Node 3’s port end. Most notable is the Cupola, a kind of miniature module on the nadir side with a panoramic window looking at Earth. This is the most popular leisure space for the crew, who often describe the hours they spend there. The Permanent Multipurpose Module (PMM) is docked on the forward side, storing equipment, food, and trash. In previous expeditions, some crew described installing a curtain in the PMM to create a private space for changing clothes and performing body maintenance activities such as cleaning oneself [ 22 , 23 ], but it was unclear whether that continued to be its function during the expedition we observed. One crew member during our sample period posted a video on Instagram showing the PMM interior and their efforts to re-stow equipment in a bag [ 24 ]. The last space attached to Node 3 is an experimental inflatable module docked on the aft side, called the Bigelow Expandable Activity Module (BEAM), which is used for storage of equipment.

Fig 4. The yellow dotted line indicates the boundaries of the sample area. The ARED machine is at the far upper right, on the overhead wall. The TVIS treadmill is outside this image to the left, on the forward wall. The WHC is directly behind the photographer. Credit: NASA/ISSAP.

https://doi.org/10.1371/journal.pone.0304229.g004

Square 05 was on a mostly featureless wall, with a vertical handrail in the middle. Handrails are metal bars located throughout the ISS that are used by the crew to hold themselves in place or provide a point from which to propel oneself to another location. NASA’s most recent design standards acknowledge that “[t]hey also serve as convenient locations for temporary mounting, affixing, or restraint of loose equipment and as attachment points for equipment” [ 25 ]. The handrail in Square 05 was used as an impromptu object restraint when a resealable bag filled with other bags was squeezed between the handrail and the wall.

The Brine Processing Assembly (BPA), a white plastic box which separates water from other components of urine for treatment and re-introduction to the station’s drinkable water supply [ 26 ], was fixed to the wall outside the square boundaries at lower left. A bungee cord was attached to both sides of the box; the one on the right was connected at its other end to the handrail attachment bracket. Numerous items were attached to or wedged into this bungee cord during the survey, bringing “gravity” into being. A red plastic duct ran through the square from top center into the BPA. This duct led from the latrine via the overhead wall. About halfway through the survey period, in context 32, the duct was wrapped in Kapton tape. According to the DSR for that day, “the crew used duct tape [ sic ] to make a seal around the BPA exhaust to prevent odor permeation in the cabin” [ 27 ], revealing an aspect of the crew’s experience of this area that is captured only indirectly in the context photograph. Permanently attached to the wall were approximately 20 loop-type Velcro patches in many shapes and sizes, placed in a seemingly random pattern that likely indicates that they were put there at different times and for different reasons.

Other common items in Square 05 were a mirror, a laptop computer, and an experimental item belonging to the German space agency DLR called the Touch Array Assembly [ 28 ]. The laptop moved just three times, and only by a few centimeters each time, during the sample period. The Touch Array was a black frame enclosing three metal surfaces which were being tested for their bacterial resistance; members of the crew touched the surfaces at various moments during the sample period. Finally, and most prominent due to its size, frequency of appearance, and use (judged by its movement between context photos) was an unidentified crew member’s toiletry kit.

By contrast with Square 03, Square 05 was the most irregular sample location, roughly twice as wide as it was tall. Its dimensions were 111 cm (top) x 61.9 (left) x 111.4 (bottom) x 64.6 (right), for an area of approximately 0.7 m², about 89% of the area of Square 03. We identified 1,830 instances of items in the 60 contexts, an average of 30.5 (median = 32) per context. The minimum was 18 items in context 5, and the maximum was 39 in contexts 24, 51, and 52. The average item density was 43.6 items/m² (minimum = 26, maximum = 56), 57% of the density of Square 03.

The number of items trended upward throughout the sample period ( Fig 5(A)) . The largest spike occurred in context 6 with the appearance of the toiletry kit, which stored (and revealed) a number of related items. The kit can also be linked to one of the largest dips in item count, seen from contexts 52 to 53, when it was closed (but remained in the square). Other major changes can often be attributed to the addition and removal of bungee cords, which had other items such as carabiners and brackets attached. For example, the dip seen in context 25 correlates with the removal of a bungee cord with four carabiners.

Fig 5. (a) Count of artifacts and average count in Square 05 over time. (b) Proportions of artifacts by function in Square 05. Credit: Rao Hamza Ali.

https://doi.org/10.1371/journal.pone.0304229.g005

41 different item types were found in Square 05, about 55% as many as in Square 03. These belonged to five different categories: equipment (63%), electronic (17%), stowage (10%), office supplies (5%), and food (2%). The distribution of function proportions was quite different in this sample location ( Table 2 and Fig 5(B)) . Even though restraints were still most prominent, making up 32% of all items, body maintenance was almost as high (30%), indicating how strongly this area was associated with the activity of cleaning and caring for oneself. Computing (8%, represented by the laptop, which seems not to have been used), power (8%, from various cables), container (7%, resealable bags and Cargo Transfer Bags), and hygiene (6%, primarily the BPA duct) were the next most common items. Experiment was the function of 4% of the items, mostly the Touch Array, which appeared in every context, followed by drink (2%) and life support (1%). Safety, audiovisual, food, and light each made up less than 1% of the functional categories.

Table 2. https://doi.org/10.1371/journal.pone.0304229.t002

Tracking changes over time is critical to understanding the activity happening in each area. We now explore how the assemblages change by calculating the Brainerd-Robinson Coefficient of Similarity [ 29 , 30 ] as operationalized by Peeples [ 31 , 32 ]. This metric is used in archaeology for comparing all pairs of the contexts by the proportions of categorical artifact data, here functional type. Applying the coefficient to the SQuARE contexts enables identification of time periods for distinct activities using artifact function and frequency alone, independent of documentary or oral evidence.
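A minimal sketch of the coefficient as described here (illustrative values only; the full implementation follows Peeples [ 31 , 32 ] and is provided in the S2 File notebooks):

```python
import numpy as np

def brainerd_robinson(p_a, p_b):
    """Brainerd-Robinson similarity between two assemblages.

    p_a and p_b are the proportions (each summing to 1) of the functional
    categories in two contexts. Returns 0 (no overlap) to 200 (identical).
    """
    p_a, p_b = np.asarray(p_a), np.asarray(p_b)
    return 200 - np.sum(np.abs(100 * p_a - 100 * p_b))

# Toy example with three functional categories (e.g. restraint, tool,
# container); the real analysis compares every pair of the 60 contexts.
context_a = [0.50, 0.30, 0.20]
context_b = [0.40, 0.35, 0.25]
print(brainerd_robinson(context_a, context_a))  # 200.0
print(brainerd_robinson(context_a, context_b))  # 180.0
```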

Multiple phases of activities took place in the square. Moments of connected activity are visible as red clusters in contexts 0–2, 11–12, 28–32, and 41 ( Fig 6(A)) . Combining this visualization with close observation of the photos themselves, we argue that there are actually eight distinct chronological periods.

  • Contexts 0–2: Period 1 (S1 Fig in S3 File ) is a three-day period of work involving a portable glovebag (contexts 0–1) and a large blue stowage bag (context 2). It is difficult to describe trends in functional types because the glovebag and stowage bag obstruct the view of many objects. Items which appear at the top of the sample area, such as audiovisual and body maintenance items, are overemphasized in the data as a result. It appears that some kind of science is happening here, perhaps medical sample collection due to the presence of several small resealable bags visible in the glovebag. The work appears particularly intense in context 1, with the positioning of the video camera and light to point into the glovebag. These items indicate observation and oversight of crew activities by ground control. A white cargo transfer bag for storage and the stowage bag for holding packing materials in the context 2 photo likely relate to the packing of a Cargo Dragon vehicle that was docked to Node 2. The Dragon departed from the ISS for Earth, full of scientific samples, equipment, and crew personal items, a little more than three hours after the context 2 photo was taken [ 33 ].
  • Contexts 3–10: Period 2 (S2 Fig in S3 File ) was a “stable” eight-day period in the sample, when little activity is apparent, few objects were moved or transferred in or out the square, and the primary function of the area seems to be storage rather than work. In context 6, a large Post-It notepad appeared in the center of the metal panel with a phone number written on it. This number belonged to another astronaut, presumably indicating that someone on the ISS had been told to call that colleague on the ground (for reasons of privacy, and in accordance with NASA rules for disseminating imagery, we have blurred the number in the relevant images). In context 8, the same notepad sheet had new writing appear on it, this time reading “COL A1 L1,” the location of an experimental rack in the European lab module.
  • Contexts 11–12: Period 3 (S3 Fig in S3 File ) involves a second appearance of a portable glovebag (a different one from that used in contexts 0–1, according to its serial number), this time for a known activity, a concrete hardening experiment belonging to the European Space Agency [ 34 , 35 ]. This two-day phase indicates how the MWA space can be shared with non-US agencies when required. It also demonstrates the utility of this flexible area for work beyond biology/medicine, such as material science. Oversight of the crew’s activities by ground staff is evident from the positioning of the video camera and LED light pointing into the glovebag.
  • Contexts 13–27: Period 4 (S4 Fig in S3 File ) is another stable fifteen-day period, similar to Period 2. Many items continued to be stored on the aluminum panel. The LED light’s presence is a trace of the activity in Period 3 that persists throughout this phase. Only in context 25 can a movement of the lamp potentially be connected to an activity relating to one of the stored items on the wall: at least one nitrile glove was removed from a resealable bag behind the lamp. In general, the primary identifiable activity during Period 4 is storage.
  • Contexts 28–32: Period 5 (S5 Fig in S3 File ), by contrast, represents a short period of five days of relatively high and diverse activity. In context 28, a Microsoft Hololens augmented reality headset appeared. According to the DSR for the previous day, a training activity called Sidekick was carried out using the headset [ 36 ]. The following day, a Saturday, showed no change in the quantity or type of objects, but many were moved around and grouped by function—adhesive tape rolls were placed together, tools were moved from Velcro patches into pouches or straightened, and writing implements were placed in a vertical orientation when previously they were tilted. Context 29 represents a cleaning and re-organization of the sample area, which is a common activity for the crew on Saturdays [ 37 ]. Finally, in context 32, an optical coherence tomography scanner—a large piece of equipment for medical research involving crew members’ eyes—appeared [ 38 ]. This device was used previously during the sample period, but on the same day as the ESA concrete experiment, so that earlier work seems to have happened elsewhere [ 39 ].
  • Contexts 33–40: Period 6 (S6 Fig in S3 File ) is the third stable period, in which almost no changes are visible over eight days. The only sign of activity is a digital timer which was started six hours before the context 39 image was made and continued to run at least through context 42.
  • Context 41: Period 7 (S7 Fig in S3 File ) is a single context in which medical sample collection may have occurred. Resealable bags (some holding others) appeared in the center of the image and at lower right. One of the bags at lower right had a printed label reading “Reservoir Containers.” We were not able to discern which type of reservoir containers the label refers to, although the DSR for the day mentions “[Human Research Facility] Generic Saliva Collection,” without stating the location for this work [ 40 ]. Evidence from photos of other squares shows that labeled bags could be re-used for other purposes, so our interpretation of medical activity for this context is not conclusive.
  • Contexts 42–59: Period 8 (S8 Fig in S3 File ) is the last and longest period of stability and low activity—eighteen days in which no specific activity other than the storage of items can be detected. The most notable change is the appearance for the first time of a foil water pouch in the central part of the blue panel.

Fig 6. Visualization of Brainerd-Robinson similarity, compared context-by-context by item function, for (a) Square 03 and (b) Square 05. The more alike a pair of contexts is, the higher the coefficient value; a value of 200 indicates perfect similarity. The resulting matrix of coefficients is visualized on a scale from blue (lowest similarity) to red (highest similarity). The dark red diagonal indicates complete similarity, where each context is compared to itself; dark blue represents complete difference. Credit: Shawn Graham.

https://doi.org/10.1371/journal.pone.0304229.g006

In the standards used at the time of installation, “stowage space” was the sixth design requirement listed for the MWA after accessibility; equipment size capability; scratch-resistant surfaces; capabilities for electrical, mechanical, vacuum, and fluid support during maintenance; and the accommodation of diagnostic equipment [ 20 ]. Only capabilities for fabrication were listed lower than stowage. Yet 50 of the 60 contexts (83%) fell within stable periods where little or no activity is identifiable in Square 03. According to the sample results, therefore, this area seems to exist not for “maintenance,” but primarily for the storage and arrangement of items. The most recent update of the design standards does not mention the MWA, but states, “Stowage location of tool kits should be optimized for accessibility to workstations and/or maintenance workbenches” [ 25 ]. Our observation confirms the importance of this suggestion.

The MWA was also a flexible location for certain science work, like the concrete study or crew health monitoring. Actual maintenance of equipment was hardly in evidence in the sample (possibly contexts 25, 39, and 44), and may not even have happened at all in this location. Some training did happen here, such as a review of procedures for the Electromagnetic Levitator camera (instructions for changing settings on a high-speed camera appeared on the laptop screen; the day’s DSR shows that this camera is part of the Electromagnetic Levitator facility, located in the Columbus module [ 41 ]). The training required the use of the Hololens system (context 28 DSR, cited above).

Although many item types were represented in Square 03, it became clear during data capture how many things were basically static, unmoving and therefore unused, especially certain tools, writing implements, and body maintenance items. The MWA was seen as an appropriate place to store these items. It may be the case that their presence here also indicates that their function was seen as an appropriate one for this space, but the function(s) may not be carried out—or perhaps not in this location. Actualization of object function was only visible to us when the state of the item changed—it appeared, it moved, it changed orientation, it disappeared, or, in the case of artifacts that were grouped in collections rather than found as singletons, its shape changed or it became visibly smaller or diminished. We therefore have the opportunity to explore not only the actuality of object use, but also the potentiality of use or function, and the meaning of that quality for archaeological interpretation [ 42 , 43 ]. This possibility is particularly intriguing in light of the archaeological turn towards recognizing the agency of objects to impact human activity [ 44 , 45 ]. We will explore these implications in a future publication.
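The kind of state comparison described here can be illustrated with a minimal sketch (hypothetical item identifiers and positions; in practice, items were outlined as polygons per context and linked across days by the annotators rather than by automatic IDs):

```python
# Hypothetical per-context inventories keyed by item identifier, with a
# rough position for each item. These names and coordinates are invented
# for illustration only.
context_n = {"tape_roll_01": (120, 340), "scissors_02": (400, 210)}
context_n_plus_1 = {"scissors_02": (455, 198), "wet_wipes_03": (90, 510)}

appeared = set(context_n_plus_1) - set(context_n)
disappeared = set(context_n) - set(context_n_plus_1)
moved = {
    item
    for item in set(context_n) & set(context_n_plus_1)
    if context_n[item] != context_n_plus_1[item]
}
print(appeared, disappeared, moved)
# {'wet_wipes_03'} {'tape_roll_01'} {'scissors_02'}
```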

We performed the same chronological analysis for Square 05. Fig 6(B) represents the analysis for both item types and for item functions. We identified three major phases of activity, corresponding to contexts 0–5, 6–52, and 53–59 (S9-S11 Figs in S3 File ). The primary characteristics of these phases relate to an early period of unclear associations (0–5) marked by the presence of rolls of adhesive tape and a few body maintenance items (toothpaste and toothbrush, wet wipes); the appearance of a toiletry kit on the right side of the sample area, fully open with clear views of many of the items contained within (6–52); and finally, the closure of the toiletry kit so that its contents can no longer be seen (53–59). We interpret the phases as follows:

  • Contexts 0–5: In Period 1 (six days, S9 Fig in S3 File ), while items such as a mirror, dental floss picks, wet wipes, and a toothbrush held in the end of a toothpaste tube were visible, the presence of various other kinds of items confounds easy interpretation. Two rolls of duct tape were stored on the handrail in the center of the sample area, and the Touch Array and laptop appeared in the center. Little movement can be identified, apart from a blue nitrile glove that appeared in context 1 and moved left across the area until it was wedged into the bungee cord for contexts 3 and 4. The tape rolls were removed prior to context 5. A collection of resealable bags was wedged behind the handrail in context 3, remaining there until context 9. Overall, this appears to be a period characterized by eclectic associations, showing an area without a clear designated function.
  • Contexts 6–52: Period 2 (S10 Fig in S3 File ) is clearly the most significant one for this location due to its duration (47 days). It was dominated by the number of body maintenance items located in and around the toiletry kit, especially a white hand towel (on which a brown stain was visible from context 11, allowing us to confirm that the same towel was present until context 46). A second towel appeared alongside the toiletry kit in context 47, and the first one was fixed at the same time to the handrail, where it remained through the end of the sample period. A third towel appeared in context 52, attached to the handrail together with the first one by a bungee cord, continuing to the end of the sample period. Individual body maintenance items moved frequently from one context to the next, showing the importance of this type of activity for this part of Node 3. For reasons that are unclear, the mirror shifted orientation from vertical to diagonal in context 22, and then was put back in a vertical orientation in context 31 (a Monday, a day which is not traditionally associated with cleaning and organization). Collections of resealable bags appeared at various times, including a large one labeled “KYNAR BAG OF ZIPLOCKS” in green marker at the upper left part of the sample area beginning in context 12. (Kynar is a non-flammable plastic that NASA prefers for resealable bags over the generic commercial off-the-shelf variety; however, its resistance to heat makes it less suitable for fabricating custom sizes, so bags made from traditional but flammable low-density polyethylene still dominate on the ISS [ 14 ].) The Kynar bag contained varying numbers of bags within it over time; occasionally, it appeared to be empty. The Touch Array changed orientation on seven of 47 days in Period 2, or 15% of the time (12% of all days in the survey), showing activity associated with scientific research in this area. In context 49, a life-support item, the Airborne Particulate Monitor (APM), was installed [ 46 ]. This device, which measures “real-time particulate data” to assess hazards to crew health [ 47 ], persisted through the end of the sample period.
  • Contexts 53–59: Period 3 (S11 Fig in S3 File ) appears as a seven-day phase marked by low activity. Visually, the most notable feature is the closure of the toiletry kit, which led to much lower counts of body maintenance items. Hardly any of the items on the wall moved at all during this period.

While body maintenance in the form of cleaning and caring for oneself could be an expected function for an area with exercise and excretion facilities, it is worth noting that the ISS provides, at most, minimal accommodation for this activity. A description of the WHC stated, “To provide privacy…an enclosure was added to the front of the rack. This enclosure, referred to as the Cabin, is approximately the size of a typical bathroom stall and provides room for system consumables and hygiene item stowage. Space is available to also support limited hygiene functions such as hand and body washing” [ 48 ]. A diagram of the WHC in the same publication shows the Cabin without a scale but suggests that it measures roughly 2 m (h) x 0.75 m (w) x 0.75 m (d), a volume of approximately 1.125 m³. NASA’s current design standards state that the body volume of a 95th percentile male astronaut is 0.99 m³ [ 20 ], meaning that a person of that size would take up 88% of the space of the Cabin, leaving little room for performing cleaning functions—especially if the Cabin is used as apparently intended, to also hold “system consumables and hygiene item[s]” that would further diminish the usable volume. This situation explains why crews try to adapt other spaces, such as storage areas like the PMM, for these activities instead. According to the crew debriefing statement, only one of them used the WHC for body maintenance purposes; it is not clear whether the toiletry kit belonged to that individual. But the appearance of the toiletry kit in Square 05—outside of the WHC, in a public space where others frequently pass by—may have been a response to the limitations of the WHC Cabin. It suggests a need for designers to re-evaluate affordances for body maintenance practices and storage for related items.
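The arithmetic behind the 88% figure, restating the values already given above:

```latex
V_{\text{Cabin}} \approx 2\,\text{m} \times 0.75\,\text{m} \times 0.75\,\text{m} = 1.125\,\text{m}^3,
\qquad
\frac{V_{\text{body}}}{V_{\text{Cabin}}} = \frac{0.99\,\text{m}^3}{1.125\,\text{m}^3} \approx 0.88
```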

Although Squares 03 and 05 were different sizes and shapes, comparing the density of items by function shows evidence of their usage ( Table 3 ). The typical context in Square 03 had twice as many restraints and containers, but less than one-quarter as many body maintenance items as Square 05. Square 03 also had many tools, lights, audiovisual equipment, and writing implements, while Square 05 had none of these types. Square 05 had life support and hygiene items which were missing from Square 03. It appears that flexibility and multifunctionality were key elements for 03, while in 05 there was emphasis on one primary function (albeit an improvised one, designated by the crew rather than architects or ground control), cleaning and caring for one’s body, with a secondary function of housing static equipment for crew hygiene and life support.

Table 3. https://doi.org/10.1371/journal.pone.0304229.t003

As this is the first time such an analysis has been performed, it is not yet possible to say how typical or unusual these squares are regarding the types of activities taking place; but they provide a baseline for eventual comparison with the other four squares and future work on ISS or other space habitats.

Some general characteristics are revealed by archaeological analysis of a space station’s material culture. First, even in a small, enclosed site, occupied by only a few people over a relatively short sample period, we can observe divergent patterns for different locations and activity phases. Second, while distinct functions are apparent for these two squares, they are not the functions that we expected prior to this research. As a result, our work fulfills the promise of the archaeological approach to understanding life in a space station by revealing new, previously unrecognized phenomena relating to life and work on the ISS. There is now systematically recorded archaeological data for a space habitat.

Squares 03 and 05 served quite different purposes. The reasons for this fact are their respective affordances and their locations relative to activity areas designated for science and exercise. Their national associations, especially the manifestation of the control wielded by NASA over its modules, also played a role in the use of certain materials, the placement of facilities, and the organization of work. How each area was used was also the result of an interplay between the original plans developed by mission planners and habitat designers (or the lack of such plans), the utility of the equipment and architecture in each location, and the contingent needs of the crew as they lived in the station. This interplay became visible in the station’s material culture, as certain areas were associated with particular behaviors, over time and through tradition—over the long duration across many crews (Node 2, location of Square 03, docked with the ISS in 2007, and Node 3, location of Square 05, docked in 2010), and during the specific period of this survey, from January to March 2022. During the crew debriefing, one astronaut said, “We were a pretty organized crew who was also pretty much on the same page about how to do things…. As time went on…we organized the lab and kind of got on the same page about where we put things and how we’re going to do things.” This statement shows how functional associations can become linked to different areas of the ISS through usage and mutual agreement. At the same time, the station is not frozen in time. Different people have divergent ideas about how and where to do things. It seems from the appearance of just one Russian item—a packet of generic wipes ( salfetky sukhiye ) stored in the toiletry kit throughout the sample period—that the people who used these spaces and carried out their functions did not typically include the ISS’s Russian crew. Enabling greater flexibility to define how spaces can be used could have a significant impact on improving crew autonomy over their lives, such as how and where to work. It could also lead to opening of all spaces within a habitat to the entire crew, which seems likely to improve general well-being.

An apparent disjunction between planned and actual usage appeared in Square 03. It is intended for maintenance as well as other kinds of work. But much of the time, there was nobody working here—a fact that is not captured by historic photos of the area, precisely because nothing is happening. The space has instead become the equivalent of a pegboard mounted on a wall in a home garage or shed, convenient for storage for all kinds of items—not necessarily items being used there—because it has an enormous number of attachment points. Storage has become its primary function. Designers of future workstations in space should consider that they might need to optimize for functions other than work, because most of the time, there might not be any work happening there. They could optimize for quick storage, considering whether to impose a system of organization or to allow users to organize as they want.

We expected, based on previous (though unsystematic) observation of historic photos and other research, that resealable plastic bags (combined with Velcro patches on the bags and walls) would be the primary means for creating gravity surrogates to control items in microgravity. Yet they comprise only 7% of all items in Square 03 (256 instances). There are more than twice as many clips (572—more than 9 per context) in the sample. There were 193 instances of adhesive tape rolls, and more than 100 cable ties, but these were latent (not holding anything), representing potentiality of restraint rather than actualization. The squares showed different approaches to managing “gravity.” While Square 03 had a pre-existing structured array of Velcro patches, Square 05 showed a more expedient strategy with Velcro added in response to particular activities. Different needs require different affordances; creating “gravity” is a more nuanced endeavor than it initially appears. More work remains to be done to optimize gravity surrogates for future space habitats, because this is evidently one of the most critical adaptations that crews have to make in microgravity (44% of all items in Square 03, 39% in 05).

Square 05 is an empty space, seemingly just one side of a passageway for people going to use the lifting machine or the latrine, to look out of the Cupola, or get something out of deep storage in one of the ISS’s closets. In our survey, this square was a storage place for toiletries, resealable bags, and a computer that never (or almost never) gets used. It was associated with computing and hygiene simply by virtue of its location, rather than due to any particular facilities it possessed. It has no affordances for storage. There are no cabinets or drawers, as would be appropriate for organizing and holding crew personal items. A crew member decided that this was an appropriate place to leave their toiletry kit for almost two months. Whether this choice was appreciated or resented by fellow crew members cannot be discerned based on our evidence, but it seems to have been tolerated, given its long duration. The location of the other four USOS crew members’ toiletry kits during the sample period is unknown. A question raised by our observations is: how might a function be more clearly defined by designers for this area, perhaps by providing lockers for individual crew members to store their toiletries and towels? This would have a benefit not only for reducing clutter, but also for reducing exposure of toiletry kits and the items stored in them to flying sweat from the exercise equipment or other waste particles from the latrine. A larger compartment providing privacy for body maintenance and a greater range of motion would also be desirable.

As the first systematic collection of archaeological data from a space site outside Earth, this analysis of two areas on the ISS as part of the SQuARE payload has shown that novel insights into material culture use can be obtained, such as the use of wall areas as storage or staging posts between activities, the accretion of objects associated with different functions, and the complexity of using material replacements for gravity. These results enable better space station design and raise new questions that will be addressed through analysis of the remaining four squares.

Supporting information

S1 Movie. NASA astronaut Kayla Barron installs the first square for the Sampling Quadrangle Assemblages Research Experiment in the Japanese Experiment Module (also known as Kibo) on the International Space Station, January 14, 2022.

She places Kapton tape to mark the square’s upper right corner. Credit: NASA.

https://doi.org/10.1371/journal.pone.0304229.s001

S1 Dataset.

https://doi.org/10.1371/journal.pone.0304229.s002

S2 Dataset.

https://doi.org/10.1371/journal.pone.0304229.s003

S3 Dataset. The image annotations are provided by sample square in JSON-formatted text files.

The data is available in the ‘SQuARE-notebooks’ repository on Github.com in the ‘data’ subfolder at https://github.com/issarchaeologicalproject/SQuARE-notebooks/tree/main ; archived version of the repository is at Zenodo, DOI: 10.5281/zenodo.10654812 .

https://doi.org/10.1371/journal.pone.0304229.s004

S1 File. The ‘Rocket-Anno’ image annotation software is available on Github at https://github.com/issarchaeologicalproject/MRE-RocketAnno .

The archived version of the repository is at Zenodo, DOI: 10.5281/zenodo.10648399 .

https://doi.org/10.1371/journal.pone.0304229.s005

S2 File. The computational notebooks that process the JSON data files, reshape the data for basic statistics, and compute the Brainerd-Robinson coefficients of similarity are provided in the .ipynb notebook format.

The code is available in the ‘SQuARE-notebooks’ repository on Github.com in the ‘notebooks’ subfolder at https://github.com/issarchaeologicalproject/SQuARE-notebooks/tree/main ; archived version of the repository is at Zenodo, DOI: 10.5281/zenodo.10654812 . The software can be run online in the Google Colab environment ( https://colab.research.google.com ) or any system running Jupyter Notebooks ( https://jupyter.org/ ).

https://doi.org/10.1371/journal.pone.0304229.s006

S3 File.

https://doi.org/10.1371/journal.pone.0304229.s007

Acknowledgments

We thank Chapman University’s Office of Research and Sponsored Programs, and especially Dr. Thomas Piechota and Dr. Janeen Hill, for funding the Implementation Partner costs associated with the SQuARE payload. Chapman’s Leatherby Libraries’ Supporting Open Access Research and Scholarship (SOARS) program funded the article processing fee for this publication. Ken Savin and Ken Shields at the ISS National Laboratory gave major support by agreeing to sponsor SQuARE and providing access to ISS NL’s allocation of crew time. David Zuniga and Kryn Ambs at Axiom Space were key collaborators in managing payload logistics. NASA staff and contractors were critical to the experiment’s success, especially Kristen Fortson, Jay Weber, Crissy Canerday, Sierra Wolbert, and Jade Conway. We also gratefully acknowledge the help and resources provided by Dr. Erik Linstead, director of the Machine Learning and Affiliated Technology Lab at Chapman University. Aidan St. P. Walsh corrected the color and lens barrel distortion in all of the SQuARE imagery. Rao Hamza Ali produced charts using accessible color combinations for Figs 3 and 5 . And finally, of course, we are extremely appreciative of the efforts of the five USOS members of the Expedition 66 crew on the ISS—Kayla Barron, Raja Chari, Thomas Marshburn, Matthias Maurer, and Mark Vande Hei—who were the first archaeologists in space.

  • 1. Buchli V. Extraterrestrial methods: Towards an ethnography of the ISS. In: Carroll T, Walford A, Walton S, editors. Lineages and advancements in material culture studies: Perspectives from UCL anthropology. London: Routledge; 2021, pp. 17–32.
  • 2. Gorman A, Walsh J. Archaeology in a vacuum: obstacles to and solutions for developing a real space archaeology. In: Barnard H, editor. Archaeology outside the box: investigations at the edge of the discipline. Los Angeles, Cotsen Institute of Archaeology Press; 2023. pp. 131–123.
  • 3. Walsh J. Adapting to space: The International Space Station Archaeological Project. In: Salazar Sutil JF, Gorman A, editors. Routledge handbook of social studies of outer space. London, Routledge; 2023. pp. 400–412. https://doi.org/10.4324/9781003280507-37
  • 6. Rathje W, Murphy C. Rubbish! The archaeology of garbage Tucson: University of Arizona Press; 2001.
  • 7. De León J. The land of open graves: living and dying on the migrant trail. Berkeley, University of California Press; 2015.
  • 8. Garrison Darrin A, O’Leary B, editors. Handbook of space engineering, archaeology, and heritage. Boca Raton, CRC Press; 2009.
  • 9. Capelotti PJ. The human archaeology of space: Lunar, planetary, and interstellar relics of exploration. Jefferson, NC, McFarland Press; 2010.
  • 11. Gorman A. Space and time through material culture: An account of space archaeology. In: Salazar Sutil JF, Gorman A, editors. Routledge handbook of social studies of outer space. London, Routledge; 2023. pp. 44–56. https://doi.org/10.4324/9781003280507-5
  • 17. NASA. NASA Johnson. 2008 Aug [cited May 12 2024]. In: Flickr [Internet]. San Francisco. Available from https://www.flickr.com/photos/nasa2explore/
  • 19. NASA. ISS Daily Status Reports. 2012 Mar 1 [Cited May 12 2024]. Available from: https://blogs.nasa.gov/stationreport/
  • 20. NASA. Man-systems integration. STD-3000 Vol. 1. Houston, NASA Johnson; 1995, pp. 9–15, 78
  • 21. NASA. Maintenance Work Area | Glenn Research Center. 2020 Mar 6 [cited May 12 2024]. Available from: https://www1.grc.nasa.gov/space/iss-research/mwa/
  • 22. Cristoforetti S. Diario di un’apprendista astronauta. Milan, Le Polene; 2018. pp. 379.
  • 23. Kelly S. Endurance: A year in space, a lifetime of discovery. New York, Knopf; 2017. pp. 175, 285–86.
  • 24. Barron K. Instagram post, 2022 Feb 12 [cited 2024 May 12]. Available from: https://www.instagram.com/tv/CZ4pW9HJ2Wg/?igsh=ZDE1MWVjZGVmZQ==
  • 25. NASA. NASA space flight human-system standard. STD-3001 Volume 1: Human integration design handbook. Rev. 1 Houston, NASA Johnson; 2014. pp. 814, 829–833.
  • 27. Keeter B. ISS daily summary report– 2/21/2022. 2022 Feb 21 [cited May 12 2024]. In: NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/02/page/6/
  • 28. DLR. Fingerprint research to combat harmful bacteria. 2022 Jan 18 [cited May 12 2024]. Available from: https://www.dlr.de/en/latest/news/2022/01/20220118_fingerprint-research-to-combat-harmful-bacteria
  • 31. Peeples MA. R script for calculating the Brainerd-Robinson coefficient of similarity and assessing sampling error. 2011 [cited May 12 2024]. Available from: http://www.mattpeeples.net/br.html .
  • 33. Garcia M. Cargo Dragon Splashes Down Ending SpaceX CRS-24 Mission. 2022 Jan 24 [cited May 12 2024]. NASA Space Station blog [Internet]. Available from: https://blogs.nasa.gov/spacestation/2022/01/24/cargo-dragon-splashes-down-ending-spacex-crs-24-mission/
  • 34. ESA. Concrete Hardening | Cosmic Kiss 360°. 2022 Mar 5 [cited May 12 2024]. Available from: https://www.esa.int/ESA_Multimedia/Videos/2022/05/Concrete_Hardening_Cosmic_Kiss_360
  • 35. Keeter B. ISS daily summary report– 2/01/2022. 2022 Feb 1 [cited May 12 2024]. In: NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/02/page/19/
  • 36. Keeter B. ISS daily summary report– 2/17/2022. 2022 Feb 17 [cited May 12 2024]. In: NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/02/page/8/
  • 37. T. Pultarova, How Do You Clean a Space Station? Astronaut Thomas Pesquet Shares Orbital Spring Cleaning Tips, Space.com, May 6, 2021. Online at https://www.space.com/space-station-cleaning-tips-astronaut-thomas-pesquet
  • 38. Keeter B. ISS daily summary report– 2/22/2022. 2022 Feb 22 [cited May 12 2024]. In: NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/02/page/5/
  • 39. Keeter B. ISS daily summary report– 2/02/2022. 2022 Feb 2 [cited May 12 2024]. NASA ISS On-Orbit Status Report blog [Internet]. Houston. Online at https://blogs.nasa.gov/stationreport/2022/02/page/18/
  • 40. Keeter B. ISS daily summary report– 3/03/2022. 2022 Mar 3 [cited May 12 2024]. In: NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/03/page/21/
  • 41. Keeter B. ISS daily summary report– 2/08/2022. 2022 Feb 8 [cited May 12 2024]. NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/02/page/15/
  • 42. Aristotle of Stageira. Metaphysics, Volume I: Books 1–9, Tredennick H, translator. Loeb Classical Library 271. Cambridge, MA, Harvard University Press; 1933. pp. 429–473.
  • 44. Hodder I. Entangled: An archaeology of the relationships between humans and things. Hoboken. NJ, Wiley-Blackwell; 2012.
  • 45. Malafouris L., How Things Shape the Mind: A Theory of Material Engagement (MIT Press, 2016).
  • 46. Keeter B. ISS daily summary report– 3/11/2022. 2022 Mar 11 [cited May 12 2024]. NASA ISS On-Orbit Status Report blog [Internet]. Houston. Available from: https://blogs.nasa.gov/stationreport/2022/03/page/15/
