| p | treatedIndex |
|---|---|
| Min. :0.2570 | Min. : 1.00 |
| 1st Qu.:0.4752 | 1st Qu.:10.00 |
| Median :0.5324 | Median :21.00 |
| Mean :0.5304 | Mean :20.87 |
| 3rd Qu.:0.5946 | 3rd Qu.:31.00 |
| Max. :0.7804 | Max. :42.00 |
Treated is a dichotomous numerical variable: it is 1 if the tv market watched the commercials and 0 if not. The mean indicates that 49.41% of the tv markets were treated, and the remainder were untreated. In an experiment, researchers create a treatment group (here, the markets that saw the commercials) and a control group in order to test for a difference.
r is the number of 18 and 19 year olds that voted in the 2004 election. The average tv market had 151 young registered voters who cast votes in the election.
n is the number of registered voters between the ages of 18 and 19 in each tv market.
p is the proportion of registered voters between the ages of 18 and 19 that voted in the election, meaning it could be calculated by dividing r by n.
Strata and treatedIndex aren’t important for this exercise. The different tv markets were chosen because they were similar, so for each market that saw the commercials there is a similar market that didn’t. The variable strata indicates which markets are matched together, and treatedIndex indicates how many treated tv markets are above each observation. Full confession: I don’t totally understand what treatedIndex is supposed to be used for.
So to restate our hypotheses, we intend to test whether being in a tv market that saw commercials encouraging young adults to vote (treated) increased the voting rate among 18 and 19 year olds (p). The null hypothesis, which we are attempting to reject, is that there is no relationship between treated and p.
So what do we need to do to test the hypothesis that these tv commercials increased voting rates?
Last chapter we saw how similar the mean of the tour bus we found was to the mean of the population of marathoners. Here, we don’t know the voting rate for the population of 18 and 19 year olds. But we do have a control group, which we assume stands in for all 18 and 19 year olds. We’re assuming that the treated group is a random sample of the population of 18 and 19 year olds, so absent the commercials it should have the same voting rate as everyone else. The treated group did see the commercials, though, so if there is a difference between the two groups, we can ascribe it to the commercials. Thus, we can test whether the mean voting rate among the tv markets that were treated with the commercials differs significantly from that of the control group.
Let’s start then by calculating the mean voting rate for the two groups, the treated tv markets and the control group. We can do that by using the subset() command to split RockTheVote into two data frames, based on whether the tv market was in the treated group or not.
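As a sketch of what that looks like in R: the data frame and column names below follow the text, but the numbers in the toy `RockTheVote` data frame are made up purely so the example runs on its own.

```r
# Toy stand-in for the RockTheVote data frame (made-up numbers,
# just to make the sketch self-contained and runnable)
RockTheVote <- data.frame(
  treated = c(1, 1, 1, 0, 0, 0),
  p       = c(0.56, 0.54, 0.55, 0.52, 0.51, 0.52)
)

# Split the data into the treated markets and the control markets
treatment <- subset(RockTheVote, treated == 1)
control   <- subset(RockTheVote, treated == 0)

mean(treatment$p)   # average voting rate in the treated markets
mean(control$p)     # average voting rate in the control markets
```

With the real data, the same two mean() calls produce the .545 and .516 figures discussed in the text.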
The average voting rate among 18 and 19 year olds for the tv markets that saw the commercials is .545 or 54.5%, and the average for the tv markets that were not treated is .516 or 51.6%. Interestingly, the mean differs between the two samples.
However, as we learned last chapter, we should expect some variation between the means since we’re taking different samples. The means of samples will conform to a normal distribution over time, but we should expect variation in each individual mean. The question then is whether the mean of the treatment group differs significantly from the mean of the control group.
Statistical significance is important; much of social science is driven by it. We’ll talk about its limitations later, but for now we can describe what we mean by the term. As we’ve discussed, the means of samples will differ somewhat from the mean of the population, by some number of standard deviations. We expect the majority of the data to fall within two standard deviations above or below the mean, and very little to fall further away.
credit: Wikipedia
34.1 percent of the data falls within 1 standard deviation above and below the mean. That’s on both sides, so a total of 68.2 percent of the data falls between 1 standard deviation below the mean and one standard deviation above the mean. 13.6 percent of the data is between 1 and 2 standard deviations. In total, we expect 95.4 percent of the data to be within two standard deviations, either above or below the mean. - The Professor, one chapter earlier
That means, to state it a different way, that the probability of the mean of a sample taken from a population falling within 2 standard deviations is .954, and the probability that it will fall further from the mean is only .046. That is fairly unlikely. So if the mean of the treatment group falls more than 2 standard deviations from the mean of the control group, that indicates it’s either a weird sample OR it isn’t from the same population. That’s what we concluded about the tour bus we found: it wasn’t drawn from the population of marathoners. And if the tv markets that saw the commercials are that different from the markets that didn’t watch, we can conclude that the difference is due to the commercials. The commercials had a large enough effect on voting rates to change voters.
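Those probabilities can be checked directly with R’s pnorm() function, which returns the share of a normal distribution falling below a given number of standard deviations:

```r
# Share of a normal distribution within 2 standard deviations of the mean
within_two <- pnorm(2) - pnorm(-2)
round(within_two, 3)       # 0.954

# Share that falls further than 2 standard deviations from the mean
round(1 - within_two, 3)   # 0.046
```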
So we know the means for the two groups, and we know they differ somewhat. How do we test them to see if they come from the same population?
The easiest way is with what’s called a t-test, which quickly analyzes the means of two groups and determines how many standard deviations they are apart. A t-test can be used to test whether a sample comes from a certain population (marathoners, buses) or if two samples differ significantly. More often than not, you will use them to test whether two samples are different, generally with the goal of understanding whether some policy or intervention or trait makes two samples different - and the hope is to ascribe that difference to what we’re testing.
Essentially, a t-test does the work for us. Interpreting it correctly then becomes all the more important, but implementing it is straightforward with the command t.test(). Within the parentheses, we enter the two data frames and the variable of interest. Here our two data frames are named treatment and control, and the variable of interest is p.
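The call itself is a one-liner. The treatment and control data frames below are filled with made-up stand-in values purely so the sketch runs on its own; in the chapter they come from subsetting RockTheVote.

```r
# Made-up stand-in values; in the chapter, treatment and control
# come from splitting RockTheVote with subset()
treatment <- data.frame(p = c(0.56, 0.54, 0.55, 0.57))
control   <- data.frame(p = c(0.52, 0.51, 0.52, 0.50))

# Two-sample t-test on the variable of interest, p
t.test(treatment$p, control$p)
```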
| Test statistic | df | P value | Alternative hypothesis | mean of x | mean of y |
|---|---|---|---|---|---|
| 1.354 | 83 | 0.1794 | two.sided | 0.5451 | 0.5161 |
We can slowly look over the output and discuss each term that’s produced. These will help to clarify the nuts and bolts of a t-test further.
Let’s start with the headline takeaway. We want to test whether tv commercials encouraging young adults to vote would actually make them vote in higher numbers. We see the two means that we calculated above: 54.5% of registered 18 and 19 year olds voted in communities where the commercials were shown, while in other tv markets only 51.6% did so. Is that difference significant?
The answer to that question is shown below P value, and the result is no. We aren’t very sure that these two groups are different, even though there is a gap between the means. We think that difference might have just been produced by chance, or the luck of the draw in creating different samples. The p value indicates the probability that we could have generated a difference between the means this large by chance alone: .1794, or roughly .18 (18%), and we aren’t willing to declare two groups different when there is an 18% chance the gap is just luck of the draw.
Why are we that uncertain? Because the test statistic isn’t very big, and the test statistic indicates the distance between our two means. The formula for calculating a test statistic is complicated, but we will discuss it. It’s a bit like your mother letting you see everything she has to do to put together Thanksgiving dinner, so that you learn not to complain. We’ll see what R just did for us, so that we can more fully appreciate how nice the software is to us.
x1 and x2 are the means for the two groups we are comparing. In this case, we’ll call everything with a 1 the treatment group, and everything with a 2 the control group.
s1 and s2 are the standard deviations for the treatment and control group.
And n1 and n2 are the number of observations or the sample size of both groups.
That wasn’t so bad. Then we just throw it all together!
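Putting the pieces together, the statistic t.test() reports is the difference in means divided by the standard error: t = (x1 - x2) / sqrt(s1^2/n1 + s2^2/n2) (the Welch version, which is R’s default). A sketch with made-up standard deviations, since the chapter computes the real ones from the data:

```r
# Made-up example numbers: x1/s1/n1 describe the treatment group,
# x2/s2/n2 the control group, as in the text. The means and sample
# sizes follow the chapter; the standard deviations are invented.
x1 <- 0.545; s1 <- 0.10; n1 <- 42
x2 <- 0.516; s2 <- 0.09; n2 <- 43

# Welch's t statistic: difference in means over the standard error
t_stat <- (x1 - x2) / sqrt(s1^2 / n1 + s2^2 / n2)
round(t_stat, 3)
```

With the real standard deviations from the data, this hand calculation reproduces the statistic that t.test() reports.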
That matches. So what did we just do? Essentially, we looked at how large the distance between the means is, relative to the variance in the data of both groups.
One way to intuitively understand what all of that means is to think about what would make the test statistic larger or smaller. A larger difference in means would produce a larger statistic. Less variance, meaning data that is more tightly clustered, would produce a larger t statistic. And a larger sample size would produce a larger t statistic. Once more: a larger difference, less variation in the data, and more data all make us more certain that differences are real.
df stands for degrees of freedom, which is the number of independent data values in our sample.
Finally, we have the alternative hypothesis. Here it says “two.sided”. That means we were testing whether the commercials either increased the share of people voting or decreased it - we were looking at both ends, or two sides, of the distribution. We can specify whether we want to only look at the area above the mean, below the mean, or at both ends as we have done.
Assuming we’re seeking a difference in the means that would only be predicted by chance with a probability of .05, which test is tougher? A two-tailed test. For a two-tailed test we split the .05 between the two tails, with .025 above the mean and .025 below the mean; a one-tailed test places the full .05 above or below the mean, so its cutoff sits closer to the mean. Below, the green lines show the cutoffs at both ends when we look in both tails, whereas the red line shows what happens when we only look in one tail. This is all to explain why the default option is two.sided, and to generally tell you to let the default stand.
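For completeness, here is how the alternative is specified in t.test(); the two small data frames are made-up stand-ins so the sketch runs on its own:

```r
# Made-up stand-in groups, purely so the example is self-contained
treatment <- data.frame(p = c(0.56, 0.54, 0.55, 0.57))
control   <- data.frame(p = c(0.52, 0.51, 0.52, 0.50))

# Default two-sided test: a difference in either direction counts
t.test(treatment$p, control$p)

# One-tailed test: only asks whether the treatment mean is GREATER
t.test(treatment$p, control$p, alternative = "greater")
```

With a positive test statistic, the one-tailed p value is exactly half the two-sided one, which is why the one-tailed test is easier to pass.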
That was a lot. It might help to walk through another example a bit more quickly, where we just lay out the basics of a t-test. We can use some polling data from the 1992 election, which asked people who they voted for along with a few demographic questions.
The vote variable shows who the voters voted for. dem and rep indicate the registered party of voters, and females records their gender. The questions persfinance and natlecon indicate whether the respondent thought their personal finances had improved over the previous 4 years (Bush’s first term) and whether the national economy was improving. The other three variables require more math than we need right now, but they generally record how distant the voters’ views are from the candidates’.
Let’s see whether personal finances drove people to vote for Bush’s re-election.
H0: Personal finances made no difference in the election.
H1: Voters who felt their personal finances improved voted more for George Bush.
The vote variable has three levels.
We need to create a new variable that indicates just whether people voted for or against Bush, because for a t-test to operate we need two groups. Earlier, our two groups were the treatment and the control for whether people watched the tv commercials. Here the two groups are whether people voted for Bush or not.
Rather than splitting the vote92 data set into two halves using subset() (like we did earlier), we can just use the ~ operator. ~ is a tilde mark. ~ can be used to indicate the variable being tested (persfinance) and the variable defining the two groups for our analysis (Bush). This is a little quicker than using subset(), and we’ll use the tilde mark in future work in the course.
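A sketch of the formula interface. The vote92 values below are made up just to make the call runnable, and Bush is assumed to be the 0/1 for-or-against variable described above:

```r
# Made-up stand-in for vote92: Bush is 1 for a Bush voter and 0
# otherwise, and persfinance codes whether finances improved
vote92 <- data.frame(
  Bush        = c(1, 1, 1, 1, 0, 0, 0, 0),
  persfinance = c(1, 1, 0, 1, 0, 0, 1, 0)
)

# The ~ splits persfinance into two groups by the value of Bush,
# without having to build two separate data frames first
t.test(persfinance ~ Bush, data = vote92)
```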
The answer is yes, those who viewed their personal finances as improving were more likely to vote for Bush. The p value indicates that the difference in means between the two groups was highly unlikely to have occurred by chance. It is not impossible, but it is highly unlikely, so we can declare that there is a significant difference.
Let’s think more about the example we just did. With the 1992 election data, we declared that people with improving personal finances were more likely to vote for Bush. Why do we need to test anything about them when we already know who they voted for? It’s because we have a sample of respondents, similar to an exit poll, but what we’re concerned about is all voters. We want to know if people outside the 909 we have data for were more likely to vote for Bush if their personal finances improved. That’s what the test is telling us: that there is a difference in the population (all voters). Just looking at the means between the two groups tells us that there is a difference in our sample. But we rarely care about the sample; what concerns us is projecting, or inferring, the qualities of others we haven’t asked.
That brings us to discuss the .05 test more directly. What would it have meant if the P value had been .06? We would have failed to reject the null. We wouldn’t feel confident enough to say there is a difference in the population. But there would still be a difference in the sample.
Is there a large difference between a P value of .04 and .05 and .06? No, not really, and .05 is a fairly arbitrary standard. Probabilities exist on a continuum without clear cutoffs. A P value of .05 means we’re much more confident than a P value of .6 and a little more confident than a P value of .15. The standard for such a test has to be set somewhere, but we shouldn’t hold .05 up as a gold standard.
What does a probability of .05 mean? Let’s think back to the chapter on probability: it’s equivalent to 1/20. When we set .05 as a standard for hypothesis testing, we’re saying we want to know that there is only a 1 in 20 chance that the difference in voting rates created by the Rock The Vote commercials is random luck, and that 19 out of 20 times it’ll be a true difference between the groups.
So when we get a P value of .05 and reject the null hypothesis, we’re doing so because we think a difference between the two groups is most likely explained by the commercials (or whatever we’re testing). But implicit in a .05 P value is that random chance isn’t impossible, just unlikely. There is still a 1-in-20 chance that the difference in voting rates seen after the commercials occurred by random chance and had nothing to do with the commercials. And much like flipping a coin, if we run 20 separate tests, on average one of them will produce a significant value generated by random chance. That is a false positive, and we can never identify it.
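We can watch this happen by simulating 20 t-tests on samples drawn from the same population, so that any significant result is a false positive by construction (the seed and sample sizes below are arbitrary):

```r
set.seed(42)  # arbitrary seed, purely so the sketch is reproducible

# Run 20 t-tests on pairs of samples drawn from the SAME population;
# the null hypothesis is true every time, so any p value below .05
# is a false positive
p_values <- replicate(20, {
  a <- rnorm(50, mean = 0, sd = 1)
  b <- rnorm(50, mean = 0, sd = 1)
  t.test(a, b)$p.value
})

# How many of the 20 tests came back "significant" at the .05 level?
sum(p_values < 0.05)
```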
One approach then is to set a higher standard. We could only reject a null hypothesis if we get a P value of .01 or lower. That would mean only 1 in 100 significant results would be from chance alone. Or we could use a standard of .001. That would help to reduce false positives, but still not eliminate them.
.05 has been described as the standard for rejecting the null hypothesis here, but it’s really more of a minimum. Scholars prefer their P values to be .01 or lower when possible, but generally accept .05 as indicative of a significant difference.
Let’s go back to how we calculated P values.
How can we get a larger t statistic and be more likely to get a significant result? Having a larger difference in the means is one way; that would make the numerator larger. The other way is to make the denominator smaller, so that whatever the difference in the means is becomes comparatively larger.
If we grow the size of our sample, the n1 and n2, that shrinks the denominator. That makes intuitive sense too. We shouldn’t be very confident if we talk to 10 people and find out that the democrats in the group like cookies more than the republicans do. But if we talked to 10 million people, that would be a lot of evidence to disregard if there was a difference in our means. As we grow our sample size, it becomes more likely that any difference in our means will create a significant finding with a P value of .05 or smaller.
That’s good, right? It means we get more precise results, but it creates another concern. When we use larger quantities of data it becomes necessary to ask whether the differences are large, as well as significant. If I surveyed 10 million voters and found that 72.1 percent of democrats like cookies and only 72.05 percent of republicans like cookies, would the difference be significant?
Yes, that finding is very, very significant. Is it meaningful? Not really. There is a statistical difference between the two groups, but that difference is so small it doesn’t help someone plan a party or pick out desserts. With large enough samples, the color of your shirt might impact pay by .13 cents or putting your left shoe on first might add 79 minutes to your life. But those differences lack the magnitude to be valuable. Thus, as data sets grow in size it becomes important to test not just for significance but also for the magnitude of the differences, to find what’s meaningful. Unfortunately, evaluating whether a difference is large is a matter of opinion, and can’t be tested for with certainty.
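The cookie example can be checked with a quick back-of-the-envelope calculation. With samples this large the t-test is effectively a z-test on two proportions, so here is a sketch using the shares from the text, with ten million respondents assumed in each group:

```r
# Shares from the cookie example: 72.10% of democrats vs 72.05% of
# republicans; the per-group sample size of ten million is assumed
p1 <- 0.7210; p2 <- 0.7205; n <- 1e7

# Standard error of the difference between the two proportions
se <- sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z  <- (p1 - p2) / se

# Two-sided p value for the difference
p_value <- 2 * pnorm(-abs(z))
p_value
```

The p value comes out below .05, so a .05 percentage point gap in cookie preferences is “significant” with enough data, even though it is far too small to matter.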
Those are the basics of hypothesis tests with t-tests. We’ll continue to expand on the tests we can run in the following chapters. Next we’ll talk about a specific instance where we use the tools we’ve discussed: polling.
Hypothesis testing is the act of testing a hypothesis or a supposition in relation to a statistical parameter. Analysts implement hypothesis testing in order to test if a hypothesis is plausible or not.
In data science and statistics, hypothesis testing is an important step, as it involves the verification of an assumption that could help develop a statistical parameter. For instance, a researcher might establish a hypothesis assuming that the average of all odd numbers is an even number.
In order to find the plausibility of this hypothesis, the researcher will have to test it using hypothesis testing methods. Unlike a supposition, which is assumed to stand true on the basis of little or no evidence, hypothesis testing requires plausible evidence in order to establish that a statistical hypothesis is true.
This is where statistics plays an important role. A number of components are involved in this process. But before understanding the process involved in hypothesis testing in research methodology, we shall first understand the types of hypotheses that are involved in the process. Let us get started!
In data sampling, different types of hypothesis are involved in finding whether the tested samples test positive for a hypothesis or not. In this segment, we shall discover the different types of hypotheses and understand the role they play in hypothesis testing.
Alternative Hypothesis (H1) or the research hypothesis states that there is a relationship between two variables (where one variable affects the other). The alternative hypothesis is the main driving force for hypothesis testing.
It implies that the two variables are related to each other and the relationship that exists between them is not due to chance or coincidence.
When the process of hypothesis testing is carried out, the alternative hypothesis is the main subject of the testing process. The analyst intends to test the alternative hypothesis and verifies its plausibility.
The Null Hypothesis (H0) aims to nullify the alternative hypothesis by implying that there exists no relation between two variables in statistics. It states that the effect of one variable on the other is solely due to chance and no empirical cause lies behind it.
The null hypothesis is established alongside the alternative hypothesis and is recognized as important as the latter. In hypothesis testing, the null hypothesis has a major role to play as it influences the testing against the alternative hypothesis.
The Non-directional hypothesis states that the relation between two variables has no direction.
Simply put, it asserts that there exists a relation between two variables, but does not recognize the direction of effect, whether variable A affects variable B or vice versa.
The Directional hypothesis, on the other hand, asserts the direction of effect of the relationship that exists between two variables.
Herein, the hypothesis clearly states that variable A affects variable B, or vice versa.
A statistical hypothesis is a hypothesis that can be verified to be plausible on the basis of statistics.
By using data sampling and statistical knowledge, one can determine the plausibility of a statistical hypothesis and find out if it stands true or not.
Now that we have understood the types of hypotheses and the role they play in hypothesis testing, let us now move on to understand the process in a better manner.
In hypothesis testing, a researcher is first required to establish two hypotheses - alternative hypothesis and null hypothesis in order to begin with the procedure.
To establish these two hypotheses, one is required to study data samples, find a plausible pattern among the samples, and pen down a statistical hypothesis that they wish to test.
A random population of samples can be drawn to begin hypothesis testing. Among the two hypotheses, alternative and null, only one can be supported. However, the presence of both hypotheses is required to make the process successful.
At the end of the hypothesis testing procedure, one of the hypotheses will be rejected and the other one will be supported. Even then, the supported hypothesis is never verified with 100% certainty.
Therefore, a hypothesis can only be supported based on the statistical samples and verified data. Here is a step-by-step guide for hypothesis testing.
First things first, one is required to establish two hypotheses - alternative and null, that will set the foundation for hypothesis testing.
These hypotheses initiate the testing process that involves the researcher working on data samples in order to either support the alternative hypothesis or the null hypothesis.
Once the hypotheses have been formulated, it is now time to generate a testing plan. A testing plan or an analysis plan involves the accumulation of data samples, determining which statistic is to be considered and laying out the sample size.
All these factors are very important while one is working on hypothesis testing.
As soon as a testing plan is ready, it is time to move on to the analysis part. Analysis of data samples involves configuring statistical values of samples, drawing them together, and deriving a pattern out of these samples.
While analyzing the data samples, a researcher needs to determine a set of things -
Significance Level - The significance level is the threshold chosen before the test: the probability of rejecting the null hypothesis when it is actually true.
Testing Method - The testing method involves choosing a sampling distribution and a test statistic for the hypothesis test. There are a number of testing methods that can assist in the analysis of data samples.
Test statistic - Test statistic is a numerical summary of a data set that can be used to perform hypothesis testing.
P-value - The P-value is the probability of obtaining a sample statistic at least as extreme as the observed test statistic, assuming the null hypothesis is true.
The analysis of data samples leads to the inference of results that establishes whether the alternative hypothesis stands true or not. When the P-value is less than the significance level, the null hypothesis is rejected and the alternative hypothesis turns out to be plausible.
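The decision rule in that last sentence can be sketched in a few lines; the p value here is a made-up stand-in from some prior test:

```r
# A minimal sketch of the decision rule described above
alpha   <- 0.05    # significance level, chosen before the test
p_value <- 0.0179  # made-up p value from some hypothetical test

if (p_value < alpha) {
  decision <- "reject the null hypothesis"
} else {
  decision <- "fail to reject the null hypothesis"
}
decision
```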
As we have already looked into different aspects of hypothesis testing, we shall now look into the different methods of hypothesis testing. All in all, there are two common types of hypothesis testing methods. They are as follows -
The frequentist hypothesis, or the traditional approach to hypothesis testing, is a method that draws its conclusions from the current data alone.
The supposed truths and assumptions are based on the current data and a set of 2 hypotheses are formulated. A very popular subtype of the frequentist approach is the Null Hypothesis Significance Testing (NHST).
The NHST approach (involving the null and alternative hypothesis) has been one of the most sought-after methods of hypothesis testing in the field of statistics ever since its inception in the mid-1950s.
A more unconventional and modern method of hypothesis testing, Bayesian hypothesis testing evaluates a particular hypothesis using past data samples, known as the prior probability, together with current data, to arrive at the plausibility of a hypothesis.
The result obtained indicates the posterior probability of the hypothesis. In this method, the researcher relies on the prior probability and the posterior probability to conduct the hypothesis test at hand.
On the basis of this prior probability, the Bayesian approach tests whether a hypothesis is true or false. The Bayes factor, a major component of this method, indicates the likelihood ratio between the null hypothesis and the alternative hypothesis.
The Bayes factor is the indicator of the plausibility of either of the two hypotheses that are established for hypothesis testing.
To conclude, hypothesis testing, a way to verify the plausibility of a supposed assumption, can be done through different methods - the Bayesian approach or the frequentist approach.
While the Bayesian approach relies on the prior probability of data samples, the frequentist approach works from the observed data alone, without prior probabilities. The elements involved in hypothesis testing include the significance level, the p-value, the test statistic, and the method of hypothesis testing.
A significant way to determine whether a hypothesis stands true or not is to verify the data samples and identify the plausible hypothesis among the null hypothesis and alternative hypothesis.
Edward Barroga
1 Department of Medical Education, Showa University School of Medicine, Tokyo, Japan.
2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.
Makiko Arima, Shizuma Tsuchiya, Chikako Kawahara, Yusuke Takamiya
Comprehensive knowledge of quantitative and qualitative research systematizes scholarly research and enhances the quality of research output. Scientific researchers must be familiar with both and skilled in conducting their investigations within the frames of their chosen research type. When conducting quantitative research, scientific researchers should describe an existing theory, generate a hypothesis from the theory, test their hypothesis in novel research, and re-evaluate the theory. Thereafter, they should take a deductive approach in writing up the testing of the established theory based on experiments. When conducting qualitative research, scientific researchers raise a question, answer the question by performing a novel study, and propose a new theory to clarify and interpret the obtained results. Afterward, they should take an inductive approach to writing up the formulation of concepts based on the collected data. When scientific researchers combine the whole spectrum of inductive and deductive research approaches using both quantitative and qualitative research methodologies, they apply mixed-method research. Familiarity and proficiency with these research aspects facilitate the construction of novel hypotheses, development of theories, or refinement of concepts.
Novel research studies are conceptualized by scientific researchers first by asking excellent research questions and developing hypotheses, then answering these questions by testing their hypotheses in ethical research. 1 , 2 , 3 Before they conduct novel research studies, scientific researchers must possess considerable knowledge of both quantitative and qualitative research. 2
In quantitative research, researchers describe existing theories, generate and test a hypothesis in novel research, and re-evaluate existing theories deductively based on their experimental results. 1 , 4 , 5 In qualitative research, scientific researchers raise and answer research questions by performing a novel study, then propose new theories by clarifying their results inductively. 1 , 6
When researchers have limited knowledge of both research types and how to conduct them, the result can be a substandard investigation. Researchers must be familiar with both types of research and skilled in conducting their investigations within the frames of their chosen type of research. Thus, meticulous care is needed when planning quantitative and qualitative research studies to avoid unethical research and poor outcomes.
Understanding the methodological and writing assumptions 7 , 8 underpinning quantitative and qualitative research, especially by non-Anglophone researchers, is essential for their successful conduct. Scientific researchers, especially in the academe, face pressure to publish in international journals 9 where English is the language of scientific communication. 10 , 11 In particular, non-Anglophone researchers face challenges related to linguistic, stylistic, and discourse differences. 11 , 12 Knowing the assumptions of the different types of research will help clarify research questions and methodologies, easing these challenges.
To identify articles relevant to this topic, we adhered to the search strategy recommended by Gasparyan et al. 7 We searched through PubMed, Scopus, Directory of Open Access Journals, and Google Scholar databases using the following keywords: quantitative research, qualitative research, mixed-method research, deductive reasoning, inductive reasoning, study design, descriptive research, correlational research, experimental research, causal-comparative research, quasi-experimental research, historical research, ethnographic research, meta-analysis, narrative research, grounded theory, phenomenology, case study, and field research.
This article aims to provide a comparative appraisal of qualitative and quantitative research for scientific researchers. At present, there is still a need to define the scope of qualitative research, especially its essential elements. 13 Consensus on the critical appraisal tools to assess the methodological quality of qualitative research remains lacking. 14 Framing and testing research questions can be challenging in qualitative research. 2 In the healthcare system, it is essential that research questions address increasingly complex situations. Therefore, research has to be driven by the kinds of questions asked and the corresponding methodologies to answer these questions. 15 The mixed-method approach also needs to be clarified as this would appear to arise from different philosophical underpinnings. 16
This article also aims to discuss how particular types of research should be conducted and how they should be written in adherence to international standards. In the US, Europe, and other countries, responsible research and innovation was conceptualized and promoted with six key action points: engagement, gender equality, science education, open access, ethics, and governance. 17 , 18 International ethics standards in research 19 as well as academic integrity during doctoral trainings are now integral to the research process. 20
This article would be beneficial for researchers in further enhancing their understanding of the theoretical, methodological, and writing aspects of qualitative and quantitative research, and their combination.
Moreover, this article reviews the basic features of both research types and overviews the rationale for their conduct. It imparts information on the most common forms of quantitative and qualitative research, and how they are carried out. These aspects would be helpful for selecting the optimal methodology to use for research based on the researcher’s objectives and topic.
This article also provides information on the strengths and weaknesses of quantitative and qualitative research. Such information would help researchers appreciate the roles and applications of both research types and how to gain from each or their combination. As different research questions require different types of research and analyses, this article is anticipated to assist researchers better recognize the questions answered by quantitative and qualitative research.
Finally, this article would help researchers to have a balanced perspective of qualitative and quantitative research without considering one as superior to the other.
Research can be classified into two general types, quantitative and qualitative. 21 Both types of research entail writing a research question and developing a hypothesis. 22 Quantitative research involves a deductive approach to prove or disprove the hypothesis that was developed, whereas qualitative research involves an inductive approach to create a hypothesis. 23 , 24 , 25 , 26
In quantitative research, the hypothesis is stated before testing. In qualitative research, the hypothesis is developed through inductive reasoning based on the data collected. 27 , 28 In terms of data types and their analysis, qualitative research usually deals with data in the form of words, whereas quantitative research more commonly uses numbers. 29
Quantitative research usually includes descriptive, correlational, causal-comparative / quasi-experimental, and experimental research. 21 On the other hand, qualitative research usually encompasses historical, ethnographic, meta-analysis, narrative, grounded theory, phenomenology, case study, and field research. 23 , 25 , 28 , 30 A summary of the features, writing approach, and examples of published articles for each type of qualitative and quantitative research is shown in Table 1 . 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 , 43
Research | Type | Methodology feature | Research writing pointers | Example of published article |
---|---|---|---|---|
Quantitative | Descriptive research | Describes status of identified variable to provide systematic information about phenomenon | Explain how a situation, sample, or variable was examined or observed as it occurred without investigator interference | Östlund AS, Kristofferzon ML, Häggström E, Wadensten B. Primary care nurses’ performance in motivational interviewing: a quantitative descriptive study. 2015;16(1):89. |
 | Correlational research | Determines and interprets extent of relationship between two or more variables using statistical data | Describe the establishment of reliability and validity, converging evidence, relationships, and predictions based on statistical data | Díaz-García O, Herranz Aguayo I, Fernández de Castro P, Ramos JL. Lifestyles of Spanish elders from supervened SARS-CoV-2 variant onwards: A correlational research on life satisfaction and social-relational praxes. 2022;13:948745. |
 | Causal-comparative/Quasi-experimental research | Establishes cause-effect relationships among variables; uses non-randomly assigned groups where it is not logically feasible to conduct a randomized controlled trial | Write about comparisons of the identified groups exposed to the treatment variable with unexposed groups; provide clear descriptions of the causes determined after making data analyses and conclusions, and of the known and unknown variables that could potentially affect the outcome | Causal-comparative: Sharma MK, Adhikari R. Effect of school water, sanitation, and hygiene on health status among basic level students in Nepal. Environ Health Insights 2022;16:11786302221095030. Quasi-experimental: Tuna F, Tunçer B, Can HB, Süt N, Tuna H. Immediate effect of Kinesio taping® on deep cervical flexor endurance: a non-controlled, quasi-experimental pre-post quantitative study. 2022;40(6):528-35. |
 | Experimental research | Establishes cause-effect relationship among group of variables making up a study using scientific method | Describe how an independent variable was manipulated to determine its effects on dependent variables; explain the random assignments of subjects to experimental treatments | Hyun C, Kim K, Lee S, Lee HH, Lee J. Quantitative evaluation of the consciousness level of patients in a vegetative state using virtual reality and an eye-tracking system: a single-case experimental design study. 2022;32(10):2628-45. |
Qualitative | Historical research | Describes past events, problems, issues, and facts | Write the research based on historical reports | Silva Lima R, Silva MA, de Andrade LS, Mello MA, Goncalves MF. Construction of professional identity in nursing students: qualitative research from the historical-cultural perspective. 2020;28:e3284. |
 | Ethnographic research | Develops in-depth analytical descriptions of current systems, processes, and phenomena or understandings of shared beliefs and practices of groups or culture | Compose a detailed report of the interpreted data | Gammeltoft TM, Huyền Diệu BT, Kim Dung VT, Đức Anh V, Minh Hiếu L, Thị Ái N. Existential vulnerability: an ethnographic study of everyday lives with diabetes in Vietnam. 2022;29(3):271-88. |
 | Meta-analysis | Accumulates experimental and correlational results across independent studies using statistical method | Specify the topic, follow reporting guidelines, describe the inclusion criteria, identify key variables, explain the systematic search of databases, and detail the data extraction | Oeljeklaus L, Schmid HL, Kornfeld Z, Hornberg C, Norra C, Zerbe S, et al. Therapeutic landscapes and psychiatric care facilities: a qualitative meta-analysis. 2022;19(3):1490. |
 | Narrative research | Studies an individual and gathers data by collecting stories for constructing a narrative about the individual’s experiences and their meanings | Write an in-depth narration of events or situations focused on the participants | Anderson H, Stocker R, Russell S, Robinson L, Hanratty B, Robinson L, et al. Identity construction in the very old: a qualitative narrative study. 2022;17(12):e0279098. |
 | Grounded theory | Engages in inductive ground-up or bottom-up process of generating theory from data | Write the research as a theory and a theoretical model; describe the data analysis procedure of theoretical coding for developing hypotheses based on what the participants say | Amini R, Shahboulaghi FM, Tabrizi KN, Forouzan AS. Social participation among Iranian community-dwelling older adults: a grounded theory study. 2022;11(6):2311-9. |
 | Phenomenology | Attempts to understand subjects’ perspectives | Write the research report by contextualizing and reporting the subjects’ experiences | Green G, Sharon C, Gendler Y. The communication challenges and strength of nurses’ intensive corona care during the two first pandemic waves: a qualitative descriptive phenomenology study. 2022;10(5):837. |
 | Case study | Analyzes collected data by detailed identification of themes and development of narratives written as in-depth study of lessons from case | Write the report as an in-depth study of possible lessons learned from the case | Horton A, Nugus P, Fortin MC, Landsberg D, Cantarovich M, Sandal S. Health system barriers and facilitators to living donor kidney transplantation: a qualitative case study in British Columbia. 2022;10(2):E348-56. |
 | Field research | Directly investigates and extensively observes social phenomenon in natural environment without implantation of controls or experimental conditions | Describe the phenomenon under the natural environment over time | Buus N, Moensted M. Collectively learning to talk about personal concerns in a peer-led youth program: a field study of a community of practice. 2022;30(6):e4425-32. |
Deductive approach.
The deductive approach is used to prove or disprove the hypothesis in quantitative research. 21 , 25 Using this approach, researchers 1) make observations about an unclear or new phenomenon, 2) investigate the current theory surrounding the phenomenon, and 3) hypothesize an explanation for the observations. Afterwards, researchers will 4) predict outcomes based on the hypotheses, 5) formulate a plan to test the prediction, and 6) collect and process the data (or revise the hypothesis if the original hypothesis was false). Finally, researchers will then 7) verify the results, 8) make the final conclusions, and 9) present and disseminate their findings ( Fig. 1A ).
The common types of quantitative research include (a) descriptive, (b) correlational, c) experimental research, and (d) causal-comparative/quasi-experimental. 21
Descriptive research is conducted and written by describing the status of an identified variable to provide systematic information about a phenomenon. A hypothesis is developed and tested after data collection, analysis, and synthesis. This type of research attempts to factually present comparisons and interpretations of findings based on analyses of the characteristics, progression, or relationships of a certain phenomenon, without manipulating the employed variables or controlling the involved conditions. 44 Here, the researcher examines, observes, and describes a situation, sample, or variable as it occurs without investigator interference. 31 , 45 To be meaningful, the systematic collection of information requires careful selection of study units and precise measurement of individual variables, 21 often expressed as ranges, means, frequencies, and/or percentages. 31 , 45 Statistical methods such as ANOVA, Student’s t-test, or the Pearson coefficient method have been used to analyze descriptive research data. 46
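The frequencies and percentages mentioned above can be illustrated with a minimal sketch. The survey responses and the function name below are hypothetical, invented purely for illustration:

```python
from collections import Counter

def descriptive_summary(observations):
    """Frequency and percentage for each category of a variable."""
    counts = Counter(observations)
    total = len(observations)
    return {category: (n, round(100 * n / total, 1))
            for category, n in counts.items()}

# Hypothetical survey responses (illustration only).
responses = ["agree", "agree", "neutral", "disagree", "agree"]
print(descriptive_summary(responses))
# → {'agree': (3, 60.0), 'neutral': (1, 20.0), 'disagree': (1, 20.0)}
```

Such a tabulation is descriptive in the strict sense: it summarizes the sample as observed, without manipulating any variable.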
Correlational research is performed by determining and interpreting the extent of a relationship between two or more variables using statistical data. This involves recognizing data trends and patterns without necessarily proving their causes. The researcher studies only the data, relationships, and distributions of variables in a natural setting, but does not manipulate them. 21 , 45 Afterwards, the researcher establishes reliability and validity, provides converging evidence, describes relationship, and makes predictions. 47
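The extent of a relationship between two variables is commonly quantified with the Pearson correlation coefficient. The sketch below computes it from first principles; the variables (study hours and test scores) and the function name are hypothetical, for illustration only:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: study hours vs. test scores (illustration only).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
print(round(pearson_r(hours, scores), 3))
# → 0.993
```

A coefficient near +1 or -1 indicates a strong linear trend, but, as the text notes, it does not by itself prove a causal relationship.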
Experimental research is usually referred to as true experimentation. The researcher establishes the cause-effect relationship among a group of variables making up a study using the scientific method or process. This type of research attempts to identify the causal relationships between variables through experiments by arbitrarily controlling the conditions or manipulating the variables used. 44 The scientific manuscript would include an explanation of how the independent variable was manipulated to determine its effects on the dependent variables. The write-up would also describe the random assignments of subjects to experimental treatments. 21
Causal-comparative/quasi-experimental research closely resembles true experimentation but is conducted by establishing the cause-effect relationships among variables. It may also be conducted to establish the cause or consequences of differences that already exist between or among groups of individuals. 48 This type of research compares outcomes between intervention groups in which participants are not randomized to their respective interventions because of ethics- or feasibility-related reasons. 49 As in true experiments, the researcher identifies and measures the effects of the independent variable on the dependent variable. However, unlike true experiments, the researchers do not manipulate the independent variable.
In quasi-experimental research, naturally formed or pre-existing groups that are not randomly assigned are used, particularly when a randomized controlled trial would be unethical or infeasible. 50 The researcher compares the groups that have been exposed to the treatment variable with the unexposed control groups. The causes are determined and described after data analysis, after which conclusions are made. The known and unknown variables that could still affect the outcome are also included. 7
Inductive approach.
Qualitative research involves an inductive approach to develop a hypothesis. 21 , 25 Using this approach, researchers answer research questions and develop new theories, but they do not test hypotheses or previous theories. The researcher seldom examines the effectiveness of an intervention, but rather explores the perceptions, actions, and feelings of participants using interviews, content analysis, observations, or focus groups. 25 , 45 , 51
Qualitative research seeks to elucidate about the lives of people, including their lived experiences, behaviors, attitudes, beliefs, personality characteristics, emotions, and feelings. 27 , 30 It also explores societal, organizational, and cultural issues. 30 This type of research provides a good story mimicking an adventure which results in a “thick” description that puts readers in the research setting. 52
The qualitative research questions are open-ended, evolving, and non-directional. 26 The research design is usually flexible and iterative, commonly employing purposive sampling. The sample size depends on theoretical saturation, and data is collected using in-depth interviews, focus groups, and observations. 27
In various instances, excellent qualitative research may offer insights that quantitative research cannot. Moreover, qualitative research approaches can describe the ‘lived experience’ perspectives of patients, practitioners, and the public. 53 Interestingly, recent developments have looked into the use of technology in shaping qualitative research protocol development, data collection, and analysis phases. 54
Qualitative research employs various techniques, including conversational and discourse analysis, biographies, interviews, case-studies, oral history, surveys, documentary and archival research, audiovisual analysis, and participant observations. 26
To conduct qualitative research, investigators 1) identify a general research question, 2) choose the main methods, sites, and subjects, and 3) determine methods of data documentation and of access to subjects. Researchers also 4) decide on the various aspects of collecting data (e.g., which questions to ask, which behaviors to observe, which issues to look for in documents, and how much data to collect in terms of the number of questions, interviews, or observations), 5) clarify researchers’ roles, and 6) evaluate the study’s ethical implications in terms of confidentiality and sensitivity. Afterwards, researchers 7) collect data until saturation, 8) interpret data by identifying concepts and theories, and 9) revise the research question if necessary and form hypotheses. In the final stages of the research, investigators 10) collect and verify data to address revisions, 11) complete the conceptual and theoretical framework to finalize their findings, and 12) present and disseminate findings ( Fig. 1B ).
The different types of qualitative research include (a) historical research, (b) ethnographic research, (c) meta-analysis, (d) narrative research, (e) grounded theory, (f) phenomenology, (g) case study, and (h) field research. 23 , 25 , 28 , 30
Historical research is conducted by describing past events, problems, issues, and facts. The researcher gathers data from written or oral descriptions of past events and attempts to recreate the past without interpreting the events and their influence on the present. 6 Data is collected using documents, interviews, and surveys. 55 The researcher analyzes these data by describing the development of events and writes the research based on historical reports. 2
Ethnographic research is performed by observing everyday life details as they naturally unfold. 2 It can also be conducted by developing in-depth analytical descriptions of current systems, processes, and phenomena or by understanding the shared beliefs and practices of a particular group or culture. 21 The researcher collects extensive narrative non-numerical data based on many variables over an extended period, in a natural setting within a specific context. To do this, the researcher uses interviews, observations, and active participation. These data are analyzed by describing and interpreting them and developing themes. A detailed report of the interpreted data is then provided. 2 The researcher immerses himself/herself into the study population and describes the actions, behaviors, and events from the perspective of someone involved in the population. 23 As examples of its application, ethnographic research has helped to understand a cultural model of family and community nursing during the coronavirus disease 2019 outbreak. 56 It has also been used to observe the organization of people’s environment in relation to cardiovascular disease management in order to clarify people’s real expectations during follow-up consultations, possibly contributing to the development of innovative solutions in care practices. 57
Meta-analysis is carried out by accumulating experimental and correlational results across independent studies using a statistical method. 21 The report is written by specifying the topic and meta-analysis type. In the write-up, reporting guidelines are followed, which include description of inclusion criteria and key variables, explanation of the systematic search of databases, and details of data extraction. Meta-analysis offers in-depth data gathering and analysis to achieve deeper inner reflection and phenomenon examination. 58
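The statistical accumulation of results across independent studies can be sketched with inverse-variance (fixed-effect) pooling, one common meta-analytic approach. The effect sizes, variances, and function name below are hypothetical, for illustration only:

```python
def fixed_effect_pool(estimates, variances):
    """Inverse-variance weighted pooled estimate and its variance."""
    weights = [1 / v for v in variances]          # more precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1 / sum(weights)                 # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical effect sizes and variances from three studies (illustration only).
effects = [0.30, 0.45, 0.38]
variances = [0.04, 0.09, 0.02]
pooled, var = fixed_effect_pool(effects, variances)
print(round(pooled, 3), round(var, 3))
# → 0.366 0.012
```

Note that the pooled variance is smaller than any single study's variance, which is the statistical payoff of accumulating results; a random-effects model would be preferred when studies are heterogeneous.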
Narrative research is performed by collecting stories for constructing a narrative about an individual’s experiences and the meanings attributed to them by the individual. 9 It aims to hear the voice of individuals through their account or experiences. 17 The researcher usually conducts interviews and analyzes data by storytelling, content review, and theme development. The report is written as an in-depth narration of events or situations focused on the participants. 2 , 59 Narrative research weaves together sequential events from one or two individuals to create a “thick” description of a cohesive story or narrative. 23 It facilitates understanding of individuals’ lives based on their own actions and interpretations. 60
Grounded theory is conducted by engaging in an inductive ground-up or bottom-up strategy of generating a theory from data. 24 The researcher incorporates deductive reasoning when using constant comparisons. Patterns are detected in observations and then a working hypothesis is created which directs the progression of inquiry. The researcher collects data using interviews and questionnaires. These data are analyzed by coding the data, categorizing themes, and describing implications. The research is written as a theory and theoretical models. 2 In the write-up, the researcher describes the data analysis procedure (i.e., theoretical coding used) for developing hypotheses based on what the participants say. 61 As an example, a qualitative approach has been used to understand the process of skill development of a nurse preceptor in clinical teaching. 62 A researcher can also develop a theory using the grounded theory approach to explain the phenomena of interest by observing a population. 23
Phenomenology is carried out by attempting to understand the subjects’ perspectives. This approach is pertinent in social work research where empathy and perspective are keys to success. 21 Phenomenology studies an individual’s lived experience in the world. 63 The researcher collects data by interviews, observations, and surveys. 16 These data are analyzed by describing experiences, examining meanings, and developing themes. The researcher writes the report by contextualizing and reporting the subjects’ experience. This research approach describes and explains an event or phenomenon from the perspective of those who have experienced it. 23 Phenomenology understands the participants’ experiences as conditioned by their worldviews. 52 It is suitable for a deeper understanding of non-measurable aspects related to the meanings and senses attributed by individuals’ lived experiences. 60
Case study is conducted by collecting data through interviews, observations, document content examination, and physical inspections. The researcher analyzes the data through a detailed identification of themes and the development of narratives. The report is written as an in-depth study of possible lessons learned from the case. 2
Field research is performed using a group of methodologies for undertaking qualitative inquiries. The researcher goes directly to the social phenomenon being studied and observes it extensively. In the write-up, the researcher describes the phenomenon under the natural environment over time with no implantation of controls or experimental conditions. 45
Scientific researchers must be aware of the differences between quantitative and qualitative research in terms of their working mechanisms to better understand their specific applications. This knowledge will be of significant benefit to researchers, especially during the planning process, to ensure that the appropriate type of research is undertaken to fulfill the research aims.
In terms of quantitative research data evaluation, four well-established criteria are used: internal validity, external validity, reliability, and objectivity. 23 The respective correlating concepts in qualitative research data evaluation are credibility, transferability, dependability, and confirmability. 30 Regarding write-up, quantitative research papers are usually shorter than their qualitative counterparts, which allows the latter to pursue a deeper understanding and thus produce the so-called “thick” description. 29
Interestingly, a major characteristic of qualitative research is that the research process is reversible and the research methods can be modified. This is in contrast to quantitative research in which hypothesis setting and testing take place unidirectionally. This means that in qualitative research, the research topic and question may change during literature analysis, and that the theoretical and analytical methods could be altered during data collection. 44
Quantitative research focuses on natural, quantitative, and objective phenomena, whereas qualitative research focuses on social, qualitative, and subjective phenomena. 26 Quantitative research answers the questions “what?” and “when?,” whereas qualitative research answers the questions “why?,” “how?,” and “how come?.” 64
Perhaps the most important distinction between quantitative and qualitative research lies in the nature of the data being investigated and analyzed. Quantitative research focuses on statistical, numerical, and quantitative aspects of phenomena and employs standardized data collection and analysis, whereas qualitative research focuses on the humanistic, descriptive, and qualitative aspects of phenomena. 26 , 28
The aims and types of inquiries determine the difference between quantitative and qualitative research. In quantitative research, statistical data and a structured process are usually employed by the researcher. Quantitative research usually suggests quantities (i.e., numbers). 65 On the other hand, researchers typically use opinions, reasons, verbal statements, and an unstructured process in qualitative research. 63 Qualitative research is more related to quality or kind. 65
In quantitative research, the researcher employs a structured process for collecting quantifiable data. Often, a close-ended questionnaire is used, with response categories designed for each question so that values can be assigned and analyzed quantitatively using a common scale. 66 Quantitative research data is processed consecutively from data management, to data analysis, and finally to data interpretation. Data should be free from errors and missing values. In data management, variables are defined and coded. In data analysis, statistics (e.g., descriptive, inferential) as well as measures of central tendency (i.e., mean, median, mode), spread (standard deviation), and parameter estimation (confidence intervals) are used. 67
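The central tendency, spread, and parameter estimation measures listed above can be sketched with Python's standard statistics module. The coded responses are hypothetical, and the confidence interval uses a rough normal approximation (a small sample like this would properly call for the t distribution):

```python
import statistics

# Hypothetical coded questionnaire responses on a 1-5 scale (illustration only).
data = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]

mean = statistics.mean(data)      # central tendency
median = statistics.median(data)
mode = statistics.mode(data)
sd = statistics.stdev(data)       # spread (sample standard deviation)

# Normal-approximation 95% confidence interval for the mean (rough sketch).
half_width = 1.96 * sd / len(data) ** 0.5
ci = (mean - half_width, mean + half_width)

print(mean, median, mode, round(sd, 2))
# → 3.7 4.0 4 0.95
```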
In qualitative research, the researcher uses an unstructured process for collecting data. These non-statistical data may be in the form of statements, stories, or long explanations. Various responses according to respondents may not be easily quantified using a common scale. 66
Composing a qualitative research paper resembles writing a quantitative research paper. Both papers consist of a title, an abstract, an introduction, objectives, methods, findings, and discussion. However, a qualitative research paper is less regimented than a quantitative research paper. 27
Quantitative research can be considered as a hypothesis-testing design as it involves quantification, statistics, and explanations. It flows from theory to data (i.e., deductive), focuses on objective data, and applies theories to address problems. 45 , 68 It collects numerical or statistical data; answers questions such as how many, how often, how much; uses questionnaires, structured interview schedules, or surveys 55 as data collection tools; analyzes quantitative data in terms of percentages, frequencies, statistical comparisons, graphs, and tables showing statistical values; and reports the final findings in the form of statistical information. 66 It uses variable-based models from individual cases and findings are stated in quantified sentences derived by deductive reasoning. 24
In quantitative research, a phenomenon is investigated in terms of the relationship between an independent variable and a dependent variable which are numerically measurable. The research objective is to statistically test whether the hypothesized relationship is true. 68 Here, the researcher studies what others have performed, examines current theories of the phenomenon being investigated, and then tests hypotheses that emerge from those theories. 4
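One simple way to statistically test whether a hypothesized relationship is true is a permutation test on the difference in group means. The sketch below assumes a treated-versus-control comparison; the outcome scores and the function name are hypothetical, for illustration only:

```python
import random

def permutation_p_value(treated, control, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = treated + control
    k = len(treated)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling under the null hypothesis
        diff = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical outcome scores for treated vs. control groups (illustration only).
p = permutation_p_value([8, 9, 7, 10, 9], [5, 6, 7, 5, 6])
print(p < 0.05)  # a small p-value -> reject the null hypothesis of no difference
```

The permutation approach needs no distributional assumption: if the null hypothesis of no relationship were true, relabeling the groups at random should produce differences as large as the observed one quite often.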
Quantitative hypothesis-testing research has certain limitations. These limitations include (a) problems with selection of meaningful independent and dependent variables, (b) the inability to reflect subjective experiences as variables since variables are usually defined numerically, and (c) the need to state a hypothesis before the investigation starts. 61
Qualitative research can be considered as a hypothesis-generating design since it involves understanding and descriptions in terms of context. It flows from data to theory (i.e., inductive), focuses on observation, and examines what happens in specific situations with the aim of developing new theories based on the situation. 45 , 68 This type of research (a) collects qualitative data (e.g., ideas, statements, reasons, characteristics, qualities), (b) answers questions such as what, why, and how, (c) uses interviews, observations, or focus-group discussions as data collection tools, (d) analyzes data by discovering patterns of changes, causal relationships, or themes in the data; and (e) reports the final findings as descriptive information. 61 Qualitative research favors case-based models from individual characteristics, and findings are stated using context-dependent existential sentences that are justifiable by inductive reasoning. 24
In qualitative research, texts and interviews are analyzed and interpreted to discover meaningful patterns characteristic of a particular phenomenon. 61 Here, the researcher starts with a set of observations and then moves from particular experiences to a more general set of propositions about those experiences. 4
Qualitative hypothesis-generating research involves collecting interview data from study participants regarding a phenomenon of interest, and then using what they say to develop hypotheses. It involves the process of questioning more than obtaining measurements; it generates hypotheses using theoretical coding. 61 When large interview teams are used, balancing autonomy and collaboration is the key to promoting high-level qualitative research, cohesion in team methods, and successful research outcomes. 69
Qualitative data may also include observed behavior, participant observation, media accounts, and cultural artifacts. 61 Focus group interviews are usually conducted, audiotaped or videotaped, and transcribed. Afterwards, the transcript is analyzed by several researchers.
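As a loose illustration of how transcript analysis might be supported computationally (not as a substitute for researcher-driven coding), the sketch below tallies code-book keywords per theme. The code book, keywords, transcript, and function name are all invented for illustration:

```python
from collections import Counter

# Hypothetical code book mapping themes to indicator keywords (illustration only).
CODEBOOK = {
    "workload": {"busy", "overtime", "exhausted"},
    "support": {"team", "mentor", "help"},
}

def tally_themes(transcript):
    """Count occurrences of code-book keywords per theme in a transcript."""
    words = transcript.lower().split()
    counts = Counter()
    for theme, keywords in CODEBOOK.items():
        counts[theme] = sum(1 for w in words if w.strip(".,!?") in keywords)
    return dict(counts)

transcript = ("We were busy every day, working overtime. "
              "My mentor would help, and the team would help too.")
print(tally_themes(transcript))
# → {'workload': 2, 'support': 4}
```

In practice such counts would only flag passages for the researchers to read in context; the interpretive work of theme development remains with the analysts.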
Qualitative research also involves scientific narratives and the analysis and interpretation of textual or numerical data (or both), mostly from conversations and discussions. Such an approach uncovers meaningful patterns that describe a particular phenomenon. 2 Thus, qualitative research requires skills in grasping and contextualizing data, as well as in communicating data analysis and results in a scientific manner. The reflective process of the inquiry underscores the strengths of a qualitative research approach. 2
When both quantitative and qualitative research methods are used in the same research, mixed-method research is applied. 25 This combination provides a complete view of the research problem and achieves triangulation to corroborate findings, complementarity to clarify results, expansion to extend the study’s breadth, and explanation to elucidate unexpected results. 29
Moreover, quantitative and qualitative findings are integrated to address the weakness of both research methods 29 , 66 and to have a more comprehensive understanding of the phenomenon spectrum. 66
For data analysis in mixed-method research, real non-quantitized qualitative data and quantitative data must both be analyzed. 70 The data obtained from quantitative analysis can be further expanded and deepened by qualitative analysis. 23
In terms of assessment criteria, Hammersley 71 opined that qualitative and quantitative findings should be judged using the same standards of validity and value-relevance. Both approaches can be mutually supportive. 52
Quantitative and qualitative research must be carefully studied and conducted by scientific researchers to avoid unethical research and inadequate outcomes. Quantitative research involves a deductive process wherein a research question is answered with a hypothesis that describes the relationship between independent and dependent variables, and the testing of the hypothesis. This investigation can be aptly termed as hypothesis-testing research involving the analysis of hypothesis-driven experimental studies resulting in a test of significance. Qualitative research involves an inductive process wherein a research question is explored to generate a hypothesis, which then leads to the development of a theory. This investigation can be aptly termed as hypothesis-generating research. When the whole spectrum of inductive and deductive research approaches is combined using both quantitative and qualitative research methodologies, mixed-method research is applied, and this can facilitate the construction of novel hypotheses, development of theories, or refinement of concepts.
Disclosure: The authors have no potential conflicts of interest to disclose.
Author Contributions:
Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across groups of people or explaining a particular phenomenon.
Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Muijs, Daniel. Doing Quantitative Research in Education with SPSS . 2nd edition. London: SAGE Publications, 2010.
Your goal in conducting a quantitative research study is to determine the relationship between one thing [an independent variable] and another [a dependent or outcome variable] within a population. Quantitative research designs are either descriptive [subjects usually measured once] or experimental [subjects measured before and after a treatment]. A descriptive study establishes only associations between variables; an experimental study establishes causality.
Quantitative research deals in numbers, logic, and an objective stance. Quantitative research focuses on numeric and unchanging data and detailed, convergent reasoning rather than divergent reasoning [i.e., the generation of a variety of ideas about a research problem in a spontaneous, free-flowing manner].
Its main characteristics are:
The overarching aim of a quantitative research study is to classify features, count them, and construct statistical models in an attempt to explain what is observed.
Things to keep in mind when reporting the results of a study using quantitative methods:
NOTE: When using pre-existing statistical data gathered and made available by anyone other than yourself [e.g., government agency], you still must report on the methods that were used to gather the data and describe any missing data that exists and, if there is any, provide a clear explanation why the missing data does not undermine the validity of your final analysis.
Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods . 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches . 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Quantitative Research Methods. Writing@CSU. Colorado State University; Singh, Kultar. Quantitative Social Research Methods . Los Angeles, CA: Sage, 2007.
Before designing a quantitative research study, you must decide whether it will be descriptive or experimental because this will dictate how you gather, analyze, and interpret the results. A descriptive study is governed by the following rules: subjects are generally measured once; the intention is to only establish associations between variables; and, the study may include a sample population of hundreds or thousands of subjects to ensure that a valid estimate of a generalized relationship between variables has been obtained. An experimental design includes subjects measured before and after a particular treatment, the sample population may be very small and purposefully chosen, and it is intended to establish causality between variables. Introduction The introduction to a quantitative study is usually written in the present tense and from the third person point of view. It covers the following information:
Methodology The methods section of a quantitative study should describe how each objective of your study will be achieved. Be sure to provide enough detail so that the reader can make an informed assessment of the methods used to obtain results associated with the research problem. The methods section should be presented in the past tense.
Results The findings of your study should be written objectively and in a succinct and precise format. In quantitative studies, it is common to use graphs, tables, charts, and other non-textual elements to help the reader understand the data. Make sure that non-textual elements do not stand in isolation from the text but are used to supplement the overall description of the results and to help clarify key points being made.
Discussion Discussions should be analytic, logical, and comprehensive. The discussion should meld your findings with those identified in the literature review and place them within the context of the theoretical framework underpinning the study. The discussion should be presented in the present tense.
Conclusion End your study by summarizing the topic and providing a final comment and assessment of the study.
Black, Thomas R. Doing Quantitative Research in the Social Sciences: An Integrated Approach to Research Design, Measurement and Statistics . London: Sage, 1999; Gay,L. R. and Peter Airasain. Educational Research: Competencies for Analysis and Applications . 7th edition. Upper Saddle River, NJ: Merril Prentice Hall, 2003; Hector, Anestine. An Overview of Quantitative Research in Composition and TESOL . Department of English, Indiana University of Pennsylvania; Hopkins, Will G. “Quantitative Research Design.” Sportscience 4, 1 (2000); "A Strategy for Writing Up Research Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper." Department of Biology. Bates College; Nenty, H. Johnson. "Writing a Quantitative Research Thesis." International Journal of Educational Science 1 (2009): 19-32; Ouyang, Ronghua (John). Basic Inquiry of Quantitative Research . Kennesaw State University.
Quantitative researchers try to recognize and isolate specific variables contained within the study framework; seek correlations, relationships, and causality; and attempt to control the environment in which the data is collected, to avoid the risk that variables other than the one being studied account for the relationships identified.
Among the specific strengths of using quantitative methods to study social science research problems:
Babbie, Earl R. The Practice of Social Research . 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods . 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches . 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Singh, Kultar. Quantitative Social Research Methods . Los Angeles, CA: Sage, 2007.
Quantitative methods presume to have an objective approach to studying research problems, where data is controlled and measured, to address the accumulation of facts, and to determine the causes of behavior. As a consequence, the results of quantitative research may be statistically significant but are often humanly insignificant.
Some specific limitations associated with using quantitative methods to study research problems in the social sciences include:
Finding Examples of How to Apply Different Types of Research Methods
SAGE publications is a major publisher of studies about how to design and conduct research in the social and behavioral sciences. Their SAGE Research Methods Online and Cases database includes contents from books, articles, encyclopedias, handbooks, and videos covering social science research design and methods including the complete Little Green Book Series of Quantitative Applications in the Social Sciences and the Little Blue Book Series of Qualitative Research techniques. The database also includes case studies outlining the research methods used in real research projects. This is an excellent source for finding definitions of key terms and descriptions of research design and practice, techniques of data gathering, analysis, and reporting, and information about theories of research [e.g., grounded theory]. The database covers both qualitative and quantitative research methods as well as mixed methods approaches to conducting research.
Quantitative research methods are concerned with the planning, design, and implementation of strategies to collect and analyze data. Descartes, the seventeenth-century philosopher, suggested that how the results are achieved is often more important than the results themselves, as the journey taken along the research path is a journey of discovery. High-quality quantitative research is characterized by the attention given to the methods and the reliability of the tools used to collect the data. The ability to critique research in a systematic way is an essential component of a health professional’s role in order to deliver high quality, evidence-based healthcare. This chapter is intended to provide a simple overview of the way new researchers and health practitioners can understand and employ quantitative methods. The chapter offers practical, realistic guidance in a learner-friendly way and uses a logical sequence to understand the process of hypothesis development, study design, data collection and handling, and finally data analysis and interpretation.
Babbie ER. The practice of social research. 14th ed. Belmont: Wadsworth Cengage; 2016.
Descartes R (1637). Cited in: Halverson W. A concise introduction to philosophy. 3rd ed. New York: Random House; 1976.
Doll R, Hill AB. The mortality of doctors in relation to their smoking habits. BMJ. 1954;328(7455):1529–33. https://doi.org/10.1136/bmj.328.7455.1529 .
Liamputtong P. Research methods in health: foundations for evidence-based practice. 3rd ed. Melbourne: Oxford University Press; 2017.
McNabb DE. Research methods in public administration and nonprofit management: quantitative and qualitative approaches. 2nd ed. New York: Armonk; 2007.
Merriam-Webster. Dictionary. http://www.merriam-webster.com. Accessed 20 December 2017.
Olesen Larsen P, von Ins M. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics. 2010;84(3):575–603.
Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619–25. https://doi.org/10.1097/PRS.0b013e3181de24bc .
Petrie A, Sabin C. Medical statistics at a glance. 2nd ed. London: Blackwell Publishing; 2005.
Portney LG, Watkins MP. Foundations of clinical research: applications to practice. 3rd ed. New Jersey: Pearson Publishing; 2009.
Sheehan J. Aspects of research methodology. Nurse Educ Today. 1986;6:193–203.
Wilson LA, Black DA. Health, science research and research methods. Sydney: McGraw Hill; 2013.
Authors and affiliations.
School of Science and Health, Western Sydney University, Penrith, NSW, Australia
Leigh A. Wilson
Faculty of Health Science, Discipline of Behavioural and Social Sciences in Health, University of Sydney, Lidcombe, NSW, Australia
Correspondence to Leigh A. Wilson .
Editors and affiliations.
Pranee Liamputtong
© 2019 Springer Nature Singapore Pte Ltd.
Cite this entry.
Wilson, L.A. (2019). Quantitative Research. In: Liamputtong, P. (eds) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_54
DOI: https://doi.org/10.1007/978-981-10-5251-4_54
Published: 13 January 2019
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-5250-7
Online ISBN: 978-981-10-5251-4
Does the thought of quantitative data analysis bring back the horrors of math classes? We get it.
But conducting quantitative data analysis doesn’t have to be hard with the right tools. Want to learn how to turn raw numbers into actionable insights on how to improve your product?
In this article, we explore what quantitative data analysis is, the difference between quantitative and qualitative data analysis, and statistical methods you can apply to your data. We also walk you through the steps you can follow to analyze quantitative information, and how Userpilot can help you streamline the product analytics process. Let’s get started.
Quantitative data analysis is about applying statistical analysis methods to define, summarize, and contextualize numerical data. In short, it’s about turning raw numbers and data into actionable insights.
The analysis will vary depending on the research questions and the collected data (more on this below).
The main difference between these forms of analysis lies in the collected data. Quantitative data is numerical or easily quantifiable. For example, the answers to a customer satisfaction score (CSAT) survey are quantitative since you can count the number of people who answered “very satisfied”.
Qualitative feedback, on the other hand, analyzes information that requires interpretation. For instance, evaluating graphics, videos, text-based answers, or impressions.
Another difference between quantitative and qualitative analysis is the questions each seeks to answer. For instance, quantitative data analysis primarily answers what happened, when it happened, and where it happened. However, qualitative data analysis answers why and how an event occurred.
Quantitative data analysis also looks into identifying patterns, drivers, and metrics for different groups. However, qualitative analysis digs deeper into the sample dataset to understand underlying motivations and thinking processes.
Quantitative or data-driven analysis has advantages such as:
These are common disadvantages of data-driven analytics:
There are two statistical methods for reviewing quantitative data and user analytics. However, before exploring these in depth, let's refresh these key concepts:
Here are methods for analyzing quantitative data:
Descriptive statistics, as the name implies, describe your data and help you understand your sample in more depth. They don't make inferences about the entire population but focus only on the details of your specific sample.
Descriptive statistics usually include measures like the mean, median, percentage, frequency, skewness, and mode.
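None of these measures needs more than a standard library. As a quick illustration with invented session counts (not tied to any particular analytics tool), they can be computed in a few lines of plain Python:

```python
import statistics
from collections import Counter

# Hypothetical sample: weekly session counts for ten users (invented data)
sessions = [3, 5, 2, 8, 5, 1, 4, 5, 2, 6]

mean = statistics.mean(sessions)      # arithmetic average: 4.1
median = statistics.median(sessions)  # middle value: 4.5
mode = statistics.mode(sessions)      # most frequent value: 5
frequency = Counter(sessions)         # how often each value occurs

# Fisher-Pearson skewness: 0 = symmetric, > 0 = longer right tail
n = len(sessions)
m3 = sum((x - mean) ** 3 for x in sessions) / n
skewness = m3 / statistics.pstdev(sessions) ** 3
```

Here the positive skewness reflects the single large value (8) pulling the right tail out, which is exactly the kind of detail a mean alone would hide.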
Inferential statistics aim to make predictions and test hypotheses about the real-world population based on your sample data.
Here, you can use methods such as a T-test, ANOVA, regression analysis, and correlation analysis.
Let’s take a look at this example. Through descriptive statistics, you identify that users under the age of 25 are more likely to skip your onboarding. You’ll need to apply inferential statistics to determine if the result is statistically significant and applicable to your entire ’25 or younger’ population.
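The underlying test in an example like this is typically a two-sample t-test. The sketch below computes Welch's t statistic (which does not assume equal variances) in plain Python on invented engagement scores; in practice you would more likely call a library routine such as scipy.stats.ttest_ind(a, b, equal_var=False), which also returns the p-value:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / na + vb / nb)  # standard error of the difference
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Hypothetical onboarding-engagement scores for two age segments (invented)
under_25 = [2, 3, 1, 2, 4, 2]
over_25 = [5, 6, 4, 5, 7, 5]

t = welch_t(under_25, over_25)  # strongly negative: under-25s engage less
```

A t statistic this far from zero would then be compared against the t distribution (or handed to a library) to obtain the p-value that tells you whether the difference is statistically significant.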
The type of data that you collect and the research questions that you want to answer will impact which quantitative data analysis method you choose. Here’s how to choose the right method:
Before choosing the quantitative data analysis method, you need to identify which group your data belongs to:
Applying any statistical method to all data types can lead to meaningless results. Instead, identify which statistical analysis method supports your collected data types.
The specific research questions you want to answer, and your hypothesis (if you have one) impact the analysis method you choose. This is because they define the type of data you’ll collect and the relationships you’re investigating.
For instance, if you want to understand sample specifics, descriptive statistics, such as tracking NPS, will work. However, if you want to determine if other variables affect the NPS, you'll need to conduct an inferential analysis.
The overarching questions vary in both of the previous examples. For calculating the NPS, your internal research question might be, "Where do we stand in customer loyalty?" However, if you're doing inferential analysis, you may ask, "How do various factors, such as demographics, affect NPS?"
Here's how to conduct quantitative analysis and extract customer insights:
Before diving into data collection, you need to define clear goals for your analysis as these will guide the process. This is because your objectives determine what to look for and where to find data. These goals should also come with key performance indicators (KPIs) to determine how you’ll measure success.
For example, imagine your goal is to increase user engagement. So, relevant KPIs include product engagement score, feature usage rate, user retention rate, or other relevant product engagement metrics.
Once you've defined your goals, you need to gather the data you'll analyze. Quantitative data can come from multiple sources, including user surveys such as NPS, CSAT, and CES, website and application analytics, transaction records, and studies or whitepapers.
Remember: This data should help you reach your goals. So, if you want to increase user engagement, you may need to gather data from a mix of sources.
For instance, product analytics tools can provide insights into how users interact with your tool, click on buttons, or change text. Surveys, on the other hand, can capture user satisfaction levels. Collecting a broad range of data makes your analysis more robust and comprehensive.
Raw data is often messy and contains duplicates, outliers, or missing values that can skew your analysis. Before making any calculations, clean the data by removing these anomalies or outliers to ensure accurate results.
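As a minimal sketch of what this cleaning step looks like in code (the records and the 1-5 CSAT scale are invented for illustration):

```python
# Hypothetical raw survey records: (user_id, csat_score); None marks a missing answer
raw = [
    ("u1", 4), ("u2", 5), ("u2", 5),  # third row duplicates the second
    ("u3", None),                     # missing value
    ("u4", 3), ("u5", 97),            # 97 is an entry error on a 1-5 scale
    ("u6", 4),
]

# 1. Drop exact duplicates while preserving order
seen, deduped = set(), []
for row in raw:
    if row not in seen:
        seen.add(row)
        deduped.append(row)

# 2. Drop rows with missing values
complete = [(uid, score) for uid, score in deduped if score is not None]

# 3. Drop out-of-range outliers (valid answers here are 1-5)
clean = [(uid, score) for uid, score in complete if 1 <= score <= 5]
# clean == [("u1", 4), ("u2", 5), ("u4", 3), ("u6", 4)]
```

With continuous metrics you would usually replace step 3 with a statistical rule (for example, flagging values beyond 1.5 interquartile ranges), but the shape of the pipeline is the same.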
Once cleaned, turn it into visual data by using different types of charts, graphs, or heatmaps. Visualizations and data analytics charts make it easier to spot trends, patterns, and anomalies. If you're using Userpilot, you can choose your preferred visualizations and organize your dashboard to your liking.
When looking at your dashboards, identify recurring themes, unusual spikes, or consistent declines that might indicate data analytics trends or potential issues.
Picture this: You notice a consistent increase in feature usage whenever you run seasonal marketing campaigns. So, you segment the data based on different promotional strategies. There, you discover that users exposed to email marketing campaigns have a 30% higher engagement rate than those reached through social media ads.
In this example, the pattern suggests that email promotions are more effective in driving feature usage.
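A comparison like this reduces to computing an engagement rate per segment. A minimal sketch with an invented event log (field names and numbers are made up):

```python
from collections import defaultdict

# Hypothetical event log: (user_id, acquisition_channel, engaged_with_feature)
events = [
    ("u1", "email", True), ("u2", "email", True), ("u3", "email", False),
    ("u4", "social", True), ("u5", "social", False), ("u6", "social", False),
]

totals, engaged = defaultdict(int), defaultdict(int)
for _, channel, used in events:
    totals[channel] += 1
    engaged[channel] += used  # True counts as 1

# Engagement rate per channel: email ~0.67 vs. social ~0.33 in this toy data
rates = {ch: engaged[ch] / totals[ch] for ch in totals}
```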
If you’re a Userpilot user, you can conduct a trend analysis by tracking how your users perform certain events.
Once you've discovered meaningful insights, you have to communicate them to your organization's key stakeholders. Do this by turning your data into a shareable analysis report, one-pager, presentation, or email with clear and actionable next steps.
Your goal at this stage is for others to view and understand the data easily so they can use the insights to make data-led decisions.
Following the previous example, let’s say you’ve found that email campaigns significantly boost feature usage. Your email to other stakeholders should strongly recommend increasing the frequency of these campaigns and adding the supporting data points.
Take a look at how easy it is to share custom dashboards you built in Userpilot with others via email:
Data analysis is only valuable if it leads to actionable steps that improve your product or service. So, make sure to act upon insights by assigning tasks to the right persons.
For example, after analyzing user onboarding data, you may find that users who completed the onboarding checklist were 3x more likely to become paying customers (like Sked Social did!).
Now that you have actual data on the checklist’s impact on conversions, you can work on improving it, such as simplifying its steps, adding interactive features, and launching an A/B test to experiment with different versions.
As you’ve seen throughout this article, using a product analytics tool can simplify your data analysis and help you get insights faster. Here are different ways in which Userpilot can help:
Thanks to Userpilot’s new auto-capture feature, you can automatically track every time your users click, write a text, or fill out a form in your app—no engineers or manual tagging required!
Our customer analytics platform lets you use this data to build segments, trigger personalized in-app events and experiences, or launch surveys.
If you don’t want to auto-capture raw data, you can turn this functionality off in your settings, as seen below:
Userpilot comes with template analytics dashboards, such as new user activation dashboards or customer engagement dashboards. However, you can create custom dashboards and reports to keep track of metrics that are relevant to your business in real time.
For instance, you could build a customer retention analytics dashboard and include all metrics that you find relevant, such as customer stickiness, NPS, or last accessed date.
Userpilot lets you conduct A/B and multivariate tests, either by following a controlled or a head-to-head approach. You can track the results on a dashboard.
For example, let's say you want to test a variation of your onboarding flow to determine which leads to higher user activation.
You can go to Userpilot's Flows tab and click on Experiments. There, you'll be able to select the type of test you want to run, for instance, a controlled A/B test, build a new flow, test it, and get the results.
With Userpilot, you can track your customers’ journey as they complete actions and move through the funnel. Funnel analytics give you insights into your conversion rates and conversion times between two events, helping you identify areas for improvement.
Imagine you want to analyze your free-to-paid conversions and the differences between devices. Just by looking at the graphic, you can see at a glance where users drop off and which device converts better.
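The conversion rates a funnel chart displays are simple ratios between step counts. A sketch of the underlying arithmetic with invented numbers:

```python
# Hypothetical funnel: event name and number of users who reached that step
funnel = [
    ("signed_up", 1000),
    ("activated", 620),
    ("started_trial", 310),
    ("paid", 93),
]

# For each step after the first: conversion from the previous step,
# and conversion from the very top of the funnel
rates = [
    (step, count / prev, count / funnel[0][1])
    for (step, count), (_, prev) in zip(funnel[1:], funnel)
]
# e.g. ("paid", 0.30, 0.093): 30% of trial users pay, 9.3% of all signups pay
```

The step-to-step rate tells you where the biggest drop-off is; the top-of-funnel rate tells you the overall conversion a stakeholder usually asks about.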
Another Userpilot functionality that can help you analyze quantitative data is cohort analysis. This powerful tool lets you group users based on shared characteristics or experiences, allowing you to analyze their behavior over time and identify trends, patterns, and the long-term impact of changes on user behavior.
For example, let’s say you recently released a feature and want to measure its impact on user retention. Via a cohort analysis, you can group users who started using your product after the update and compare their retention rates to previous cohorts.
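Outside of any particular tool, this comparison boils down to splitting users by signup date and computing a retention rate per cohort. A sketch with invented users and an invented release date:

```python
from datetime import date

FEATURE_RELEASE = date(2024, 3, 1)  # hypothetical release date

# Hypothetical users: (signup_date, still_active_30_days_later)
users = [
    (date(2024, 2, 10), False), (date(2024, 2, 18), True), (date(2024, 2, 25), False),
    (date(2024, 3, 5), True), (date(2024, 3, 12), True), (date(2024, 3, 20), False),
]

def retention_rate(cohort):
    """Fraction of a cohort still active at the 30-day mark."""
    return sum(active for _, active in cohort) / len(cohort)

before = [u for u in users if u[0] < FEATURE_RELEASE]
after = [u for u in users if u[0] >= FEATURE_RELEASE]
# retention_rate(before) ~ 0.33 vs. retention_rate(after) ~ 0.67 in this toy data
```

With only three users per cohort this difference means nothing on its own; in a real analysis you would follow it with a significance test.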
You can do this in Userpilot by creating segments and then tracking user segments' retention rates over time.
In Userpilot, you can use retention tables to stay on top of feature adoption. This means you can track how many users continue to use a feature over time and which features are most valuable to your users. The video below shows how to choose the features or events you want to analyze in Userpilot.
As you’ve seen, to conduct quantitative analysis, you first need to identify your business and research goals. Then, collect, clean, and visualize the data to spot trends and patterns. Lastly, analyze the data, share it with stakeholders, and act upon insights to build better products and drive customer satisfaction.
To stay on top of your KPIs, you need a product analytics tool. With Userpilot, you can automate data capture, analyze product analytics, and view results in shareable dashboards. Want to try it for yourself? Get a demo .
Published on 4 April 2022 by Pritha Bhandari . Revised on 10 October 2022.
Quantitative research is the process of collecting and analysing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalise results to wider populations.
Quantitative research is the opposite of qualitative research , which involves collecting and analysing non-numerical data (e.g. text, video, or audio).
Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc.
You can use quantitative research methods for descriptive, correlational or experimental research.
Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalised to broader populations based on the sampling method used.
To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).
Research method | How to use | Example |
---|---|---|
Experiment | Control or manipulate an independent variable to measure its effect on a dependent variable. | To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention. |
Survey | Ask questions of a group of people in person, over the phone, or online. | You distribute questionnaires with rating scales to first-year international college students to investigate their experiences of culture shock. |
(Systematic) observation | Identify a behavior or occurrence of interest and monitor it in its natural setting. | To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds. |
Secondary research | Collect data that has been gathered for other purposes, e.g., national surveys or historical records. | To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from widely available secondary data sources. |
Once data is collected, you may need to process it before it can be analysed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .
Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualise your data and check for any trends or outliers.
Using inferential statistics, you can make predictions or generalisations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter.
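For example, estimating a population mean from a sample usually comes with a confidence interval around the sample mean. A sketch using the normal approximation on invented timing data:

```python
import math
import statistics

# Hypothetical sample: task-completion times in seconds (invented data)
sample = [12.1, 9.8, 11.4, 10.9, 13.2, 10.1, 12.7, 11.0, 9.5, 11.9]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# 95% interval via the normal approximation (z = 1.96); for a sample this
# small, a t critical value (~2.26 for 9 degrees of freedom) would be
# slightly wider and more accurate
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
```

Read the result as: under the model's assumptions, intervals built this way capture the true population mean about 95% of the time.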
You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.
Quantitative research is often used to standardise data collection and generalise findings. Strengths of this approach include:
Repeating the study is possible because of standardised data collection protocols and tangible definitions of abstract concepts.
The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.
Data from large samples can be processed and analysed using reliable and consistent procedures through quantitative data analysis.
Using formalised and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.
Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:
Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.
Predetermined variables and measurement procedures can mean that you ignore other relevant observations.
Despite standardised procedures, structural biases can still affect quantitative research. Missing data, imprecise measurements or inappropriate sampling methods are biases that can lead to the wrong conclusions.
Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.
In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.
Operationalisation means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data, it's important to consider how you will operationalise the variables that you want to measure.
Reliability and validity are both about how well a method measures something:
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
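One transparent way to compute "how likely ... by chance" is a permutation test: shuffle the group labels many times and count how often the shuffled difference is at least as large as the observed one. A sketch with invented scores:

```python
import random
import statistics

random.seed(42)  # reproducible shuffles

treatment = [14, 15, 13, 16, 15, 14]  # hypothetical outcome scores
control = [12, 11, 13, 12, 11, 12]

observed = statistics.mean(treatment) - statistics.mean(control)

# Under the null hypothesis, group labels are arbitrary, so shuffling
# them simulates the chance distribution of the difference in means
pooled = treatment + control
extreme, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials  # fraction of shufflings at least as extreme
```

A small p-value here means a difference this large almost never arises from label shuffling alone, which is grounds to reject the null hypothesis.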
Bhandari, P. (2022, October 10). What Is Quantitative Research? | Definition & Methods. Scribbr. Retrieved 26 August 2024, from https://www.scribbr.co.uk/research-methods/introduction-to-quantitative-research/
Published on 26.8.2024 in Vol 26 (2024)
Authors of this article:
1 Institute of Molecular Immunology, Klinikum Rechts der Isar, TUM School of Medicine and Health, Technical University of Munich, Munich, Germany
2 Institute of History and Ethics in Medicine, TUM School of Medicine and Health, Technical University of Munich, Munich, Germany
3 Department of Science, Technology and Society (STS), School of Social Sciences and Technology, Technical University of Munich, Munich, Germany
4 Institute of Philosophy, Multidisciplinary Center for Infectious Diseases, University of Bern, Bern, Switzerland
5 Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
*these authors contributed equally
Bettina M Zimmermann, PhD
Institute of History and Ethics in Medicine
TUM School of Medicine and Health
Technical University of Munich
Ismaninger Str. 22
Munich, 81675
Phone: 49 89 4140 4041
Email: [email protected]
Background: Social media platforms are increasingly used to recruit patients for clinical studies. Yet, patients’ attitudes regarding social media recruitment are underexplored.
Objective: This mixed methods study aims to assess predictors of the acceptance of social media recruitment among patients with hepatitis B, a patient population that is considered particularly vulnerable in this context.
Methods: Using a mixed methods approach, the hypotheses for our survey were developed based on a qualitative interview study with 6 patients with hepatitis B and 30 multidisciplinary experts. Thematic analysis was applied to the qualitative interviews. For the cross-sectional survey, we additionally recruited 195 patients with hepatitis B from 3 clinical centers in Germany. Adult patients with a hepatitis B diagnosis who were capable of judgment, understood German, and visited 1 of the 3 study centers during the data collection period were eligible to participate. Data analysis was conducted using SPSS (version 28; IBM Corp), including descriptive statistics and regression analysis.
Results: On the basis of the qualitative interview analysis, we hypothesized that 6 factors were associated with acceptance of social media recruitment: using social media in the context of hepatitis B (hypothesis 1), digital literacy (hypothesis 2), interest in clinical studies (hypothesis 3), trust in nonmedical (hypothesis 4a) and medical (hypothesis 4b) information sources, perceiving the hepatitis B diagnosis as a secret (hypothesis 5a), attitudes toward data privacy in the social media context (hypothesis 5b), and perceived stigma (hypothesis 6). Regression analysis revealed that the higher the social media use for hepatitis B (hypothesis 1), the higher the interest in clinical studies (hypothesis 3), the more trust in nonmedical information sources (hypothesis 4a), and the less secrecy around a hepatitis B diagnosis (hypothesis 5a), the higher the acceptance of social media as a recruitment tool for clinical hepatitis B studies.
Conclusions: This mixed methods study provides the first quantitative insights into social media acceptance for clinical study recruitment among patients with hepatitis B. The study was limited to patients with hepatitis B in Germany but sets out to be a reference point for future studies assessing the attitudes toward and acceptance of social media recruitment for clinical studies. Such empirical inquiries can facilitate the work of researchers designing clinical studies as well as ethics review boards in balancing the risks and benefits of social media recruitment in a context-specific manner.
Benefits and risks of using social media recruitment for clinical studies.
Recruiting clinical study participants through social media has the potential to increase recruitment accrual in a cost-effective way [ 1 ]. Consequently, social media recruitment has been increasingly applied for clinical studies, often in parallel with other recruitment strategies. However, social media recruitment still bears a host of challenges. First, maintaining a social media presence and community management can be resource intensive. Second, when used as a stand-alone recruiting method, it might yield a cohort of limited demographic representativeness. Finally, social media recruitment comes with ethical issues, particularly when used to recruit for clinical studies [ 2 ]. Because social media recruitment involves reaching potential research participants outside a clinical setting and in a public online space without direct personal contact, risks related to social stigma, privacy infringement, loss of trust, and psychological harm have been discussed [ 3 ]. To mitigate some of these risks, prioritizing investigator transparency and obtaining explicit consent when recruiting from others’ social networks has been suggested [ 4 ]. Yet, because social media platforms are largely unregulated and mostly owned by large global technology companies, activities conducted on them, including study recruitment, can never be fully controlled by researchers or institutions. Remaining privacy risks include hidden data collection and profiling, which are particularly problematic for patients with vulnerable characteristics [ 5 ].
Early studies assessing social media recruitment for clinical studies focused on the effectiveness of the method. For example, Frandsen et al [ 3 ] used social media recruitment for a smoking cessation trial and compared their cohort recruited from a Facebook-based approach to cohorts resulting from other recruitment methods. They found no differences between the cohorts regarding socioeconomic or smoking characteristics, except that participants recruited via Facebook were significantly younger. Wisk et al [ 4 ] recruited college students with type 1 diabetes, a hard-to-reach population, using a variety of outreach channels, including social media. They found that Facebook was the most successful recruitment method. Guthrie et al [ 5 ] found that Facebook advertising was significantly cheaper than recruiting via mail. While these studies allow insights into the utility of social media recruitment from the perspective of researchers, studies assessing patients’ perspectives and attitudes toward social media for clinical study recruitment are lacking. This study aims to deliver first evidence on patient attitudes toward social media recruitment, focusing on patients with hepatitis B.
Patients with hepatitis B are a particularly interesting cohort to study acceptance for social media recruitment as the particularities of the disease exhibit potentially confounding factors for their attitudes toward social media recruitment. First, there is robust empirical evidence that patients with hepatitis B can be subject to social stigma [ 6 - 10 ]. Therefore, the risk of public exposure to hepatitis B diagnosis on social media renders them—and patients with other stigmatized traits and conditions—particularly vulnerable in the context of social media recruitment [ 11 ]. Second, hepatitis B in Europe is particularly prevalent in certain immigrant populations, which are at risk of being neglected for clinical studies due to language barriers and lack of health care access. Social media recruitment can help include patient populations who otherwise would be disregarded for clinical studies or are hard to reach [ 12 - 14 ].
However, the effectiveness of social media recruitment crucially hinges on technology acceptance. To date, the attitudes of patients regarding social media recruitment are underexplored. Addressing this gap, this mixed methods study assesses factors predicting the acceptance of social media recruitment among patients with hepatitis B. On the basis of qualitative individual interviews with 6 patients with hepatitis B and 30 multidisciplinary experts and a literature review, we hypothesized that general social media use (hypothesis 1), social media literacy (hypothesis 2), interest in clinical studies (hypothesis 3), trust (hypothesis 4), privacy needs (hypothesis 5), and perceived stigma (hypothesis 6) are associated with acceptance of social media recruitment.
This study is part of the European Union–funded international research consortium “TherVacB—A Therapeutic Vaccine to Cure Hepatitis B,” work package 6 (ethical, legal, and social aspects of social media recruitment). Using a mixed methods design, we first conducted an explorative qualitative multistakeholder interview study assessing the ethical, legal, social, and practical implications of social media recruitment for clinical studies [ 2 ]. The hypotheses investigated in this paper are based on these interviews and a conceptual literature review mapping the ethical implications of social media recruitment [ 11 ]. The reporting of this study followed the Strengthening the Reporting of Observational Studies in Epidemiology guidelines [ 15 ].
On the basis of preliminary statistical power analysis and pragmatic considerations of available study participants, we aimed for 200 responses in a recruitment period of 7 months. Due to administrative constraints, including the COVID-19 pandemic, the overall recruitment period was prolonged by 5 months (total recruitment period 12 months, June 4, 2022, to May 31, 2023), and the recruitment period varied among the recruiting clinics ( Multimedia Appendix 1 ).
Adult, German-speaking patients diagnosed with acute or chronic viral hepatitis B were recruited from 3 large university hospitals in Germany. We chose this venue-based recruitment methodology because it is considered one of the best options for recruiting representative samples from hard-to-reach populations [ 16 ]. The clinical staff was instructed to distribute the study information leaflet to every eligible patient in the study period, explaining the implications of the study and inviting them to fill out the questionnaire. To limit recruitment bias and enhance sample representativeness, study nurses were briefed to avoid selectively restricting recruitment and, if possible, to give a questionnaire to every incoming patient with hepatitis B who understood German sufficiently well. However, because of the administrative burden on the clinical staff, only 30.4% (285/939) of the estimated eligible incoming patients received the questionnaire ( Multimedia Appendix 1 ). Because this low distribution number results from administrative burden in the clinic, we do not expect it to have a relevant impact on representativeness (refer to the Limitations subsection under the Discussion section). Completed questionnaires (207/285, 72.6% of the distributed questionnaires; Multimedia Appendix 1 ) were collected in the recruiting hospital and sent to the authors via mail.
The dependent variable (acceptance of social media recruitment) was constructed based on the Technology Acceptance Model [ 17 , 18 ], covering the dimensions of perceived usefulness, perceived ease of use, intentions, and problem awareness, and showed good internal consistency (Cronbach α=0.863). Possible predictors of social media recruitment acceptance were identified based on the abovementioned hypotheses and operationalized, where possible, with existing validated questionnaires. For 3 (33%) of the 9 independent variables, we used existing validated questionnaires of excellent reliability: the social media literacy scale (14 items, Cronbach α=0.947) [ 19 ], the Berger HIV Stigma Scale for use among patients with hepatitis C virus (6 items, Cronbach α=0.931) [ 20 ], and the Privacy Attitude Questionnaire [ 21 ]. For the latter, we included a shortened version that covered the dimensions developed in the Privacy Attitude Questionnaire but targeted it toward the hepatitis B context. From these dimensions, 2 subscales were created: secrecy of hepatitis B diagnosis (2 items, Cronbach α=0.623) and data privacy needs regarding hepatitis B diagnosis (2 items, Cronbach α=0.587).
For the remaining variables, no validated tools existed. Hence, we developed new scales for each variable of interest. As indicated by internal consistency, these were of moderate, good, or excellent reliability: general social media use (8 items, Cronbach α=0.676), hepatitis B–related social media use (6 items, Cronbach α=0.906), interest in clinical studies (2 items, Cronbach α=0.895), and trust in information sources regarding hepatitis B (11 items, Cronbach α=0.905; 2 subscales were created: trust in medical information sources—4 items, Cronbach α=0.784 and trust in nonmedical information sources, ie, traditional media, social media, other patients, poster advertisements, etc—7 items, Cronbach α=0.881). In addition to these adapted and self-developed scales, we included 4 demographic variables in the regression model (age, gender, education, and mother tongue as an indicator of migration background). A preliminary version of the questionnaire was discussed with 3 experts from the fields of infectiology and bioethics and then adapted and shortened based on their comments. We then performed cognitive pretesting [ 22 ] with 6 patients with hepatitis B, leading to minor changes. The full questionnaire is provided in Multimedia Appendix 2 .
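Internal consistency figures like the Cronbach α values above can be reproduced from raw item scores. A minimal sketch in Python/NumPy (standing in for SPSS, with simulated Likert data; the scale and simulation are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 5-point Likert responses (0-4) for a hypothetical 4-item scale:
# a shared latent attitude plus small item-level noise, so items correlate.
rng = np.random.default_rng(0)
latent = rng.integers(0, 5, size=(200, 1))
noise = rng.integers(-1, 2, size=(200, 4))
scores = np.clip(latent + noise, 0, 4)

alpha = cronbach_alpha(scores)  # correlated items yield a high alpha
```

By convention, values above roughly 0.7 are read as acceptable reliability, which is consistent with the authors labeling their lower-α subscales as only moderately reliable.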
Using SPSS (version 28.0; IBM Corp), we (1) performed descriptive analyses, (2) determined independent factors associated with participants’ acceptance of social media as a recruitment tool for clinical hepatitis B studies through multiple linear regression analysis, and (3) performed additional exploratory bivariate analyses of hepatitis B–related stigma (ie, correlation, independent 2-tailed t test). The statistical significance level was set at P <.05. For multiple linear regression analysis, assumption checks were performed before the interpretation of the model ( Multimedia Appendix 3 ). For the scale measuring the frequency of social media use, missing values were replaced by “0” (ie, “never”), assuming that participants did not tick a box, as they did not know the respective social media platform. Overall, 71.3% (139/195) of the participants completed all items, resulting in 3.66% (478/13,065) missing values and 81% (54/67) incomplete variables.
For the linear regression analysis, theoretical considerations and hypotheses derived from our previous qualitative study determined predictor selection. In addition, the ratio of sample size to predictors constrains variable selection for regression modeling a priori. According to Harrell [ 23 ], a fitted regression model is likely to be reliable when p<m/10 or p<m/20 (average requirement: p<m/15), where p is the number of predictors and m is the sample size. Applying this requirement to our sample size (N=195) and accounting for missing data, we preliminarily limited the number of included predictors to 11. The following 11 predictors were included in the regression model: general social media use, social media literacy, hepatitis B–related social media use, interest in clinical studies, trust in medical information sources regarding hepatitis B (dichotomized to meet the assumption of linearity), trust in nonmedical information sources regarding hepatitis B, secrecy of hepatitis B (dichotomized to meet the assumption of linearity), data privacy needs regarding hepatitis B (dichotomized to meet the assumption of linearity), perceived stigma, age, and education. Assumption checks for the regression analyses are presented in Multimedia Appendix 3 .
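The predictor cap implied by Harrell's rule of thumb can be checked in a few lines (a sketch; the function name is ours, not from the study):

```python
import math

def max_predictors(m: int, divisor: int = 15) -> int:
    """Largest integer p satisfying p < m / divisor (Harrell's heuristic)."""
    return math.ceil(m / divisor) - 1

m = 195  # survey sample size
cap_avg = max_predictors(m)         # average requirement p < m/15
cap_loose = max_predictors(m, 10)   # looser bound p < m/10
cap_strict = max_predictors(m, 20)  # stricter bound p < m/20
```

With N=195 the average requirement allows at most 12 predictors, so the 11 predictors retained here (after accounting for missing data) sit just under that cap.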
For study consent, participants were asked to confirm having read and understood the study information and to consent to the study participation by checking a consent box at the beginning of the questionnaire. Only questionnaires with this box checked were included in the analysis (12/207, 5.8% of the questionnaires were excluded for that reason; Multimedia Appendix 1 ). The ethics committees from the Technical University of Munich (12/22-S-NP), Hannover Medical School (10368_BO_K_2022), and University Clinic Leipzig (189/22-lk) approved the study.
After conducting an in-depth literature review on the ethical and social challenges surrounding social media recruitment for clinical studies [ 11 ], we developed 2 semistructured interview guides, one targeted at patients with hepatitis B and the other targeted at multidisciplinary experts. On the basis of interviews with 6 patients that were triangulated with findings from 30 interviews with experts, we qualitatively assessed what factors could be associated with the acceptance of social media recruitment for clinical hepatitis B studies. On the basis of these findings, we derived hypotheses to be tested quantitatively in a survey among patients with hepatitis B in Germany ( Textbox 1 ).
Most of the patients we talked with rejected the idea of being recruited for a clinical hepatitis B study via social media. However, patients who were more actively involved in their own recruitment tended to have more accepting attitudes. For example, patients who described using social media as a tool for informing themselves about potential clinical studies related to their disease were less opposed to being recruited via the same channel. One patient included search engines in their definition of social media and mentioned the following:
You can also advertise on Google. That is quasi/I think it’s better if I [as a patient] search for a study. For example, I search for a study related to psoriasis and enter that term in Google—when the advertisement for a psoriasis study is then made so that it shows up as the first suggestion...I think that’s better because in these instances I’m already searching, so I take the first step, I search for the study. And then the study, or the advertisement must be done in such a way that I can find it. So, I take the first step and then I land on the study. [Patient 3]
Similarly, patients who joined shared interest groups, such as patient groups on Facebook, which gather people who deliberately want to share their own experiences with the disease and learn from others’ experiences, were more open toward the idea of being approached and recruited within such groups.
These insights indicate that patients who were already active on social media and found it useful for their personal disease management were more open to being recruited via social media. This led us to the following hypothesis: (H1) The more patients use social media (for hepatitis B), the higher their acceptance of using social media as a recruitment tool for clinical (hepatitis B) studies.
The patients we interviewed varied widely in their social media literacy. While some patients had had very limited contact with social media, others were very active on it. One patient even described social media content management as part of their daily job. Another had conducted a research web-based questionnaire for which they were recruiting on the web. Analyzing the interviewees’ accounts of their experience with social media, and partially their use habits, we found an inconsistent connection to social media recruitment acceptance: those considered to have higher digital literacy skills were, in some instances, more likely to accept social media as a recruitment tool for clinical hepatitis B studies because they perceived other forms of recruitment as outdated:
I think we are living in a time that you have to use social media because if you don’t use it...sending a letter or put[ting it] in the newspaper, will not help you. [Patient 6]
On the other end of the spectrum, however, patients with very low digital literacy skills and relatedly very little reported use of social media, or digital media in general, in some instances had difficulties delimiting the concept of social media as such. Presumably, their less nuanced understanding of social media as a concept makes them less strictly opposed to being recruited for a clinical study via social media. One patient, for example, favored personal contact for study recruitment at first but then revised their statement and reported that being helped was even more important than personal contact:
Yes definitely. If it was something important it would be best if we met at a clinic, or I don’t know where this study is being done.... But even via Facebook or Messenger.... Yeah, actually never mind, I don’t care actually. [Patient 2]
While the interviews suggested a connection between the acceptance of social media recruitment for clinical hepatitis B studies and digital literacy, it remained unclear whether acceptance was higher with high or low digital literacy. Consequently, we formulated the nondirectional hypothesis that (H2) digital literacy is associated with social media acceptance (SMA).
Some participating patients expressed particularly high interest in participating in clinical studies about hepatitis B. One patient explained to us that they were “very, very happy to support studies” (patient 5), and another patient told us the following: “I actually want to help. So, that’s why I get in” (patient 6). Patients like this, who reported an increased willingness to participate in clinical studies in general, seemed more susceptible to social media as a recruitment tool, too.
Another patient perceived it as beneficial that online recruitment made them less dependent on their physician to refer them to the study:
I don’t know if my physician is even internet-savvy, he’s a bit older. And well, then I thought, I have to see for myself because I’m not sure how competent he is with such things. What I mean is, it would be nicer if I...could google for [a clinical trial], land on a platform, search for [relevant studies], see all the information and can get in touch right away and say: “Hey, I am interested in your study. I would like to participate.” Because in my case, the...specialists didn’t even know that this [study] existed.... That’s stupid and got me pretty upset.” [Patient 3]
None of the patients interviewed reported that they were generally against participation in clinical studies. This is likely a recruitment bias of this qualitative interview study, which made it difficult to examine whether patients who are less accepting of clinical studies are also less accepting of social media recruitment. Yet, based on the apparent influence of this aspect in 2 (33%) of 6 patient interviews, we formulated the following hypothesis: (H3) The more patients are interested in clinical studies, the more they accept social media as a recruitment tool for clinical hepatitis B studies.
The role of trust in health care professionals, social media platforms, and other recruitment channels was a very salient aspect of all interviews. Illustrating this, one participating patient with hepatitis B stated the following as a reason for being against social media recruitment:
I just feel such a distrust of social media. Any information I share there, I’m not completely comfortable with/It’s just not a safe way for me to share information. [Patient 4]
Other patients were more open to social media recruitment if they knew the source of the advertisement and assigned relevant expertise to them:
It would be okay for me [if someone would contact me on social media to ask whether I would like to meet for a clinical study, as long as] the person is qualified in that direction and is well versed in this expertise. [Patient 2]
[R]ecruiting is normally working if the person that suggests it is a person that you trust or you know. So because she was a person I knew from [redacted], then I clicked the link and I got in. Normally we know, of course, that social media is also a trap for many, I don’t know, viruses and this kind of thing. So you don’t open everything if you don’t trust the link.... If I would see it on, I don’t know, social media and as we know, because you have these cookies that you accept, then immediately, they know that you have something or you are looking for some article. Then this kind of things will pop up. Again, it’s all about trusting links. I’m not sure how much I will get in something that is suggesting from just because I click on a link. [Patient 6]
More implicitly, another patient emphasized that the clinical setting was the place for them to discuss things in the context of hepatitis B, not social media:
This channel through the [clinic in Germany]... I have a very good opinion of the hospital and I have always been well taken care of there. That is the only channel through which I would talk about my condition and about my/yes. [Patient 1]
We analyze the aspect of trust in a separate publication (Willem, T, et al, unpublished data, January 2024) in detail and hypothesize the following: (H4) The more patients trust information sources, the higher their acceptance of social media recruitment. The hypothesis was operationalized for trust in medical information sources (H4a) and trust in nonmedical information sources (H4b).
A particular concern of most patients we spoke with was their privacy. Privacy is a multifaceted and complex concept, and we found that participants referred to different dimensions of privacy: (1) data privacy, defined as the general attitude toward protective measures that empower patients or users to make their own decisions about who can process their data for which purpose; and (2) privacy related to the perceived secrecy of the hepatitis B diagnosis.
First, regarding data privacy, several patients perceived recruitment via social media as dubious and suspected some form of data leakage or malicious data collection goals behind the outreach. This view applied irrespective of how they would be approached on social media (eg, advertisement banners in their social media timelines or personal contact requests via social media messengers by health care professionals). For example, a patient who reported being in the process of decreasing their social media use to protect their privacy also said that if someone contacted them on social media regarding clinical study participation, they would “find that very strange, because [I] would ask [my]self, where did they get this information?” and reported that they would feel that this “would rob quite a lot of privacy” (patient 5). Another patient, who reported using WhatsApp as their only social media, explained that by saying that they “consider social media to be useful in some instances;” however, they continued, “It’s too risky for me with my private data and so much advertising. This, for me, trumps all advantages of social media recruitment” (patient 4).
Regarding the second privacy dimension, secrecy, several patients commented on their hepatitis B diagnosis being a very private, intimate matter:
This condition is in my most private, intimate sphere…. And you might be right, I never thought about it in this way, but [my avoiding engaging on social media regarding hepatitis B] may be related to the fact that content I pass on via WhatsApp can be passed on thousands of times with one click. [Patient 1]
One patient replied to a question regarding their attitude toward being contacted by a study center via social media that they “would find that difficult”. As a reason, this patient explained the following:
[T]hat’s just the problem: it ends up on social media. See, if someone writes: “Hey, I would like to ask you about your hepatitis B, whether you would participate in a study?” Then this information is out there on social media.... That’s why I had a very, very good feeling when my doctor approached me about [this interview study] and that it just went through the clinic. If she had said, “Look, someone is approaching you via social media,” or something, then I would have said no, right? Because I wouldn’t have wanted to, because these data/social media make money because they have data. They run the ads based on your data and what you type in there or what you say or whatever. And I don’t want that associated with my disease. [Patient 5]
These findings led us to the following hypothesis: (H5) The more patients value privacy, the lower their acceptance of using social media as a recruitment tool for clinical hepatitis B studies. The hypothesis was operationalized for secrecy (H5a) and data privacy (H5b).
Several interviewed patients with hepatitis B reported fear of being stigmatized if their social environment found out about their diagnosis as an important reason against social media recruitment. One patient, who mentioned that only their closest family members knew about their diagnosis, expressed fear that other people learning the diagnosis would lead to social exclusion:
A broken leg or surgery on the knee or hip. This is apparent to everyone. And everyone assumes that it will heal at some point and that there is no potential infectious danger from these people. Whereas in the case of infectious diseases, no one can assess that, and people get socially excluded very quickly.... And this is why I am so cautious with my data. [Patient 1]
A similar view was shared by patient 5. Another patient added that perception of stigma differed depending on the context:
I come from [Eastern European country], I have moved to Germany. So here the mentality is a little bit different. If you say to someone, I have Hepatitis, he is okay with it. He says: “Oh, is not a problem. Normally here we are vaccinated against it.” If you are going to [Eastern European country] and say: “I have Hepatitis B,” it’s like you have a huge disease that can just be taken by a handshake [laughs]. And so I think that’s why I’m going on the conservative site. [Patient 6]
The connection between the stigma connected to hepatitis B and the social media–connected perceived privacy risks established by several interview participants led us to the following hypothesis: (H6) The higher the perceived stigma of patients, the lower their acceptance of social media as a recruitment tool for clinical hepatitis B studies.
Participant characteristics.
A total of 195 eligible questionnaires were included in the statistical analysis of the survey study. Table 1 displays the characteristics of the patients with hepatitis B who participated in the study: more than half of the participants (108/195, 55.4%) were aged between 30 and 49 years. Just above half (110/195, 56.4%) reported educational degrees lower than the Abitur (German equivalent of a high school diploma). More than half of the participants (111/195, 56.9%) reported a mother tongue other than (only) German. All participants had a chronic hepatitis B infection, as per the inclusion criterion of this study.
| Characteristics | Participants, n (%) |
| --- | --- |
| Gender | |
| Male | 101 (51.8) |
| Female | 88 (45.1) |
| No answer | 6 (3.1) |
| Age (years) | |
| 18-29 | 16 (8.2) |
| 30-39 | 50 (25.6) |
| 40-49 | 58 (29.7) |
| 50-59 | 38 (19.5) |
| >60 | 24 (12.3) |
| No answer | 9 (4.6) |
| Abitur | |
| Yes | 71 (36.4) |
| No | 110 (56.4) |
| No answer | 14 (7.2) |
| Mother tongue | |
| German | 101 (51.8) |
| Other | 111 (56.9) |
| No answer | 12 (6.2) |

Mother-tongue percentages sum to more than 100% because multiple answers were possible.
The questionnaire included 7 scales that were measured through several items ( Table 2 and Multimedia Appendices 1 and 4 ).
The level of acceptance of social media recruitment was measured through the SMA scale, which was calculated based on 4 questionnaire items (P6.01 to P6.04; Multimedia Appendix 4 ). Each item was measured on a 5-point Likert scale, ranging from 0 (completely disagree) to 4 (completely agree). Items P6.01 (“Social media are well suited to make patients aware of studies on new hepatitis B treatments”) and P6.02 (“Social media increase the likelihood of success in hepatitis B clinical trials”) formed the subscale on the perceived usefulness of social media recruitment and received moderate agreement (P6.01: mean 1.99, SD 1.23; P6.02: mean 1.81, SD 1.12). Items P6.03 and P6.04 formed the SMA subscale on the intention to use social media recruitment. Item P6.03 (“I would be recruited via social media for a hepatitis B clinical trial”) received particularly low acceptance (mean 1.13, SD 1.13; Multimedia Appendix 4 ). Item P6.04 (“I would use social media to learn about hepatitis B clinical trials”) received a higher mean acceptance score than P6.03 (mean 1.58, SD 1.23; Multimedia Appendix 4 ).
The overall SMA score was calculated by summing the scores from items P6.01 to P6.04 and ranged from 0 (no acceptance) to 16 (full acceptance; mean 6.48, SD 3.03; Table 2 ). While 28.7% (56/195) of the respondents rejected social media recruitment with an SMA score of <5, only 10.2% (20/195) of the respondents accepted social media recruitment with an SMA score of >11 ( Table 3 ).
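The scoring just described is a simple sum with cut-offs. A sketch in Python (the cut-offs <5 and >11 follow the text; the function names are ours):

```python
def sma_score(items):
    """Sum four 0-4 Likert items (P6.01-P6.04) into a 0-16 SMA score."""
    assert len(items) == 4 and all(0 <= x <= 4 for x in items)
    return sum(items)

def classify(score):
    """Bucket an SMA score using the cut-offs reported in the text."""
    if score < 5:
        return "rejects social media recruitment"
    if score > 11:
        return "accepts social media recruitment"
    return "intermediate"

total = sma_score([2, 1, 1, 2])  # -> 6, an intermediate score
```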
Scale | Valid, n (%) | Items, n (%) | Median (range) | Mean (SD) |
General social media use | 195 (100) | 8 (15) | 11 (0-32) | 11.22 (6.51) |
Social media literacy (hypothesis 2) | 174 (89.2) | 14 (25) | 41 (0-56) | 37.58 (14.60) |
Hepatitis B–related social media use (hypothesis 1) | 181 (92.8) | 6 (11) | 3 (0-24) | 5.22 (5.61) |
Interest in clinical studies (hypothesis 3) | 187 (95.9) | 2 (4) | 6 (0-8) | 5.53 (2.45) |
Trust in medical information sources | 180 (92.3) | 4 (7) | 11 (0-16) | 10.27 (3.64) |
Trust in nonmedical information sources (hypothesis 4) | 175 (89.7) | 7 (13) | 8.5 (0-28) | 8.36 (5.76) |
Acceptance of social media recruitment (dependent variable) | 178 (91.3) | 4 (7) | 6 (0-16) | 6.48 (3.93) |
Secrecy (hypothesis 5a) | 185 (94.9) | 2 (4) | 2 (0-8) | 2.25 (2.09) |
Data privacy (hypothesis 5b) | 186 (95.4) | 2 (4) | 7 (0-8) | 6.25 (2.10) |
Perceived stigma (hypothesis 6) | 180 (92.3) | 6 (11) | 3.5 (0-24) | 5.52 (6.02) |
a Items were measured through a 5-point Likert scale, ranging from 0 (completely disagree) to 4 (completely agree).
Social media acceptance score | Responses, n (%) |
0 | 20 (10.3) |
1 | 4 (2.1) |
2 | 6 (3.1) |
3 | 8 (4.1) |
4 | 18 (9.2) |
5 | 14 (7.2) |
6 | 20 (10.3) |
7 | 20 (10.3) |
8 | 17 (8.7) |
9 | 12 (6.2) |
10 | 8 (4.1) |
11 | 11 (5.6) |
12 | 7 (3.6) |
13 | 7 (3.6) |
14 | 2 (1.0) |
15 | 1 (0.5) |
16 | 3 (1.5) |
Missing | 17 (8.7) |
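As a consistency check, the score distribution in Table 3 reproduces the summary statistics reported for the acceptance scale in Table 2. The sketch below (not part of the original analysis) recomputes the mean and sample SD from the frequency counts:

```python
from math import sqrt

# Frequency table of SMA scores (0-16) from Table 3,
# excluding the 17 missing responses (195 - 17 = 178 valid)
counts = {0: 20, 1: 4, 2: 6, 3: 8, 4: 18, 5: 14, 6: 20, 7: 20, 8: 17,
          9: 12, 10: 8, 11: 11, 12: 7, 13: 7, 14: 2, 15: 1, 16: 3}

n = sum(counts.values())                              # 178 valid responses
mean = sum(score * k for score, k in counts.items()) / n
ss = sum(k * (score - mean) ** 2 for score, k in counts.items())
sd = sqrt(ss / (n - 1))                               # sample standard deviation

low = sum(k for score, k in counts.items() if score < 5)    # rejection: score < 5
high = sum(k for score, k in counts.items() if score > 11)  # high acceptance: score > 11

print(f"mean={mean:.2f}, SD={sd:.2f}")                # matches Table 2: 6.48 (3.93)
print(f"<5: {low}/195 = {100 * low / 195:.1f}%")      # 56/195 = 28.7%
print(f">11: {high}/195 = {100 * high / 195:.1f}%")   # 20/195 = 10.3%
```

Note that the percentages for the cut-off groups are computed over all 195 respondents, as in the text, while the mean and SD use only the 178 valid scale scores.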
Using multiple linear regression analysis, we evaluated the predictors of participants’ acceptance of social media as a recruitment tool for clinical hepatitis B studies. Testing the statistical significance of the overall model fit, the F test indicated that the predictors included in the model substantially contributed to explaining the dependent variable ( Table 4 ). The regression analysis revealed that hepatitis B–related social media use, interest in clinical studies, trust in nonmedical information sources, and hepatitis B secrecy independently predicted acceptance of social media as a recruitment tool for clinical hepatitis B studies. More precisely, the higher the hepatitis B–related social media use, the higher the interest in clinical studies, the greater the trust in nonmedical information sources, and the less secrecy around the hepatitis B diagnosis, the higher the acceptance of social media as a recruitment tool for clinical hepatitis B studies ( Table 4 ).
Predictor | Unstandardized coefficients, B (SE) | β | t test (df) | P value | Tolerance | VIF |
Constant | 4.007 (1.935) | — | 2.071 (127) | .04 | — | — |
General social media use | 0.060 (0.051) | .098 | 1.175 (127) | .24 | .628 | 1.593 |
Social media literacy | –0.002 (0.025) | –.008 | –0.096 (127) | .92 | .600 | 1.668 |
Hepatitis B–related social media use | 0.279 (0.053) | .391 | 5.299 (127) | <.001 | .804 | 1.234 |
Interest in clinical studies | 0.283 (0.127) | .171 | 2.217 (127) | .03 | .732 | 1.366 |
Trust in medical information sources | –0.601 (0.683) | –.079 | –0.879 (127) | .38 | .546 | 1.830 |
Trust in nonmedical information sources | 0.252 (0.058) | .359 | 4.307 (127) | <.001 | .632 | 1.583 |
Secrecy | –1.299 (0.542) | –.171 | –2.399 (127) | .02 | .861 | 1.161 |
Data privacy | –0.765 (0.577) | –.099 | –1.326 (127) | .19 | .792 | 1.262 |
Perceived stigma | –0.003 (0.048) | –.004 | –0.057 (127) | .95 | .770 | 1.299 |
Age | –0.052 (0.028) | –.151 | –1.842 (127) | .07 | .648 | 1.543 |
Education | 0.770 (0.567) | .102 | 1.357 (127) | .18 | .782 | 1.278 |
a Overall model fit: F(11,127)=9.221, P<.001; R²=0.444; N=139.
b VIF: variance inflation factor.
c Not applicable.
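The model in Table 4 can be sketched as follows. This is an illustrative reimplementation on simulated data, not the authors' code; the variable names, effect sizes, and noise level are assumptions, and only two of the eleven predictors are included for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Simulated predictors (ranges loosely follow the scales in Table 2)
hbv_social_media_use = rng.uniform(0, 24, n)
interest_in_studies = rng.uniform(0, 8, n)
sma = (4.0 + 0.28 * hbv_social_media_use + 0.28 * interest_in_studies
       + rng.normal(0, 2.0, n))              # dependent variable: SMA score

# OLS fit: design matrix with an intercept column
X = np.column_stack([np.ones(n), hbv_social_media_use, interest_in_studies])
beta, *_ = np.linalg.lstsq(X, sma, rcond=None)

# Standard errors and t statistics from the usual OLS formulas
resid = sma - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
t_stats = beta / se

def vif(X, j):
    """Variance inflation factor for column j: regress it on the
    remaining columns; VIF = 1 / (1 - R^2) = SS_tot / SS_res."""
    others = np.delete(X, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    r = X[:, j] - others @ coef
    ss_res = r @ r
    ss_tot = ((X[:, j] - X[:, j].mean()) ** 2).sum()
    return ss_tot / ss_res

print("B:", beta.round(3))
print("t:", t_stats.round(2))
print("VIF(hbv_social_media_use):", round(vif(X, 1), 3))
```

With independent simulated predictors, the VIF stays close to 1; values noticeably above 1, as for some predictors in Table 4, indicate shared variance among predictors.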
We present the first empirical study investigating how adult patients with hepatitis B accept social media recruitment for clinical studies. Social media have been suggested to increase recruitment accrual, particularly for hard-to-reach populations [ 13 , 14 , 24 ]. Our study provides a more fine-grained contextualization of this potential. We find that acceptance of social media recruitment among patients with hepatitis B is associated with higher ongoing activity on social media with regard to hepatitis B (confirming H1), a generally high interest in participating in clinical studies for hepatitis B (confirming H3), and high trust in recruitment channels outside the clinical setting (confirming H4a). Patients with these characteristics are, consequently, recruitable via social media under the assumptions that (1) patients are most effectively recruited via social media if they accept this channel as a recruitment method and (2) people who do not accept this recruitment channel should also not be recruited in this way.
Yet, 56 (28.7%) out of 195 participants reported an acceptance score of <5 and, thus, rejected being recruited via social media. Moreover, only 20 (10.3%) out of 195 participants reported an acceptance score >11, indicating high acceptance. These findings indicate that recruitment success via social media might be limited among patients with hepatitis B in Germany and underline the importance of using multiple recruitment channels to facilitate diversity and equitable health care access, particularly for patient groups considered vulnerable [ 11 ].
Contrary to what we had hypothesized, SMA was not associated with social media literacy (rejecting H2), data privacy needs (rejecting H5b), or perceived hepatitis B–related stigma (rejecting H6), although reported secrecy around the hepatitis B diagnosis was a predictor (confirming H5a). Moreover, trust in medical information sources, demographic variables (age and education), and the overall frequency of social media use were not associated with SMA. The results for H2 and H4b are not surprising, as the preceding qualitative interviews did not explicitly indicate a linear connection between digital literacy and the acceptance of social media recruitment. Our study cannot exclude a nonlinear association, but another survey study also found that digital literacy did not directly affect the intention to use digital technology [ 25 ]. Furthermore, trust is a multifaceted concept [ 26 , 27 ], which is why the subjects of trust were split into medical information sources and other advertisement channels. Hence, it is not unexpected that trust in medical information sources is not associated with SMA.
The rejection of H5b (data privacy) was more surprising, particularly because the qualitative interviews indicated strong connections between data privacy and SMA. In addition, the scholarly debate around data privacy has been very salient: data ethicists have repeatedly emphasized issues related to data privacy and transparency in the context of social media use for research [ 12 , 28 , 29 ]. The European General Data Protection Regulation likewise emphasizes the transparent use of data and the rights of data subjects [ 30 ]. Moreover, various scandals (eg, related to the US presidential election in 2016 and the UK Brexit referendum) diminished users’ trust in social media platforms and increased awareness of data privacy in that context [ 31 , 32 ]. A recent population survey conducted in Germany, the United Kingdom, and the United States confirmed high levels of concern regarding data privacy in all included countries [ 33 ]. Given these public discussions about social media activities being problematic for data privacy, it is particularly astonishing that data privacy concerns (as operationalized in our study) did not predict SMA. The findings align with discussions around the privacy paradox: numerous studies have confirmed that social media users display limited data protection behavior despite being concerned about their privacy [ 34 - 36 ]. In line with this, the aforementioned scandals have not resulted in a decline in Facebook users [ 37 , 38 ]. Other studies suggest poor user awareness of online privacy [ 39 ] and fatigue in engaging with privacy-related risks [ 40 ]. It seems that the surveyed population with hepatitis B in Germany is also affected by this privacy paradox.
The rejection of H6 (association with stigma) was surprising, too, particularly because of the strong association between hepatitis B and stigma in other studies. An Indian survey study found that most surveyed patients with hepatitis B were subject to severe stigma and moderate to severe discrimination, with male gender, unemployment, and illiteracy being predictors of discrimination [ 6 ]. Other survey studies from Australia, Turkey, and Serbia confirmed self-reported perceptions of stigma in 35% to 47% of patients with hepatitis B and 60% to 65% of patients with hepatitis C [ 10 , 41 , 42 ]. An Iranian qualitative study found that patients with hepatitis B conceptualized stigma as both extrinsic (eg, discrimination, public embarrassment, or blame) and intrinsic (eg, perceived rejection, social isolation, and frustration) [ 8 ]. Although this empirical evidence illustrates the relative importance of stigma in the context of hepatitis B, stigma did not predict patients’ acceptance of social media recruitment in our study. Instead, our findings suggest that the perceived secrecy of a hepatitis B diagnosis, which seems to be unrelated to the perception of stigma, is informative for social media recruitment acceptance. This indicates that perceptions of stigma in other stigmatized diseases (eg, sexually transmitted diseases and psychiatric disorders) might not influence patients’ acceptance of being recruited via social media for clinical studies. However, empirical studies within these populations need to confirm this.
Our survey showed a relatively balanced representation of genders. This aligns with a German serological study from 2011, which indicated no statistically significant difference in the prevalence of acute or chronic hepatitis B infection in men and women [ 43 ]. In terms of age distribution, the survey study covered a diverse range of age groups, mirroring the distribution found in the German serological study [ 43 ]. On the basis of these observations, the survey sample overall is representative of the population with hepatitis B in Germany regarding gender and age.
However, it is essential to consider potential limitations and sources of bias. The recruitment strategy used, relying primarily on venue-based recruitment within a clinical setting, might introduce selection bias, as it may not fully capture the diverse population that exists outside such settings. In addition, only 30.4% (285/939) of estimated incoming patients received the questionnaire, which might introduce additional selection bias. We attempted to mitigate this by explicitly briefing the study nurses to avoid self-selection when distributing the survey. The low distribution rate was caused mainly by administrative burden, resulting in weeks during which no questionnaires were distributed; because these lapses were unrelated to patient characteristics, we do not expect them to have a large impact on selection bias.
In addition, the study’s restriction to the German language may have impaired the accessibility of the questionnaire for participants whose mother tongue is not German. Furthermore, the exclusive focus on a German setting may limit the generalizability of the findings to a broader international context, potentially impacting the study’s external validity. Finally, it is important to note that we shortened the questionnaire from its original length after discussion with clinical colleagues, who provided the feedback that the questionnaire was too long. As part of this shortening, some validated scales were replaced by self-developed scales, which may have implications for the comprehensiveness and depth of the data collected.
Consequently, the attitudes of patients with other medical conditions toward social media recruitment, and a comparison with the attitudes of patients with hepatitis B assessed in this study, should be subject to further research. Similarly, it will be important to study how the different social media platforms, their underlying logics, use patterns, and other factors might influence patients’ acceptance of social media recruitment over time.
This study provides the first quantitative data on the acceptance of social media as a recruitment channel for clinical studies. In the context of hepatitis B in Germany, acceptance of being recruited via social media was very limited: more than 1 in 4 participants (28.7%) rejected this recruitment channel. The study sets out to be a reference point for future studies assessing attitudes toward, and acceptance of, social media recruitment for clinical studies. Such empirical inquiries can facilitate the work of researchers designing clinical studies as well as of ethics review boards in balancing the risks and benefits of social media recruitment in a context-specific manner. Moreover, this study provides guidance for researchers considering social media recruitment and for ethics review boards judging such undertakings, by cautioning against the potentially low acceptance that social media–based recruitment might encounter in some patient populations. This should be weighed against the risks of social media recruitment for the target populations.
Similarly relevant for practice, the findings indicate that social media recruitment is particularly accepted in patient populations with a high interest in participating in clinical studies. This is especially the case for diseases with insufficient treatment options and for historically neglected diseases with high unmet needs [ 44 ]. Using social media as a recruitment channel for studies targeting these patient groups might thus encounter higher acceptance than in this study. Perceived stigma and data privacy needs played no statistically significant role among patients, suggesting that these concerns are unrelated to the acceptance of social media recruitment.
This study received funding from the European Union’s Horizon 2020 research and innovation program (848223; TherVacB). This publication reflects only the authors’ views, and the European Commission is not liable for any use that may be made of the information contained therein. The authors would like to thank all TherVacB clinical project partners, who helped recruit participants for this study and provided feedback on the questionnaire, for their kind collaboration. The authors would also like to thank all patients with hepatitis B who took the time to participate in the survey.
None declared.
Response rate information.
Questionnaire.
Assumptions checks for regression analyses.
Description of each item of the questionnaire.
SMA: social media acceptance
Edited by A Mavragani; submitted 27.10.23; peer-reviewed by D Kukadiya, WB Lee; comments to author 26.02.24; revised version received 08.03.24; accepted 03.06.24; published 26.08.24.
©Theresa Willem, Bettina M Zimmermann, Nina Matthes, Michael Rost, Alena Buyx. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 26.08.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
Methodology
Published on June 12, 2020 by Pritha Bhandari . Revised on June 22, 2023.
Quantitative research is the process of collecting and analyzing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalize results to wider populations.
Quantitative research is the opposite of qualitative research , which involves collecting and analyzing non-numerical data (e.g., text, video, or audio).
Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc.
You can use quantitative research methods for descriptive, correlational or experimental research.
Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalized to broader populations based on the sampling method used.
To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).
Research method | How to use | Example |
---|---|---|
Experiment | Control or manipulate an independent variable to measure its effect on a dependent variable. | To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention. |
Survey | Ask questions of a group of people in person, over the phone, or online. | You distribute questionnaires with rating scales to first-year international college students to investigate their experiences of culture shock. |
(Systematic) observation | Identify a behavior or occurrence of interest and monitor it in its natural setting. | To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds. |
Secondary research | Collect data that has been gathered for other purposes, e.g., national surveys or historical records. | To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from widely available sources. |
Note that quantitative research is at risk for certain research biases , including information bias , omitted variable bias , sampling bias , or selection bias . Be sure that you’re aware of potential biases as you collect and analyze your data to prevent them from impacting your work too much.
Once data is collected, you may need to process it before it can be analyzed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .
Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualize your data and check for any trends or outliers.
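These summary measures can be computed directly with Python's standard library (a quick sketch; the ratings below are invented for illustration):

```python
import statistics

# Hypothetical self-ratings of procrastination (1-5 scale) for one group
ratings = [2, 3, 3, 4, 5, 3, 2, 4, 3, 5]

mean = statistics.mean(ratings)    # average rating
mode = statistics.mode(ratings)    # most frequent rating
sd = statistics.stdev(ratings)     # sample standard deviation (variability)

print(mean, mode, round(sd, 2))    # → 3.4 3 1.07
```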
Using inferential statistics , you can make predictions or generalizations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter .
First, you use descriptive statistics to get a summary of the data. You find the mean (average) and the mode (most frequent rating) of procrastination of the two groups, and plot the data to see if there are any outliers.
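An inferential step for the same example could be a two-sample t test comparing the group means. The sketch below simulates the two groups (the data, group sizes, and effect size are invented) and computes Welch's t statistic by hand:

```python
import random
from statistics import mean, stdev

random.seed(1)

# Simulated post-intervention self-ratings (clipped to the 1-5 scale):
# intervention group vs. comparable-task control group
intervention = [min(5, max(1, random.gauss(2.6, 0.8))) for _ in range(100)]
control = [min(5, max(1, random.gauss(3.6, 0.8))) for _ in range(100)]

# Welch's t statistic: difference in means over its standard error
se = (stdev(intervention) ** 2 / len(intervention)
      + stdev(control) ** 2 / len(control)) ** 0.5
t = (mean(control) - mean(intervention)) / se

# Rule of thumb: with ~200 observations, |t| > 2 is roughly significant
# at the 5% level (a proper test would compare against the t distribution)
print(f"t = {t:.2f}, significant: {abs(t) > 2}")
```

In practice, a statistics package would also report the exact p-value and degrees of freedom rather than relying on the |t| > 2 rule of thumb.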
You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.
Quantitative research is often used to standardize data collection and generalize findings . Strengths of this approach include:
Repeating the study is possible because of standardized data collection protocols and tangible definitions of abstract concepts.
The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.
Data from large samples can be processed and analyzed using reliable and consistent procedures through quantitative data analysis.
Using formalized and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.
Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:
Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.
Predetermined variables and measurement procedures can mean that you ignore other relevant observations.
Despite standardized procedures, structural biases can still affect quantitative research. Missing data , imprecise measurements or inappropriate sampling methods are biases that can lead to the wrong conclusions.
Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
Operationalization means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.
Reliability and validity are both about how well a method measures something: reliability refers to the consistency of a measure (whether the results can be reproduced under the same conditions), while validity refers to its accuracy (whether the results really measure what they are supposed to measure).
If you are doing experimental research, you also have to consider the internal and external validity of your experiment.
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
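The "how likely by chance" logic can be made concrete with a permutation test, which estimates a p-value without any distributional assumptions. The data below are invented for illustration:

```python
import random

random.seed(0)

# Invented exam scores for two small groups
group_a = [78, 82, 85, 88, 90, 91]
group_b = [60, 62, 65, 68, 70, 72]
observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

# Permutation test: reshuffle the pooled scores many times and count how
# often a group difference at least as large as the observed one arises
# purely by chance
pooled = group_a + group_b
extreme = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    a, b = pooled[:6], pooled[6:]
    if abs(sum(a) / 6 - sum(b) / 6) >= observed:
        extreme += 1

p_value = extreme / n_perm
print(f"p = {p_value:.4f}")  # a small p: the difference is unlikely by chance
```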
Bhandari, P. (2023, June 22). What Is Quantitative Research? | Definition, Uses & Methods. Scribbr. Retrieved August 27, 2024, from https://www.scribbr.com/methodology/quantitative-research/