Adult learning online education:
About the example: Boolean searches were conducted on November 4, 2019; result numbers may vary at a later date. No additional database limiters were set to further narrow search returns.
Database strategies for targeted search results.
Most databases include limiters, or additional parameters, that you can use to strategically focus search results. EBSCO databases, such as Education Research Complete and Academic Search Complete, provide several of these options.
Keep in mind that these tools are called limiters for a reason: adding them to a search reduces the number of results returned, which can be a double-edged sword. Use limiters with care. When starting a search, consider leaving limiters off until your initial literature screening is complete. The second or third pass through your research may be the ideal time to focus on specific time periods or material types (scholarly journals vs. newspapers).
Expanding your search term at the root.
Truncation is often referred to as 'wildcard' searching. Databases may have their own specific wildcard characters; however, the most commonly used are the asterisk (*) and the question mark (?). When used within your search, they will expand the returned results.
Using the asterisk wildcard will return varied spellings of the truncated word. In the following example, the search term education was truncated after the letter "t."
Original search | Truncated search
---|---
adult education | adult educat*

Results included: educate, education, educator, educators'/educators, educating, and educational.
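To make the mechanics concrete, here is a small sketch (in Python, against a made-up word list) of how a search engine might expand a truncated term. The function name and sample terms are illustrative only, not EBSCO's actual implementation:

```python
import re

def truncation_to_regex(term):
    """Convert a database-style truncated term (e.g. 'educat*')
    into a regular expression: '*' matches any run of word
    characters, '?' matches exactly one."""
    pattern = re.escape(term).replace(r"\*", r"\w*").replace(r"\?", r"\w")
    return re.compile(rf"^{pattern}$", re.IGNORECASE)

# A made-up index of terms to search against
terms = ["educate", "education", "educators", "educating", "editorial"]

rx = truncation_to_regex("educat*")
matches = [t for t in terms if rx.match(t)]
# matches == ['educate', 'education', 'educators', 'educating']
```

The single-character wildcard works the same way: `truncation_to_regex("wom?n")` would match both "woman" and "women".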
Explore these database help pages for additional information on crafting search terms.
Tips for saving research directly to Google Drive.
It is possible to save articles (PDF and HTML) and abstracts in EBSCOhost databases directly to Google Drive. Select the Google Drive icon and authenticate using a Google account, and an EBSCO folder will be created in your account. This is a great option for managing your research. If you document your research in a Google Doc, consider linking each entry to the corresponding article saved in Drive.
EBSCOHost Databases & Google Drive: Managing your Research
This video features an overview of how to use Google Drive with EBSCO databases to help manage your research. It presents information for connecting an active Google account to EBSCO and steps needed to provide permission for EBSCO to manage a folder in Drive.
About the Video: Closed captioning is available, select CC from the video menu. If you need to review a specific area on the video, view on YouTube and expand the video description for access to topic time stamps. A video transcript is provided below.
What is a literature review?
A definition from the Online Dictionary for Library and Information Science:
A literature review is "a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works" (Reitz, 2014).
A systematic review is "a literature review focused on a specific research question, which uses explicit methods to minimize bias in the identification, appraisal, selection, and synthesis of all the high-quality evidence pertinent to the question" (Reitz, 2014).
EBSCO Connect [Discovery and Search]. (2022). Searching with Boolean operators. Retrieved May 3, 2022, from https://connect.ebsco.com/s/?language=en_US
EBSCO Connect [Discovery and Search]. (2022). Searching with wildcards and truncation symbols. Retrieved May 3, 2022, from https://connect.ebsco.com/s/?language=en_US
Machi, L.A., & McEvoy, B.T. (2009). The literature review. Thousand Oaks, CA: Corwin Press.
Reitz, J.M. (2014). Online dictionary for library and information science. ABC-CLIO, Libraries Unlimited. Retrieved from https://www.abc-clio.com/ODLIS/odlis_A.aspx
Ridley, D. (2008). The literature review: A step-by-step guide for students. Thousand Oaks, CA: Sage Publications, Inc.
Schedule an appointment.
Contact a librarian directly (email), or submit a request form. If you have worked with someone before, you can request them on the form.
Quantitative research methods are concerned with the planning, design, and implementation of strategies to collect and analyze data. Descartes, the seventeenth-century philosopher, suggested that how the results are achieved is often more important than the results themselves, as the journey taken along the research path is a journey of discovery. High-quality quantitative research is characterized by the attention given to the methods and the reliability of the tools used to collect the data. The ability to critique research in a systematic way is an essential component of a health professional’s role in order to deliver high quality, evidence-based healthcare. This chapter is intended to provide a simple overview of the way new researchers and health practitioners can understand and employ quantitative methods. The chapter offers practical, realistic guidance in a learner-friendly way and uses a logical sequence to understand the process of hypothesis development, study design, data collection and handling, and finally data analysis and interpretation.
Babbie ER. The practice of social research. 14th ed. Belmont: Wadsworth Cengage; 2016.
Descartes (1637). Cited in: Halverston W. A concise introduction to philosophy. 3rd ed. New York: Random House; 1976.
Doll R, Hill AB. The mortality of doctors in relation to their smoking habits. BMJ. 1954;328(7455):1529–33. https://doi.org/10.1136/bmj.328.7455.1529 .
Liamputtong P. Research methods in health: foundations for evidence-based practice. 3rd ed. Melbourne: Oxford University Press; 2017.
McNabb DE. Research methods in public administration and nonprofit management: quantitative and qualitative approaches. 2nd ed. New York: Armonk; 2007.
Merriam-Webster. Dictionary. http://www.merriam-webster.com . Accessed 20th December 2017.
Olesen Larsen P, von Ins M. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics. 2010;84(3):575–603.
Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619–25. https://doi.org/10.1097/PRS.0b013e3181de24bc .
Petrie A, Sabin C. Medical statistics at a glance. 2nd ed. London: Blackwell Publishing; 2005.
Portney LG, Watkins MP. Foundations of clinical research: applications to practice. 3rd ed. New Jersey: Pearson Publishing; 2009.
Sheehan J. Aspects of research methodology. Nurse Educ Today. 1986;6:193–203.
Wilson LA, Black DA. Health, science research and research methods. Sydney: McGraw Hill; 2013.
Authors and affiliations.
School of Science and Health, Western Sydney University, Penrith, NSW, Australia
Leigh A. Wilson
Faculty of Health Science, Discipline of Behavioural and Social Sciences in Health, University of Sydney, Lidcombe, NSW, Australia
Correspondence to Leigh A. Wilson .
Editors and affiliations.
Pranee Liamputtong
© 2019 Springer Nature Singapore Pte Ltd.
Cite this entry.
Wilson, L.A. (2019). Quantitative Research. In: Liamputtong, P. (ed.) Handbook of Research Methods in Health Social Sciences. Springer, Singapore. https://doi.org/10.1007/978-981-10-5251-4_54

DOI: https://doi.org/10.1007/978-981-10-5251-4_54
Published: 13 January 2019
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-5250-7
Online ISBN: 978-981-10-5251-4
Published on 4 April 2022 by Pritha Bhandari. Revised on 10 October 2022.
Quantitative research is the process of collecting and analysing numerical data. It can be used to find patterns and averages, make predictions, test causal relationships, and generalise results to wider populations.
Quantitative research is the opposite of qualitative research , which involves collecting and analysing non-numerical data (e.g. text, video, or audio).
Quantitative research is widely used in the natural and social sciences: biology, chemistry, psychology, economics, sociology, marketing, etc.
You can use quantitative research methods for descriptive, correlational or experimental research.
Correlational and experimental research can both be used to formally test hypotheses , or predictions, using statistics. The results may be generalised to broader populations based on the sampling method used.
To collect quantitative data, you will often need to use operational definitions that translate abstract concepts (e.g., mood) into observable and quantifiable measures (e.g., self-ratings of feelings and energy levels).
Research method | How to use | Example
---|---|---
Experiment | Control or manipulate an independent variable to measure its effect on a dependent variable. | To test whether an intervention can reduce procrastination in college students, you give equal-sized groups either a procrastination intervention or a comparable task. You compare self-ratings of procrastination behaviors between the groups after the intervention.
Survey | Ask questions of a group of people in person, over the phone, or online. | You distribute questionnaires with rating scales to first-year international college students to investigate their experiences of culture shock.
(Systematic) observation | Identify a behavior or occurrence of interest and monitor it in its natural setting. | To study college classroom participation, you sit in on classes to observe them, counting and recording the prevalence of active and passive behaviors by students from different backgrounds.
Secondary research | Collect data that has been gathered for other purposes, e.g., national surveys or historical records. | To assess whether attitudes towards climate change have changed since the 1980s, you collect relevant questionnaire data from widely available datasets.
Once data is collected, you may need to process it before it can be analysed. For example, survey and test data may need to be transformed from words to numbers. Then, you can use statistical analysis to answer your research questions .
Descriptive statistics will give you a summary of your data and include measures of averages and variability. You can also use graphs, scatter plots and frequency tables to visualise your data and check for any trends or outliers.
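As a minimal sketch, assuming a small set of hypothetical survey scores, the usual measures of average and variability can be computed with Python's built-in statistics module:

```python
import statistics

# Hypothetical self-ratings (1-10) from a small survey sample
scores = [4, 7, 5, 6, 8, 5, 4, 6, 7, 5]

mean = statistics.mean(scores)      # average: 5.7
median = statistics.median(scores)  # middle value: 5.5
spread = statistics.stdev(scores)   # sample standard deviation: ~1.34
```

A scatter plot or frequency table of the same scores would then let you spot trends or outliers by eye before moving on to inferential tests.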
Using inferential statistics , you can make predictions or generalisations based on your data. You can test your hypothesis or use your sample data to estimate the population parameter .
You can also assess the reliability and validity of your data collection methods to indicate how consistently and accurately your methods actually measured what you wanted them to.
Quantitative research is often used to standardise data collection and generalise findings . Strengths of this approach include:
Repeating the study is possible because of standardised data collection protocols and tangible definitions of abstract concepts.
The study can be reproduced in other cultural settings, times or with different groups of participants. Results can be compared statistically.
Data from large samples can be processed and analysed using reliable and consistent procedures through quantitative data analysis.
Using formalised and established hypothesis testing procedures means that you have to carefully consider and report your research variables, predictions, data collection and testing methods before coming to a conclusion.
Despite the benefits of quantitative research, it is sometimes inadequate in explaining complex research topics. Its limitations include:
Using precise and restrictive operational definitions may inadequately represent complex concepts. For example, the concept of mood may be represented with just a number in quantitative research, but explained with elaboration in qualitative research.
Predetermined variables and measurement procedures can mean that you ignore other relevant observations.
Despite standardised procedures, structural biases can still affect quantitative research. Missing data , imprecise measurements or inappropriate sampling methods are biases that can lead to the wrong conclusions.
Quantitative research often uses unnatural settings like laboratories or fails to consider historical and cultural contexts that may affect data collection and results.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.
Operationalisation means turning abstract conceptual ideas into measurable observations.
For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.
Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.
Reliability and validity are both about how well a method measures something: reliability concerns the consistency of a measure, while validity concerns its accuracy.
If you are doing experimental research , you also have to consider the internal and external validity of your experiment.
Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
Bhandari, P. (2022, October 10). What Is Quantitative Research? | Definition & Methods. Scribbr. Retrieved 9 September 2024, from https://www.scribbr.co.uk/research-methods/introduction-to-quantitative-research/
Researchers using qualitative methods tend to take the approach summarised in the operational description below.
Qualitative Research: an operational description
Purpose : explain; gain insight and understanding of phenomena through intensive collection and study of narrative data
Approach: inductive; value-laden/subjective; holistic, process-oriented
Hypotheses: tentative, evolving; based on the particular study
Lit. Review: limited; may not be exhaustive
Setting: naturalistic, when and as much as possible
Sampling : for the purpose; not necessarily representative; for in-depth understanding
Measurement: narrative; ongoing
Design and Method: flexible, specified only generally; based on non-intervention, minimal disturbance, such as historical, ethnographic, or case studies
Data Collection: document collection, participant observation, informal interviews, field notes
Data Analysis: raw data is words; analysis is ongoing and involves synthesis
Data Interpretation: tentative, reviewed on ongoing basis, speculative
Researchers using quantitative methods tend to take the approach summarised in the operational description below.
Quantitative research: an operational description
Purpose: explain, predict, or control phenomena through focused collection and analysis of numerical data
Approach: deductive; tries to be value-free/objective; outcome-oriented
Hypotheses : Specific, testable, and stated prior to study
Lit. Review: extensive; may significantly influence a particular study
Setting: controlled to the degree possible
Sampling: uses largest manageable random/randomized sample, to allow generalization of results to larger populations
Measurement: standardized, numerical; "at the end"
Design and Method: Strongly structured, specified in detail in advance; involves intervention, manipulation and control groups; descriptive, correlational, experimental
Data Collection: via instruments, surveys, experiments, semi-structured formal interviews, tests or questionnaires
Data Analysis: raw data is numbers; at end of study, usually statistical
Data Interpretation: formulated at end of study; stated as a degree of certainty
This page on qualitative and quantitative research has been adapted and expanded from a handout by Suzy Westenkirchner. Used with permission.
Images from https://www.editage.com/insights/qualitative-quantitative-or-mixed-methods-a-quick-guide-to-choose-the-right-design-for-your-research?refer-type=infographics.
Quantitative Research
Quantitative research is the methodology which researchers use to test theories about people’s attitudes and behaviors based on numerical and statistical evidence. Researchers sample a large number of users (e.g., through surveys) to indirectly obtain measurable, bias-free data about users in relevant situations.
“Quantification clarifies issues which qualitative analysis leaves fuzzy. It is more readily contestable and likely to be contested. It sharpens scholarly discussion, sparks off rival hypotheses, and contributes to the dynamics of the research process.” — Angus Maddison, Notable scholar of quantitative macro-economic history
See how quantitative research helps reveal cold, hard facts about users which you can interpret and use to improve your designs.
Quantitative research is a subset of user experience (UX) research . Unlike its softer, more individual-oriented “counterpart”, qualitative research , quantitative research means you collect statistical/numerical data to draw generalized conclusions about users’ attitudes and behaviors . Compare and contrast quantitative with qualitative research, below:
 | Quantitative research | Qualitative research
---|---|---
You aim to determine | The “what”, “where” & “when” of the users’ needs & problems – to help keep your project’s focus on track during development | The “why” – to get behind how users approach their problems in their world
Research methods are | Highly structured (e.g., surveys) – to gather data about what users do & find patterns in large user groups | Loosely structured (e.g., contextual inquiries) – to learn why users behave how they do & explore their opinions
Number of representative users | Ideally 30+ | Often around 5
Level of contact with users | Less direct & more remote (e.g., analytics) | More direct & less remote (e.g., usability testing to examine users’ stress levels when they use your design)
Statistically | Reliable – if you have enough test users | Less reliable, with need for great care when handling non-numerical data (e.g., opinions), as your own opinions might influence findings
Quantitative research is often best done from early on in projects since it helps teams to optimally direct product development and avoid costly design mistakes later. As you typically get user data from a distance—i.e., without close physical contact with users—also applying qualitative research will help you investigate why users think and feel the ways they do. Indeed, in an iterative design process quantitative research helps you test the assumptions you and your design team develop from your qualitative research. Regardless of the method you use, with proper care you can gather objective and unbiased data – information which you can complement with qualitative approaches to build a fuller understanding of your target users. From there, you can work towards firmer conclusions and drive your design process towards a more realistic picture of how target users will ultimately receive your product.
Quantitative analysis helps you test your assumptions and establish clearer views of your users in their various contexts.
There are many quantitative research methods, and they help uncover different types of information on users. Some methods, such as A/B testing, are typically done on finished products, while others such as surveys could be done throughout a project’s design process. Here are some of the most helpful methods:
A/B testing – You test two or more versions of your design on users to find the most effective. Each variation differs by just one feature and may or may not affect how users respond. A/B testing is especially valuable for testing assumptions you’ve drawn from qualitative research. The only potential concerns here are scale—in that you’ll typically need to conduct it on thousands of users—and arguably more complexity in terms of considering the statistical significance involved.
Analytics – With tools such as Google Analytics, you measure metrics (e.g., page views, click-through rates) to build a picture (e.g., “How many users take how long to complete a task?”).
Desirability Studies – You measure an aspect of your product (e.g., aesthetic appeal) by typically showing it to participants and asking them to select from a menu of descriptive words. Their responses can reveal powerful insights (e.g., 78% associate the product/brand with “fashionable”).
Surveys and Questionnaires – When you ask for many users’ opinions, you will gain massive amounts of information. Keep in mind that you’ll have data about what users say they do, as opposed to insights into what they do . You can get more reliable results if you incentivize your participants well and use the right format.
Tree Testing – You remove the user interface so users must navigate the site and complete tasks using links alone. This helps you see if an issue is related to the user interface or information architecture.
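To illustrate the kind of metric an analytics tool reports, here is a short sketch computing click-through rates from hypothetical page-view counts; the page names and numbers are invented for the example:

```python
# Hypothetical analytics export: page views and link clicks per page
pages = {
    "home":    {"views": 12000, "clicks": 420},
    "pricing": {"views": 3000,  "clicks": 240},
}

def click_through_rate(views, clicks):
    """Click-through rate as a percentage of page views."""
    return 100 * clicks / views

ctrs = {name: click_through_rate(p["views"], p["clicks"])
        for name, p in pages.items()}
# ctrs == {'home': 3.5, 'pricing': 8.0}
```

Even a toy calculation like this shows why raw counts alone mislead: the pricing page gets far fewer views, yet its click-through rate is more than double the home page's.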
Another powerful benefit of conducting quantitative research is that you can keep your stakeholders’ support with hard facts and statistics about your design’s performance—which can show what works well and what needs improvement—and prove a good return on investment. You can also produce reports to check statistics against different versions of your product and your competitors’ products.
Most quantitative research methods are relatively cheap. Since no single research method can help you answer all your questions, it’s vital to judge which method suits your project at the time/stage. Remember, it’s best to spend appropriately on a combination of quantitative and qualitative research from early on in development. Design improvements can be costly, and so you can estimate the value of implementing changes when you get the statistics to suggest that these changes will improve usability. Overall, you want to gather measurements objectively, where your personality, presence and theories won’t create bias.
Take our User Research course to see how to get the most from quantitative research.
See how quantitative research methods fit into your design research landscape .
This insightful piece shows the value of pairing quantitative with qualitative research .
Find helpful tips on combining quantitative research methods in mixed methods research .
Qualitative and quantitative research differ primarily in the data they produce. Quantitative research yields numerical data to test hypotheses and quantify patterns. It's precise and generalizable. Qualitative research, on the other hand, generates non-numerical data and explores meanings, interpretations, and deeper insights. Watch our video featuring Professor Alan Dix on different types of research methods.
This video elucidates the nuances and applications of both research types in the design field.
In quantitative research, determining a good sample size is crucial for the reliability of the results. William Hudson, CEO of Syntagm, emphasizes the importance of statistical significance with an example in our video.
He illustrates that even with varying results between design choices, we need to discern whether the differences are statistically significant or products of chance. This ensures the validity of the results, allowing for more accurate interpretations. Statistical tools like chi-square tests can aid in analyzing the results effectively. To delve deeper into these concepts, take William Hudson’s Data-Driven Design: Quantitative UX Research Course .
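As a minimal sketch of the chi-square test mentioned above, applied to invented A/B conversion counts (pure Python for clarity; a real analysis would typically use a statistics library):

```python
def chi_square_2x2(a_yes, a_n, b_yes, b_n):
    """Pearson chi-square statistic for a 2x2 contingency table of
    conversions vs. non-conversions in two design variants."""
    table = [[a_yes, a_n - a_yes], [b_yes, b_n - b_yes]]
    row_totals = [sum(r) for r in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Invented results: variant A converts 120/1000, variant B 150/1000
stat = chi_square_2x2(120, 1000, 150, 1000)

# 3.84 is the critical value for df=1 at the 5% significance level
significant = stat > 3.84
```

With these made-up numbers the statistic lands just above the 3.84 threshold, which is exactly the point: a 12% vs. 15% conversion difference that looks decisive at a glance is only barely distinguishable from chance even with a thousand users per variant.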
Quantitative research is crucial as it provides precise, numerical data that allows for high levels of statistical inference. Our video from William Hudson, CEO of Syntagm, highlights the importance of analytics in examining existing solutions.
Quantitative methods, like analytics and A/B testing, are pivotal for identifying areas for improvement, understanding user behaviors, and optimizing user experiences based on solid, empirical evidence. This empirical nature ensures that the insights derived are reliable, allowing for practical improvements and innovations. Perhaps most importantly, numerical data is useful to secure stakeholder buy-in and defend design decisions and proposals. Explore this approach in our Data-Driven Design: Quantitative Research for UX Research course and learn from William Hudson’s detailed explanations of when and why to use analytics in the research process.
After establishing initial requirements, statistical data is crucial for informed decisions through quantitative research. William Hudson, CEO of Syntagm, sheds light on the role of quantitative research throughout a typical project lifecycle in this video:
During the analysis and design phases, quantitative research helps validate user requirements and understand user behaviors. Surveys and analytics are standard tools, offering insights into user preferences and design efficacy. Quantitative research can also be used in early design testing, allowing for optimal design modifications based on user interactions and feedback, and it’s fundamental for A/B and multivariate testing once live solutions are available.
To write a compelling quantitative research question:
Create clear, concise, and unambiguous questions that address one aspect at a time.
Use common, short terms and provide explanations for unusual words.
Avoid leading, compound, and overlapping queries and ensure that questions are not vague or broad.
According to our video by William Hudson, CEO of Syntagm, quality and respondent understanding are vital in forming good questions.
He emphasizes the importance of addressing specific aspects and avoiding intimidating and confusing elements, such as extensive question grids or ranking questions, to ensure participant engagement and accurate responses. For more insights, see the article Writing Good Questions for Surveys .
Survey research is typically quantitative, collecting numerical data and statistical analysis to make generalizable conclusions. However, it can also have qualitative elements, mainly when it includes open-ended questions, allowing for expressive responses. Our video featuring the CEO of Syntagm, William Hudson, provides in-depth insights into when and how to effectively utilize surveys in the product or service lifecycle, focusing on user satisfaction and potential improvements.
He emphasizes the importance of surveys in triangulating data to back up qualitative research findings, ensuring we have a complete understanding of the user's requirements and preferences.
Descriptive research focuses on describing the subject being studied and getting answers to questions like the what, where, when, and who of the research question. However, it doesn't include the answers to the underlying reasons, or the "why", behind the answers obtained from the research. We can use both qualitative and quantitative methods to conduct descriptive research. Descriptive research does not describe the methods, but rather the data gathered through the research (regardless of the methods used).
When we use quantitative research and gather numerical data, we can use statistical analysis to understand relationships between different variables. Here’s William Hudson, CEO of Syntagm with more on correlation and how we can apply tests such as Pearson’s r and Spearman Rank Coefficient to our data.
This helps interpret phenomena such as user experience by analyzing session lengths and conversion values, revealing whether variables like time spent on a page affect checkout values, for example.
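As a hedged sketch of the Pearson's r calculation mentioned above, here is the computation on invented session-length and checkout-value pairs:

```python
import math

# Invented data: session length (minutes) vs. checkout value
session_minutes = [2, 4, 6, 8, 10]
checkout_value = [10, 18, 34, 38, 50]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

r = pearson_r(session_minutes, checkout_value)
# r is close to +1, indicating a strong positive linear relationship
```

For ranked or non-linear data, Spearman's rank coefficient applies the same formula to the ranks of the values rather than the values themselves.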
Random Sampling: Each individual in the population has an equitable opportunity to be chosen, which minimizes biases and simplifies analysis.
Systematic Sampling: Selecting every k-th item from a list after a random start. It's simpler and faster than random sampling when dealing with large populations.
Stratified Sampling: Segregate the population into subgroups or strata according to comparable characteristics. Then, samples are taken randomly from each stratum.
Cluster Sampling: Divide the population into clusters, randomly select some clusters, and sample all (or a random subset of) their members.
Multistage Sampling: Various sampling techniques are used at different stages to collect detailed information from diverse populations.
Convenience Sampling: The researcher selects the sample based on availability and willingness to participate, which may only represent part of the population.
Quota Sampling: Segment the population into subgroups, and samples are non-randomly selected to fulfill a predetermined quota from each subset.
These are just a few techniques, and choosing the right one depends on your research question, discipline, resource availability, and the level of accuracy required. In quantitative research, there isn't a one-size-fits-all sampling technique; choosing a method that aligns with your research goals and population is critical. However, a well-planned strategy is essential to avoid wasting resources and time, as highlighted in our video featuring William Hudson, CEO of Syntagm.
He emphasizes the importance of recruiting participants meticulously, ensuring their engagement and the quality of their responses. Accurate and thoughtful participant responses are crucial for obtaining reliable results. William also sheds light on dealing with failing participants and scrutinizing response quality to refine the outcomes.
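Two of the probability techniques listed above can be sketched in a few lines of Python; the population of participant IDs and the strata here are invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

population = list(range(1, 101))  # invented participant IDs 1..100

def systematic_sample(pop, k):
    """Systematic sampling: every k-th member after a random start
    within the first k positions."""
    start = random.randrange(k)
    return pop[start::k]

def stratified_sample(strata, n_per_stratum):
    """Stratified sampling: equal-sized random picks from each subgroup."""
    return {name: random.sample(members, n_per_stratum)
            for name, members in strata.items()}

sys_sample = systematic_sample(population, 10)  # 10 evenly spaced IDs
strata = {"undergrad": population[:50], "postgrad": population[50:]}
strat = stratified_sample(strata, 5)            # 5 IDs from each group
```

The stratified version guarantees each subgroup is represented, which a simple random draw from the whole population cannot promise for small samples.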
The 4 types of quantitative research are Descriptive, Correlational, Causal-Comparative/Quasi-Experimental, and Experimental Research. Descriptive research aims to depict ‘what exists’ clearly and precisely. Correlational research examines relationships between variables. Causal-comparative research investigates the cause-effect relationship between variables. Experimental research explores causal relationships by manipulating independent variables. To gain deeper insights into quantitative research methods in UX, consider enrolling in our Data-Driven Design: Quantitative Research for UX course.
The strength of quantitative research is its ability to provide precise numerical data for analyzing target variables. This allows for generalized conclusions and predictions about future occurrences, proving invaluable in various fields, including user experience. William Hudson, CEO of Syntagm, discusses the role of surveys, analytics, and testing in providing objective insights in our video on quantitative research methods, highlighting the significance of structured methodologies in eliciting reliable results.
To master quantitative research methods, enroll in our comprehensive course, Data-Driven Design: Quantitative Research for UX.
This course empowers you to leverage quantitative data to make informed design decisions, providing a deep dive into methods like surveys and analytics. Whether you’re a novice or a seasoned professional, this course at Interaction Design Foundation offers valuable insights and practical knowledge, ensuring you acquire the skills necessary to excel in user experience research. Explore our diverse topics to elevate your understanding of quantitative research methods.
Here’s the entire UX literature on Quantitative Research by the Interaction Design Foundation, collated in one place:
Take a deep dive into Quantitative Research with our course User Research – Methods and Best Practices.
How do you plan to design a product or service that your users will love, if you don't know what they want in the first place? As a user experience designer, you shouldn't leave it to chance to design something outstanding; you should make the effort to understand your users and build on that knowledge from the outset. User research is the way to do this, and it can therefore be thought of as the largest part of user experience design.
In fact, user research is often the first step of a UX design process—after all, you cannot begin to design a product or service without first understanding what your users want! As you gain the skills required, and learn about the best practices in user research, you’ll get first-hand knowledge of your users and be able to design the optimal product—one that’s truly relevant for your users and, subsequently, outperforms your competitors’.
This course will give you insights into the most essential qualitative research methods around and will teach you how to put them into practice in your design work. You’ll also have the opportunity to embark on three practical projects where you can apply what you’ve learned to carry out user research in the real world. You’ll learn details about how to plan user research projects and fit them into your own work processes in a way that maximizes the impact your research can have on your designs. On top of that, you’ll gain practice with different methods that will help you analyze the results of your research and communicate your findings to your clients and stakeholders—workshops, user journeys and personas, just to name a few!
By the end of the course, you’ll have not only a Course Certificate but also three case studies to add to your portfolio. And remember, a portfolio with engaging case studies is invaluable if you are looking to break into a career in UX design or user research!
We believe you should learn from the best, so we’ve gathered a team of experts to help teach this course alongside our own course instructors. That means you’ll meet a new instructor in each of the lessons on research methods who is an expert in their field—we hope you enjoy what they have in store for you!
Best practices for qualitative user research.
Published on April 12, 2019 by Raimo Streefkerk. Revised on June 22, 2023.
When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.
Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.
Quantitative research is at risk for research biases including information bias, omitted variable bias, sampling bias, or selection bias.

Qualitative research

Qualitative research is expressed in words. It is used to understand concepts, thoughts or experiences. This type of research enables you to gather in-depth insights on topics that are not well understood.
Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.
Table of contents:
- The differences between quantitative and qualitative research
- Data collection methods
- When to use qualitative vs. quantitative research
- How to analyze qualitative and quantitative data
- Other interesting articles
- Frequently asked questions about qualitative and quantitative research
Quantitative and qualitative research use different research methods to collect and analyze data, and they allow you to answer different kinds of research questions.
Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).
Many data collection methods can be either qualitative or quantitative. For example, in surveys, observational studies or case studies, your data can be represented as numbers (e.g., using rating scales or counting frequencies) or as words (e.g., with open-ended questions or descriptions of what you observe).
However, some methods are more commonly used in one type or the other.
A rule of thumb for deciding whether to use qualitative or quantitative data is:
For most research topics you can choose a qualitative, quantitative or mixed methods approach. Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach; your research question(s); whether you’re doing experimental, correlational, or descriptive research; and practical considerations such as time, money, availability of data, and access to respondents.
You survey 300 students at your university and ask them questions such as: “on a scale from 1-5, how satisfied are you with your professors?”
You can perform statistical analysis on the data and draw conclusions such as: “on average students rated their professors 4.4”.
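The arithmetic behind that conclusion is just an average over the closed-ended responses. A minimal sketch in Python, with hypothetical ratings chosen so the mean works out to the 4.4 in the example:

```python
# Hypothetical 1-5 satisfaction ratings from ten of the survey responses
ratings = [5, 4, 5, 3, 4, 5, 4, 5, 4, 5]

# Arithmetic mean: sum of ratings divided by number of respondents
mean_rating = sum(ratings) / len(ratings)
print(f"On average students rated their professors {mean_rating:.1f}")  # 4.4
```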
You conduct in-depth interviews with 15 students and ask them open-ended questions such as: “How satisfied are you with your studies?”, “What is the most positive aspect of your study program?” and “What can be done to improve the study program?”
Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.
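Finding commonalities across transcripts is usually a manual coding exercise, but a first pass can be automated with simple word counts. A sketch, assuming hypothetical transcript excerpts and a hand-picked stopword list:

```python
from collections import Counter

# Hypothetical fragments from the transcribed interviews
transcripts = [
    "I am satisfied but the workload is heavy",
    "The workload could be reduced",
    "More feedback and less workload would help",
]

# Words too common to signal a theme (hand-picked for this toy example)
stopwords = {"i", "am", "but", "the", "is", "could", "be",
             "and", "would", "more", "less"}

# Flatten all transcripts into one list of content words
words = [w for t in transcripts
         for w in t.lower().split() if w not in stopwords]

# The most frequent remaining words are candidate recurring themes
themes = Counter(words).most_common(3)
```

Here “workload” surfaces in all three excerpts, flagging it as a pattern worth coding in detail by hand.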
You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.
It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.
Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.
Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.
Applications such as Excel, SPSS, or R can be used to calculate things like:
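Alongside Excel, SPSS, or R, the same descriptive statistics are available in Python's standard library; a sketch with hypothetical survey scores:

```python
import statistics

scores = [4, 5, 3, 4, 5, 4, 2, 5, 4, 4]  # hypothetical 1-5 survey responses

mean = statistics.mean(scores)      # central tendency
median = statistics.median(scores)  # middle value, robust to outliers
spread = statistics.stdev(scores)   # sample standard deviation
```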
Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.
Some common approaches to analyzing qualitative data include:
If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.
Research bias
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.
The research methods you use depend on the type of data you need to answer your research question.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis , but they all share five steps in common:
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.
If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.
Streefkerk, R. (2023, June 22). Qualitative vs. Quantitative Research | Differences, Examples & Methods. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/methodology/qualitative-quantitative-research/
500+ Quantitative Research Titles and Topics
Quantitative research involves collecting and analyzing numerical data to identify patterns, trends, and relationships among variables. This method is widely used in social sciences, psychology, economics, and other fields where researchers aim to understand human behavior and phenomena through statistical analysis. If you are looking for a quantitative research topic, there are numerous areas to explore, from analyzing data on a specific population to studying the effects of a particular intervention or treatment. In this post, we will provide some ideas for quantitative research topics that may inspire you and help you narrow down your interests.
Quantitative Research Titles are as follows:
Quantitative Research Topics are as follows:
1 School of Social Sciences, Bangor University, Wales, UK
2 School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
3 School of Social Sciences, Cardiff University, Wales, UK
4 Department of Health Sciences, The University of York, York, UK
5 Department of Reproductive Health and Research including UNDP/UNFPA/UNICEF/WHO/World Bank Special Programme of Research, Development and Research Training in Human Reproduction (HRP), World Health Organization, Geneva, Switzerland
6 Department of Health Education and Promotion, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
Supplementary files:
bmjgh-2018-000893supp001.pdf
bmjgh-2018-000893supp002.pdf
bmjgh-2018-000893supp003.pdf
bmjgh-2018-000893supp004.pdf
bmjgh-2018-000893supp005.pdf
Guideline developers are increasingly dealing with more difficult decisions concerning whether to recommend complex interventions in complex and highly variable health systems. There is greater recognition that both quantitative and qualitative evidence can be combined in a mixed-method synthesis and that this can be helpful in understanding how complexity impacts on interventions in specific contexts. This paper aims to clarify the different purposes, review designs, questions, synthesis methods and opportunities to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of guidelines developed by WHO, which incorporated quantitative and qualitative evidence, are used to illustrate possible uses of mixed-method reviews and evidence. Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Consideration is given to the opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence. Recommendations are made concerning the future development of methods to better address questions in systematic reviews and guidelines that adopt a complexity perspective.
Recognition has grown that while quantitative methods remain vital, they are usually insufficient to address complex health systems related research questions. 1 Quantitative methods rely on an ability to anticipate what must be measured in advance. Introducing change into a complex health system gives rise to emergent reactions, which cannot be fully predicted in advance. Emergent reactions can often only be understood through combining quantitative methods with a more flexible qualitative lens. 2 Adopting a more pluralist position offers the researcher a diverse range of research options, depending on the research question being investigated. 3–5 As a consequence, where a research study sits within the multitude of methods available is driven by the question being asked, rather than any particular methodological or philosophical stance. 6
Publication of guidance on designing complex intervention process evaluations and other works advocating mixed-methods approaches to intervention research have stimulated better quality evidence for synthesis. 1 7–13 Methods for synthesising qualitative 14 and mixed-method evidence have been developed or are in development. Mixed-method research and review definitions are outlined in box 1 .
Pluye and Hong 52 define mixed-methods research as “a research approach in which a researcher integrates (a) qualitative and quantitative research questions, (b) qualitative research methods* and quantitative research designs, (c) techniques for collecting and analyzing qualitative and quantitative evidence, and (d) qualitative findings and quantitative results”. A mixed-method synthesis can integrate quantitative, qualitative and mixed-method evidence or data from primary studies.† Mixed-method primary studies are usually disaggregated into quantitative and qualitative evidence and data for the purposes of synthesis. Thomas and Harden further define three ways in which reviews are mixed. 53
*A qualitative study is one that uses qualitative methods of data collection and analysis to produce a narrative understanding of the phenomena of interest. Qualitative methods of data collection may include, for example, interviews, focus groups, observations and analysis of documents.
†The Cochrane Qualitative and Implementation Methods group coined the term ‘qualitative evidence synthesis’ to mean that the synthesis could also include qualitative data. For example, qualitative data from case studies, grey literature reports and open-ended questions from surveys. ‘Evidence’ and ‘data’ are used interchangeably in this paper.
This paper is one of a series, commissioned by WHO, that aims to explore the implications of complexity for systematic reviews and guideline development. This paper is concerned with the methodological implications of including quantitative and qualitative evidence in mixed-method systematic reviews and guideline development for complex interventions. The guidance was developed through a process of bringing together experts in the field, literature searching and consensus building with end users (guideline developers, clinicians and reviewers). We clarify the different purposes, review designs, questions and synthesis methods that may be applicable to combine quantitative and qualitative evidence to explore the complexity of complex interventions and health systems. Three case studies of WHO guidelines that incorporated quantitative and qualitative evidence are used to illustrate possible uses of mixed-method reviews and mechanisms of integration (table 1, online supplementary files 1–3). Additional examples of methods that can be used or may have potential for use in a guideline process are outlined. Opportunities for potential integration of quantitative and qualitative evidence at different stages of the review and guideline process are presented. Specific considerations when using an evidence to decision framework such as the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence (DECIDE) framework 15 or the new WHO-INTEGRATE evidence to decision framework 16 at the review design and evidence to decision stage are outlined. See online supplementary file 4 for an example of a health systems DECIDE framework and Rehfuess et al 16 for the new WHO-INTEGRATE framework. Encouragement is given to guideline commissioners and developers and review authors to consider including quantitative and qualitative evidence in guidelines of complex interventions that take a complexity perspective and health systems focus.
Table 1: Designs and methods and their use or applicability in guidelines and systematic reviews taking a complexity perspective
| Case study examples and references | Complexity-related questions of interest in the guideline | Types of synthesis used in the guideline | Mixed-method review design and integration mechanisms | Observations, concerns and considerations |
|---|---|---|---|---|
| **A. Mixed-method review designs used in WHO guideline development** | | | | |
| Antenatal Care (ANC) guidelines ( ) | What do women in high-income, medium-income and low-income countries want and expect from antenatal care (ANC), based on their own accounts of their beliefs, views, expectations and experiences of pregnancy? | Qualitative synthesis; framework synthesis; meta-ethnography | Quantitative and qualitative reviews undertaken separately (segregated); an initial scoping review of qualitative evidence established women’s preferences and outcomes for ANC, which informed design of the quantitative intervention review (contingent). A second qualitative evidence synthesis was undertaken to look at implementation factors (sequential). Integration: quantitative and qualitative findings were brought together in a series of DECIDE frameworks. Tools included: psychological theory; SURE framework; conceptual framework for implementing policy options; conceptual framework for analysing integration of targeted health interventions into health systems to analyse contextual health system factors | An innovative approach to guideline development. No formal cross-study synthesis process and limited testing of theory. The hypothetical nature of meta-ethnography findings may be challenging for guideline panel members to process without additional training. See Flemming for considerations when selecting meta-ethnography |
| | What are the evidence-based practices during ANC that improved outcomes and lead to a positive pregnancy experience, and how should these practices be delivered? | Quantitative review of trials | | |
| | Factors that influence the uptake of routine antenatal services by pregnant women; views and experiences of maternity care providers | Qualitative synthesis; framework synthesis; meta-ethnography | | |
| Task shifting guidelines ( ) | What are the effects of lay health worker interventions in primary and community healthcare on maternal and child health and the management of infectious diseases? | Quantitative review of trials | Several published quantitative reviews were used (eg, Cochrane review of lay health worker interventions). Additional new qualitative evidence syntheses were commissioned (segregated). Integration: quantitative and qualitative review findings on lay health workers were brought together in several DECIDE frameworks. Tools included adapted SURE framework and post hoc logic model | An innovative approach to guideline development. The post hoc logic model was developed after the guideline was completed |
| | What factors affect the implementation of lay health worker programmes for maternal and child health? | Qualitative evidence synthesis; framework synthesis | | |
| Risk communication guideline ( ) | | Quantitative review of quantitative evidence (descriptive); qualitative using framework synthesis | A knowledge map of studies was produced to identify the method, topic and geographical spread of evidence. Reviews first organised and synthesised evidence by method-specific streams and reported method-specific findings. Then similar findings across method-specific streams were grouped and further developed using all the relevant evidence. Integration: where possible, quantitative and qualitative evidence for the same intervention and question was mapped against core DECIDE domains. Tools included a framework using a public health emergency model and disaster phases. Very few trials were identified; quantitative and qualitative evidence was used to construct a high-level view of what appeared to work and what happened when similar broad groups of interventions or strategies were implemented in different contexts | Example of a fully integrated mixed-method synthesis. Without evidence of effect, it was highly challenging to populate a DECIDE framework |
| **B. Mixed-method review designs that can be used in guideline development** | | | | |
| Factors influencing children’s optimal fruit and vegetable consumption | Potential to explore theoretical, intervention and implementation complexity issues. New question(s) of interest are developed and tested in a cross-study synthesis | Mixed-methods synthesis; each review typically has three syntheses: statistical meta-analysis, qualitative thematic synthesis and cross-study synthesis | Aim is to generate and test theory from a diverse body of literature. Integration: used integrative matrix based on programme theory | Can be used in a guideline process as it fits with the current model of conducting method-specific reviews separately, then bringing the review products together |
| **C. Mixed-method review designs with the potential for use in guideline development** | | | | |
| Interventions to promote smoke alarm ownership and function | Intervention effect and/or intervention implementation related questions within a system | Narrative synthesis (specifically Popay’s methodology) | Four-stage approach to integrate quantitative (trials) with qualitative evidence. Integration: initial theory and logic model used to integrate evidence of effect with qualitative case summaries. Tools used included tabulation; groupings and clusters; transforming data (constructing a common rubric); vote-counting as a descriptive tool; moderator variables and subgroup analyses; idea webbing/conceptual mapping; creating qualitative case descriptions; visual representation of relationship between study characteristics and results | Few published examples, with the exception of Rodgers, who reinterpreted a Cochrane review on the same topic with narrative synthesis methodology. Methodology is complex; most subsequent examples have only partially operationalised the methodology. An intervention effect review will still be required to feed into the guideline process |
| Factors affecting childhood immunisation | What factors explain complexity and causal pathways? | Bayesian synthesis of qualitative and quantitative evidence | Aim is theory-testing by fusing findings from qualitative and quantitative research. Produces a set of weighted factors associated with/predicting the phenomenon under review | Not yet used in a guideline context. Complex methodology, undergoing development and testing for a health context. The end product may not easily ‘fit’ into an evidence to decision framework, and an effect review will still be required |
| Providing effective and preferred care closer to home: a realist review of intermediate care | Developing and testing theories of change underpinning complex policy interventions. What works for whom in what contexts and how? | Realist synthesis (NB other theory-informed synthesis methods follow similar processes) | Development of a theory from the literature; analysis of quantitative and qualitative evidence against the theory leads to development of context, mechanism and outcome chains that explain how outcomes come about. Integration: programme theory and assembling mixed-method evidence to create context, mechanism and outcome (CMO) configurations | May be useful where there are few trials. The hypothetical nature of findings may be challenging for guideline panel members to process without additional training. The end product may not easily ‘fit’ into an evidence to decision framework, and an effect review will still be required |
| Use of morphine to treat cancer-related pain | Any aspect of complexity could potentially be explored. How does the context of morphine use affect the established effectiveness of morphine? | Critical interpretive synthesis | Aims to generate theory from a large and diverse body of literature. Segregated sequential design. Integration: integrative grid | There are few examples and the methodology is complex. The hypothetical nature of findings may be challenging for guideline panel members to process without additional training. The end product would need to be designed to feed into an evidence to decision framework, and an intervention effect review will still be required |
| Food sovereignty, food security and health equity | Examples have examined health system complexity: to understand the state of knowledge on relationships between health equity (ie, health inequalities that are socially produced) and food systems, where the concepts of ‘food security’ and ‘food sovereignty’ are prominent. Focused on eight pathways to health (in)equity through the food system: (1) multi-scalar environmental, social context; (2) occupational exposures; (3) environmental change; (4) traditional livelihoods, cultural continuity; (5) intake of contaminants; (6) nutrition; (7) social determinants of health; (8) political, economic and regulatory context | Meta-narrative | Aim is to review research on diffusion of innovation to inform healthcare policy. Questions asked: which research (or epistemic) traditions have considered this broad topic area? How has each tradition conceptualised the topic (for example, including assumptions about the nature of reality, preferred study designs and ways of knowing)? What theoretical approaches and methods did they use? What are the main empirical findings? What insights can be drawn by combining and comparing findings from different traditions? Integration: analysis leads to production of a set of meta-narratives (‘storylines of research’) | Not yet used in a guideline context. The originators are calling for meta-narrative reviews to be used in a guideline process. Potential to provide a contextual overview within which to interpret other types of reviews in a guideline process. The meta-narrative review findings may require tailoring to ‘fit’ into an evidence to decision framework, and an intervention effect review will still be required. Few published examples and the methodology is complex |
Taking a complexity perspective.
The first paper in this series 17 outlines aspects of complexity associated with complex interventions and health systems that can potentially be explored by different types of evidence, including synthesis of quantitative and qualitative evidence. Petticrew et al 17 distinguish between a complex interventions perspective and a complex systems perspective. A complex interventions perspective defines interventions as having “implicit conceptual boundaries, representing a flexible, but common set of practices, often linked by an explicit or implicit theory about how they work”. A complex systems perspective differs in that “complexity arises from the relationships and interactions between a system’s agents (eg, people, or groups that interact with each other and their environment), and its context. A system perspective conceives the intervention as being part of the system, and emphasises changes and interconnections within the system itself”. Aspects of complexity associated with implementation of complex interventions in health systems that could potentially be addressed with a synthesis of quantitative and qualitative evidence are summarised in table 2. Another paper in the series outlines criteria used in a new evidence to decision framework for making decisions about complex interventions implemented in complex systems, against which the need for quantitative and qualitative evidence can be mapped. 16 A further paper 18 that explores how context is dealt with in guidelines and reviews taking a complexity perspective also recommends using both quantitative and qualitative evidence to better understand context as a source of complexity. Mixed-method syntheses of quantitative and qualitative evidence can also help with understanding of whether there has been theory failure and/or implementation failure.
The Cochrane Qualitative and Implementation Methods Group provide additional guidance on exploring implementation and theory failure that can be adapted to address aspects of complexity of complex interventions when implemented in health systems. 19
Health-system complexity-related questions that a synthesis of quantitative and qualitative evidence could address (derived from Petticrew et al 17)
Aspect of complexity of interest | Examples of potential research question(s) that a synthesis of qualitative and quantitative evidence could address | Types of studies or data that could contribute to a review of qualitative and quantitative evidence |
What ‘is’ the system? How can it be described? | What are the main influences on the health problem? How are they created and maintained? How do these influences interconnect? Where might one intervene in the system? | Quantitative: previous systematic reviews of the causes of the problem; epidemiological studies (eg, cohort studies examining risk factors of obesity); network analysis studies showing the nature of social and other systems Qualitative: theoretical papers; policy documents |
Interactions of interventions with context and adaptation | | Qualitative: qualitative studies; case studies Quantitative: trials or other effectiveness studies from different contexts; multicentre trials, with stratified reporting of findings; other quantitative studies that provide evidence of moderating effects of context |
System adaptivity (how does the system change?) | (How) does the system change when the intervention is introduced? Which aspects of the system are affected? Does this potentiate or dampen its effects? | Quantitative: longitudinal data; possibly historical data; effectiveness studies providing evidence of differential effects across different contexts; system modelling (eg, agent-based modelling) Qualitative: qualitative studies; case studies |
Emergent properties | What are the effects (anticipated and unanticipated) which follow from this system change? | Quantitative: prospective quantitative evaluations; retrospective studies (eg, case–control studies, surveys) may also help identify less common effects; dose–response evaluations of impacts at aggregate level in individual studies or across studies included with systematic reviews (see suggested examples) Qualitative: qualitative studies |
Positive (reinforcing) and negative (balancing) feedback loops | What explains change in the effectiveness of the intervention over time? Are the effects of an intervention damped/suppressed by other aspects of the system (eg, contextual influences)? | Quantitative: studies of moderators of effectiveness; long-term longitudinal studies Qualitative: studies of factors that enable or inhibit implementation of interventions |
Multiple (health and non-health) outcomes | What changes in processes and outcomes follow the introduction of this system change? At what levels in the system are they experienced? | Quantitative: studies tracking change in the system over time Qualitative: studies exploring effects of the change in individuals, families, communities (including equity considerations and factors that affect engagement and participation in change) |
It may not be apparent which aspects of complexity or which elements of the complex intervention or health system can be explored in a guideline process, or whether combining qualitative and quantitative evidence in a mixed-method synthesis will be useful, until the available evidence is scoped and mapped. 17 20 A more extensive lead-in phase is typically required to scope the available evidence, engage with stakeholders and refine the review parameters and questions, which can then be mapped against potential review designs and methods of synthesis. 20 At the scoping stage, it is also common to decide on a theoretical perspective 21 or undertake further work to refine a theoretical perspective. 22 This is also the stage to begin articulating the programme theory of the complex intervention, which may be further developed to refine an understanding of complexity and show how the intervention is implemented in and impacts on the wider health system. 17 23 24 In practice, this process can be lengthy, iterative and fluid, with multiple revisions to the review scope, often developing and adapting a logic model 17 as the available evidence becomes known and the potential to incorporate different types of review designs and syntheses of quantitative and qualitative evidence becomes better understood. 25 Further questions, propositions or hypotheses may emerge as the reviews progress, and the protocols therefore generally need to be developed iteratively over time rather than a priori.
Following a scoping exercise and definition of key questions, the next step in the guideline development process is to identify existing systematic reviews, or commission new ones, to locate and summarise the best available evidence in relation to each question. For example, case study 2, ‘Optimising health worker roles for maternal and newborn health through task shifting’, included quantitative reviews that did and did not take an additional complexity perspective, and qualitative evidence syntheses that were able to explain how specific elements of complexity impacted on intervention outcomes within the wider health system. Further understanding of health system complexity was facilitated through additional country-level case studies that contributed to an overall understanding of what worked and what happened when lay health worker interventions were implemented. See table 1 and online supplementary file 2.
There are a few existing examples, which we draw on in this paper, but integrating quantitative and qualitative evidence in a mixed-method synthesis is relatively uncommon in a guideline process. Box 2 includes a set of key questions that guideline developers and review authors contemplating combining quantitative and qualitative evidence in mixed-methods design might ask. Subsequent sections provide more information and signposting to further reading to help address these key questions.
Compound questions requiring both quantitative and qualitative evidence?
Questions requiring mixed-methods studies?
Separate quantitative and qualitative questions?

Separate quantitative and qualitative research studies?
Related quantitative and qualitative research studies?
Mixed-methods studies?
Quantitative unpublished data and/or qualitative unpublished data, eg, narrative survey data?

Throughout the review?
Following separate reviews?
At the question point?
At the synthesis point?
At the evidence to recommendations stage?
Or a combination?

Narrative synthesis or summary?
Quantitising approach, eg, frequency analysis?
Qualitising approach, eg, thematic synthesis?
Tabulation?
Logic model?
Conceptual model/framework?
Graphical approach?
Petticrew et al 17 define the different aspects of complexity and examples of complexity-related questions that can potentially be explored in guidelines and systematic reviews taking a complexity perspective. Relevant aspects of complexity outlined by Petticrew et al 17 are summarised in table 2 below, together with the corresponding questions that could be addressed in a synthesis combining qualitative and quantitative evidence. Importantly, the aspects of complexity and their associated concepts of interest have yet to be translated fully into primary health research or systematic reviews. There are few known examples where selected complexity concepts have been used to analyse or reanalyse a primary intervention study. Most notable is Chandler et al, 26 who specifically set out to identify and translate a set of relevant complexity theory concepts for application in health systems research. Chandler et al then reanalysed a trial process evaluation using selected complexity theory concepts to better understand the complex causal pathway in the health system that explains some aspects of complexity in table 2.
Rehfuess et al 16 also recommend upfront consideration of the WHO-INTEGRATE evidence to decision criteria when planning a guideline and formulating questions. The criteria reflect WHO norms and values and take account of a complexity perspective. The framework can be used by guideline development groups as a menu from which to decide which criteria to prioritise, and which study types and synthesis methods can be used to collect evidence for each criterion. Many of the criteria and their related questions can be addressed using a synthesis of quantitative and qualitative evidence: the balance of benefits and harms, human rights and sociocultural acceptability, health equity, societal implications and feasibility (see table 3). Similar aspects in the DECIDE framework 15 could also be addressed using a synthesis of qualitative and quantitative evidence.
WHO-INTEGRATE evidence to decision framework criteria, example questions and types of studies that could potentially address these questions (derived from Rehfuess et al 16)
Domains of the WHO-INTEGRATE EtD framework | Examples of potential research question(s) that a synthesis of qualitative and/or quantitative evidence could address | Types of studies that could contribute to a review of qualitative and quantitative evidence |
Balance of benefits and harms | To what extent do patients/beneficiaries value different health outcomes? | Qualitative: studies of views and experiences Quantitative: questionnaire surveys |
Human rights and sociocultural acceptability | Is the intervention acceptable to patients/beneficiaries as well as to those implementing it? To what extent do patients/beneficiaries value different non-health outcomes? How does the intervention affect an individual’s, population group’s or organisation’s autonomy, that is, their ability to make a competent, informed and voluntary decision? | Qualitative: discourse analysis, qualitative studies (ideally longitudinal to examine changes over time) Quantitative: pro et contra analysis, discrete choice experiments, longitudinal quantitative studies (to examine changes over time), cross-sectional studies Mixed-method studies; case studies |
Health equity, equality and non-discrimination | How affordable is the intervention for individuals, households or communities? How accessible—in terms of physical as well as informational access—is the intervention across different population groups? | Qualitative: studies of views and experiences Quantitative: cross-sectional or longitudinal observational studies, discrete choice experiments, health expenditure studies; health system barrier studies, cross-sectional or longitudinal observational studies, discrete choice experiments, ethical analysis, GIS-based studies |
Societal implications | What is the social impact of the intervention: are there features of the intervention that increase or reduce stigma and that lead to social consequences? Does the intervention enhance or limit social goals, such as education, social cohesion and the attainment of various human rights beyond health? Does it change social norms at individual or population level? What is the environmental impact of the intervention? Does it contribute to or limit the achievement of goals to protect the environment and efforts to mitigate or adapt to climate change? | Qualitative: studies of views and experiences Quantitative: RCTs, quasi-experimental studies, comparative observational studies, longitudinal implementation studies, case studies, power analyses, environmental impact assessments, modelling studies |
Feasibility and health system considerations | Are there any legal or regulatory considerations that impact on implementation of the intervention? How might governance aspects, such as past decisions and strategic considerations, positively or negatively impact the implementation of the intervention? How does the intervention interact with the existing health system? Is it likely to fit well or not, and is it likely to impact on it in positive or negative ways? How does the intervention interact with the need for and usage of the existing health workforce, at national and subnational levels? How does the intervention interact with the need for and usage of the existing health infrastructure as well as other relevant infrastructure, at national and subnational levels? | Non-research: policy and regulatory frameworks Qualitative: studies of views and experiences Mixed-method: health systems research, situation analysis, case studies Quantitative: cross-sectional studies |
GIS, Geographical Information System; RCT, randomised controlled trial.
Questions can serve as an ‘anchor’ by articulating the specific aspects of complexity to be explored (eg, Is successful implementation of the intervention context dependent?). 27 Anchor questions such as “How does intervention x impact on socioeconomic inequalities in health behaviour/outcome x?” are the kind of health system question that requires a synthesis of both quantitative and qualitative evidence and hence a mixed-method synthesis. Quantitative evidence can quantify the difference in effect, but does not answer the question of how. The ‘how’ question can be partly answered with quantitative and qualitative evidence. For example, quantitative evidence may reveal where socioeconomic status and inequality emerge in the health system (an emergent property) by exploring questions such as “Does patterning emerge during uptake because fewer people from certain groups come into contact with an intervention in the first place?” or “Are people from certain backgrounds more likely to drop out, or to maintain effects beyond an intervention differently?” Qualitative evidence may help understand the reasons behind all of these mechanisms. Alternatively, questions can act as ‘compasses’, where a question sets out a starting point from which to explore further and to potentially ask further questions or develop propositions or hypotheses to explore through a complexity perspective (eg, What factors enhance or hinder implementation?). 27 Other papers in this series provide further guidance on developing questions for qualitative evidence syntheses and on question formulation. 14 28
For anchor and compass questions, additional application of a theory (eg, complexity theory) can help focus evidence synthesis and presentation to explore and explain complexity issues. 17 21 Development of review-specific logic model(s) can help to further refine an initial understanding of any complexity-related issues of interest associated with a specific intervention and, if appropriate, the health system or section of the health system within which to contextualise the review question and analyse data. 17 23–25 Specific tools are available to help clarify context and complex interventions. 17 18
If guideline developers deem a complexity perspective, and certain criteria within evidence to decision frameworks, relevant and desirable, pursuing that perspective is only possible if suitable evidence is available. Careful scoping using knowledge maps or scoping reviews will help inform development of questions that are answerable with available evidence. 20 If evidence of effect is not available, then a different approach to developing questions will be required, leading to a more general narrative understanding of what happened when complex interventions were implemented in a health system (such as in case study 3—risk communication guideline). This does not mean that the original questions, for which no evidence was found when scoping the literature, were unimportant: an important function of creating a knowledge map is to identify gaps to inform a future research agenda.
Table 2 and online supplementary files 1–3 outline examples of questions in the three case studies, which were all ‘compass’ questions for the qualitative evidence syntheses.
The shift towards integration of qualitative and quantitative evidence in primary research has, in recent years, begun to be mirrored within research synthesis. 29–31 The natural extension to undertaking quantitative or qualitative reviews has been the development of methods for integrating qualitative and quantitative evidence within reviews, and within the guideline process using evidence to decision-frameworks. Advocating the integration of quantitative and qualitative evidence assumes a complementarity between research methodologies, and a need for both types of evidence to inform policy and practice. Below, we briefly outline the current designs for integrating qualitative and quantitative evidence within a mixed-method review or synthesis.
One of the early approaches to integrating qualitative and quantitative evidence, detailed by Sandelowski et al, 32 advocated three basic review designs: segregated, integrated and contingent designs, which have been further developed by Heyvaert et al 33 (box 3).
Segregated design.
A conventional separation between quantitative and qualitative approaches, based on the assumption that they are different entities, can be distinguished from each other and should be treated separately; their findings therefore warrant separate analyses and syntheses. Ultimately, the separate synthesis results can themselves be synthesised.
Integrated design.

The methodological differences between qualitative and quantitative studies are minimised, as both are viewed as producing findings that can be readily synthesised into one another because they address the same research purposes and questions. Transformation involves either turning qualitative data into quantitative (quantitising) or turning quantitative findings into qualitative (qualitising) to facilitate their integration.
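To make the quantitising step concrete, the following is a minimal sketch of one common form of quantitising: counting how many included studies report each qualitative theme, producing frequencies that can be tabulated alongside quantitative results. The study labels, theme names and the `quantitise` helper are hypothetical illustrations, not part of any established tool.

```python
# Minimal sketch of 'quantitising': converting qualitative themes extracted
# from included studies into frequency counts. All study IDs and themes below
# are hypothetical examples.
from collections import Counter

# Hypothetical extraction table: each included study mapped to the themes
# coded in it during the qualitative synthesis.
study_themes = {
    "Study A": ["access barriers", "trust in providers"],
    "Study B": ["trust in providers", "cost"],
    "Study C": ["access barriers", "cost", "trust in providers"],
}

def quantitise(themes_by_study):
    """Count how many studies report each theme (one count per study)."""
    counts = Counter()
    for themes in themes_by_study.values():
        counts.update(set(themes))  # deduplicate repeated codes within a study
    return dict(counts)

frequencies = quantitise(study_themes)
for theme, n in sorted(frequencies.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: reported in {n}/{len(study_themes)} studies")
```

A frequency table like this is only one possible transformation; the reverse direction (qualitising, eg, narratively interpreting effect estimates) has no single mechanical recipe and depends on the synthesis method chosen.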
Contingent design.

Takes a cyclical approach to synthesis, with the findings from one synthesis informing the focus of the next synthesis, until all the research objectives have been addressed. Studies are not necessarily grouped and categorised as qualitative or quantitative.
A recent review of more than 400 systematic reviews 34 combining quantitative and qualitative evidence identified two main synthesis designs—convergent and sequential. In a convergent design, qualitative and quantitative evidence is collated and analysed in a parallel or complementary manner, whereas in a sequential synthesis, the collation and analysis of quantitative and qualitative evidence takes place in a sequence, with one synthesis informing the other (box 4). 6 These designs can be seen to build on the work of Sandelowski et al, 32 35 particularly in relation to the transformation of data from qualitative to quantitative (and vice versa) and the sequential synthesis design, with a cyclical approach to reviewing that evokes Sandelowski’s contingent design.
Convergent synthesis design.
Qualitative and quantitative research is collected and analysed at the same time in a parallel or complementary manner. Integration can occur at three points:
a. Data-based convergent synthesis design
All included studies are analysed using the same methods and the results presented together. As only one synthesis method is used, data transformation occurs (qualitising or quantitising). Usually addresses one review question.
b. Results-based convergent synthesis design
Qualitative and quantitative data are analysed and presented separately but integrated using a further synthesis method, eg, narratively, using tables or matrices, or by reanalysing evidence. The results of both syntheses are combined in a third synthesis. Usually addresses an overall review question with subquestions.
c. Parallel-results convergent synthesis design
Qualitative and quantitative data are analysed and presented separately, with integration occurring in the interpretation of results in the discussion section. Usually addresses two or more complementary review questions.
Sequential synthesis design.

A two-phase approach: data collection and analysis of one type of evidence (eg, qualitative) occurs after, and is informed by, the collection and analysis of the other type (eg, quantitative). Usually addresses an overall question with subquestions, with both syntheses complementing each other.
The three case studies (table 1, online supplementary files 1–3) illustrate the diverse combination of review designs and synthesis methods that were considered the most appropriate for specific guidelines.
In this section, we draw on examples where specific review designs and methods have been or can be used to explore selected aspects of complexity in guidelines or systematic reviews. We also identify other review methods that could potentially be used to explore aspects of complexity. Of particular note, we could not find any specific examples of systematic methods to synthesise highly diverse research designs as advocated by Petticrew et al 17 and summarised in tables 2 and 3 . For example, we could not find examples of methods to synthesise qualitative studies, case studies, quantitative longitudinal data, possibly historical data, effectiveness studies providing evidence of differential effects across different contexts, and system modelling studies (eg, agent-based modelling) to explore system adaptivity.
There are different ways that quantitative and qualitative evidence can be integrated into a review and then into a guideline development process. In practice, some methods enable integration of different types of evidence in a single synthesis, while in other methods, the single systematic review may include a series of stand-alone reviews or syntheses that are then combined in a cross-study synthesis. Table 1 provides an overview of the characteristics of different review designs and methods and guidance on their applicability for a guideline process. Designs and methods that have already been used in WHO guideline development are described in part A of the table. Part B outlines a design and method that can be used in a guideline process, and part C covers those that have the potential to integrate quantitative, qualitative and mixed-method evidence in a single review design (such as meta-narrative reviews and Bayesian syntheses), but their application in a guideline context has yet to be demonstrated.
Depending on the review design (see boxes 3 and 4 ), integration can potentially take place at a review team and design level, and more commonly at several key points of the review or guideline process. The following sections outline potential points of integration and associated practical considerations when integrating quantitative and qualitative evidence in guideline development.
In a guideline process, it is common for syntheses of quantitative and qualitative evidence to be done separately by different teams and for the evidence then to be integrated. A practical consideration relates to the organisation, composition and expertise of the review teams and ways of working. If the quantitative and qualitative reviews are being conducted separately and then brought together by the same team members, who are equally comfortable operating within both paradigms, then a consistent approach across both paradigms becomes possible. If, however, a team is split between the quantitative and qualitative reviews, then the strengths of specialisation can be harnessed, for example, in quality assessment or synthesis. Optimally, at least one, if not more, of the team members should be involved in both the quantitative and qualitative reviews to offer the possibility of making connections throughout the review and not simply at pre-agreed junctures. This mirrors O’Cathain’s conclusion that mixed-methods primary research tends to work only when there is a principal investigator who values and is able to oversee integration. 9 10 While the above decisions have been articulated in the context of two types of evidence, quantitative and qualitative, they equally apply when considering how to handle studies reporting a mixed-method study design, where data are usually disaggregated into quantitative and qualitative for the purposes of synthesis (see case study 3—risk communication in humanitarian disasters).
Clearly specified key question(s), derived from a scoping or consultation exercise, will make it clear whether quantitative and qualitative evidence is required in a guideline development process and which aspects will be addressed by which types of evidence. For the remaining stages of the process, as documented below, a review team faces choices about whether to handle each type of evidence separately, whether sequentially or in parallel, with a view to joining the two products on completion, or to attempt integration throughout the review process. In each case, the underlying trade-off is between efficiency and potential comparability vs sensitivity to the underlying paradigm.
Once key questions are clearly defined, the guideline development group typically needs to consider whether to conduct a single sensitive search to address all potential subtopics (lumping) or whether to conduct specific searches for each subtopic (splitting). 36 A related consideration is whether to search separately for qualitative, quantitative and mixed-method evidence ‘streams’ or whether to conduct a single search and then identify specific study types at the subsequent sifting stage. These two considerations often mean a trade-off between a single search process involving very large numbers of records or a more protracted search process retrieving smaller numbers of records. Both approaches have advantages and choice may depend on the respective availability of resources for searching and sifting.
Closely related to decisions around searching are considerations relating to screening and selecting studies for inclusion in a systematic review. An important consideration here is whether the review team will screen records for all review types, regardless of their subsequent involvement (‘altruistic sifting’), or specialise in screening for the study type with which they are most familiar. The risk of missing relevant reports might be minimised by whole-team screening for empirical reports in the first instance and then coding them as quantitative, qualitative or mixed-methods reports at a subsequent stage.
Within a guideline process, review teams may be more limited in their choice of instruments to assess methodological limitations of primary studies, as there are mandatory requirements to use the Cochrane risk of bias tool 37 to feed into Grading of Recommendations Assessment, Development and Evaluation (GRADE) 38 or to select from a small pool of qualitative appraisal instruments in order to apply the Confidence in the Evidence from Reviews of Qualitative Research (GRADE-CERQual) approach 39 to assess the overall certainty or confidence in findings. The Cochrane Qualitative and Implementation Methods Group has recently issued guidance on the selection of appraisal instruments and core assessment criteria. 40 The Mixed-Methods Appraisal Tool, which is currently undergoing further development, offers a single quality assessment instrument for quantitative, qualitative and mixed-methods studies. 41 Other options include using corresponding instruments from within the same ‘stable’, for example, using different Critical Appraisal Skills Programme instruments. 42 While using instruments developed by the same team or organisation may achieve a degree of epistemological consonance, benefits may come more from consistency of approach and reporting than from a shared view of quality. Alternatively, a more paradigm-sensitive approach would involve selecting the best instrument for each respective review while deferring challenges from later heterogeneity of reporting.
The way in which data and evidence are extracted from primary research studies for review will be influenced by the type of integrated synthesis being undertaken and the review purpose. Initially, decisions need to be made regarding the nature and type of data and evidence that are to be extracted from the included studies. Method-specific reporting guidelines 43 44 provide a good template as to what quantitative and qualitative data it is potentially possible to extract from different types of method-specific study reports, although in practice reporting quality varies. Online supplementary file 5 provides a hypothetical example of the different types of studies from which quantitative and qualitative evidence could potentially be extracted for synthesis.
The decisions around what data or evidence to extract will be guided by how ‘integrated’ the mixed-method review will be. For those reviews where the quantitative and qualitative findings of studies are synthesised separately and integrated at the point of findings (eg, segregated or contingent approaches or sequential synthesis design), separate data extraction approaches will likely be used.
Where integration occurs during the process of the review (eg, integrated approach or convergent synthesis design), an integrated approach to data extraction may be considered, depending on the purpose of the review. This may involve the use of a data extraction framework, the choice of which needs to be congruent with the approach to synthesis chosen for the review. 40 45 The integrative or theoretical framework may be decided on a priori if a pre-developed theoretical or conceptual framework is available in the literature. 27 The development of a framework may alternatively arise from the reading of the included studies, in relation to the purpose of the review, early in the process. The Cochrane Qualitative and Implementation Methods Group provide further guidance on extraction of qualitative data, including use of software. 40
Relatively few synthesis methods start off being integrated from the beginning, and these methods have generally been subject to less testing and evaluation particularly in a guideline context (see table 1 ). A review design that started off being integrated from the beginning may be suitable for some guideline contexts (such as in case study 3—risk communication in humanitarian disasters—where there was little evidence of effect), but in general if there are sufficient trials then a separate systematic review and meta-analysis will be required for a guideline. Other papers in this series offer guidance on methods for synthesising quantitative 46 and qualitative evidence 14 in reviews that take a complexity perspective. Further guidance on integrating quantitative and qualitative evidence in a systematic review is provided by the Cochrane Qualitative and Implementation Methods Group. 19 27 29 40 47
It is highly likely (unless there are well-designed process evaluations) that the primary studies will not themselves seek to address the complexity-related questions required for a guideline process, in which case review authors will need to configure the available evidence and transform it through the synthesis process to produce explanations, propositions and hypotheses (ie, findings) that were not obvious at primary study level. It is important that guideline commissioners, developers and review authors are aware that specific methods are intended to produce a type of finding with a specific purpose (such as developing new theory in the case of meta-ethnography). 48 Case study 1 (antenatal care guideline) provides an example of how a meta-ethnography was used to develop a new theory as an end product, 48 49 as well as a framework synthesis which produced descriptive and explanatory findings that were more easily incorporated into the guideline process. 27 The definitions (box 5) may be helpful when defining the different types of findings.
Descriptive findings—qualitative evidence-driven translated descriptive themes that do not move beyond the primary studies.
Explanatory findings—may be at either a descriptive or a theoretical level. At the descriptive level, qualitative evidence is used to explain phenomena observed in quantitative results, such as why implementation failed in specific circumstances. At the theoretical level, the transformed and interpreted findings that go beyond the primary studies can be used to explain the descriptive findings. The latter description is generally the accepted definition in the wider qualitative community.
Hypothetical or theoretical findings—qualitative evidence-driven transformed themes (or lines of argument) that go beyond the primary studies. Although similar, Thomas and Harden 56 distinguish the purposes of two types of theoretical findings: analytical themes and third-order interpretations, the product of meta-ethnographies. 48
Analytical themes are a product of interrogating descriptive themes by placing the synthesis within an external theoretical framework (such as the review question and subquestions) and are considered more appropriate when a specific review question is being addressed (eg, in a guideline or to inform policy). 56
Third-order interpretations come from translating studies into one another while preserving the original context and are more appropriate when a body of literature is being explored in and of itself with broader or emergent review questions. 48
A critical element of guideline development is the formulation of recommendations by the Guideline Development Group, and EtD frameworks help to facilitate this process. 16 The EtD framework can also be used as a mechanism to integrate and display quantitative and qualitative evidence and findings mapped against the EtD framework domains, with hyperlinks to more detailed evidence summaries from the contributing reviews (see table 1). It is commonly the EtD framework that enables the findings of the separate quantitative and qualitative reviews to be brought together in a guideline process. Specific challenges in populating the DECIDE evidence-to-decision framework 15 were noted in case study 3 (risk communication in humanitarian disasters), as there was an absence of intervention effect data and the interventions to communicate public health risks were context specific and varied. These problems would not, however, have been addressed by substituting the new INTEGRATE 16 evidence-to-decision framework for the DECIDE framework. A different type of EtD framework needs to be developed for reviews that do not include sufficient evidence of intervention effect.
Mixed-method review and synthesis methods are generally the least developed of all systematic review methods. It is acknowledged that methods for combining quantitative and qualitative evidence are generally poorly articulated. 29 50 There are, however, some fairly well-established methods for using qualitative evidence to explore aspects of complexity (such as contextual, implementation and outcome complexity) that can be combined with evidence of effect (see sections A and B of table 1). 14 There are good examples of systematic reviews that use these methods to combine quantitative and qualitative evidence, and examples of guideline recommendations that were informed by evidence from both quantitative and qualitative reviews (eg, case studies 1–3). With the exception of case study 3 (risk communication), the quantitative and qualitative reviews for these guidelines were conducted separately, and the findings subsequently brought together in an EtD framework to inform recommendations.
Other mixed-method review designs have the potential to contribute to the understanding of complex interventions and to explore aspects of wider health systems complexity, but they have not been sufficiently developed and tested for this specific purpose, or used in a guideline process (section C of table 1). Some methods, such as meta-narrative reviews, also explore different questions from those usually asked in a guideline process. Methods for processing (eg, quality appraisal) and synthesising the highly diverse evidence suggested in tables 2 and 3, which is required to explore specific aspects of health systems complexity (such as system adaptivity) and to populate some sections of the INTEGRATE EtD framework, remain underdeveloped or in need of development.
In addition to the methodological development mentioned above, there is as yet no GRADE approach 38 for assessing confidence in findings developed from combined quantitative and qualitative evidence. Another paper in this series outlines how to deal with complexity when grading different types of quantitative evidence, 51 and the GRADE-CERQual approach for qualitative findings is described elsewhere, 39 but both approaches apply to method-specific rather than mixed-method findings. An unofficial adaptation of GRADE was used in the risk communication guideline, which reported mixed-method findings. Nor is there a reporting guideline for mixed-method reviews, 47 so for now reports will need to conform to the relevant reporting requirements of the respective method-specific guideline. There is also a need to further adapt and test DECIDE, 15 WHO-INTEGRATE 16 and other types of evidence-to-decision frameworks to accommodate evidence from mixed-method syntheses that do not set out to determine the statistical effects of interventions, and circumstances in which there are no trials.
When conducting quantitative and qualitative reviews that will subsequently be combined, there are specific considerations for managing and integrating the different types of evidence throughout the review process. We have summarised different options for combining qualitative and quantitative evidence in mixed-method syntheses that guideline developers and systematic reviewers can choose from, as well as outlining the opportunities to integrate evidence at different stages of the review and guideline development process.
Review commissioners, authors and guideline developers generally have less experience of combining qualitative and quantitative evidence in mixed-method reviews. In particular, there is a relatively small pool of reviewers who are skilled at undertaking fully integrated mixed-method reviews. Commissioning additional qualitative and mixed-method reviews creates additional cost, and large, complex mixed-method reviews generally take more time to complete. Careful consideration therefore needs to be given to which guidelines would benefit most from additional qualitative and mixed-method syntheses. More training is required to develop capacity, and processes need to be developed for preparing the guideline panel to consider and use mixed-method evidence in its decision-making.
This paper has presented how qualitative and quantitative evidence, combined in mixed-method reviews, can help in understanding aspects of complex interventions and the systems within which they are implemented. There are further opportunities to use these methods, and to develop them further, to look more widely at additional aspects of complexity. There is a range of review designs and synthesis methods to choose from, depending on the question being asked or the questions that may emerge during the conduct of the synthesis. Additional methods need to be developed (or existing methods further adapted) to synthesise the full range of diverse evidence needed to explore the complexity-related questions that arise when complex interventions are implemented in health systems. We encourage review commissioners, authors and guideline developers to consider using mixed-method reviews and syntheses in guidelines and to report on their usefulness in the guideline development process.
Handling editor: Soumyadeep Bhaumik
Contributors: JN, AB, GM, KF, ÖT and ES drafted the manuscript. All authors contributed to paper development and writing and agreed the final manuscript. Anayda Portela and Susan Norris from WHO managed the series. Helen Smith was series Editor. We thank all those who provided feedback on various iterations.
Funding: Funding provided by the World Health Organization Department of Maternal, Newborn, Child and Adolescent Health through grants received from the United States Agency for International Development and the Norwegian Agency for Development Cooperation.
Disclaimer: ÖT is a staff member of WHO. The author alone is responsible for the views expressed in this publication and they do not necessarily represent the decisions or policies of WHO.
Competing interests: No financial interests declared. JN, AB and ÖT have an intellectual interest in GRADE CERQual; and JN has an intellectual interest in the iCAT_SR tool.
Patient consent: Not required.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data sharing statement: No additional data are available.
Supplemental material: This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Speaker 1: Welcome to this overview of quantitative research methods. This tutorial will give you the big picture of quantitative research and introduce key concepts that will help you determine if quantitative methods are appropriate for your project study. First, what is educational research? Educational research is a process of scholarly inquiry designed to investigate the process of instruction and learning, the behaviors, perceptions, and attributes of students and teachers, the impact of institutional processes and policies, and all other areas of the educational process. The research design may be quantitative, qualitative, or a mixed methods design. The focus of this overview is quantitative methods. The general purpose of quantitative research is to explain, predict, investigate relationships, describe current conditions, or to examine possible impacts or influences on designated outcomes. Quantitative research differs from qualitative research in several ways. It works to achieve different goals and uses different methods and design. This table illustrates some of the key differences. Qualitative research generally uses a small sample to explore and describe experiences through the use of thick, rich descriptions of detailed data in an attempt to understand and interpret human perspectives. It is less interested in generalizing to the population as a whole. For example, when studying bullying, a qualitative researcher might learn about the experience of the victims and the experience of the bully by interviewing both bullies and victims and observing them on the playground. Quantitative studies generally use large samples to test numerical data by comparing or finding correlations among sample attributes so that the findings can be generalized to the population. 
If quantitative researchers were studying bullying, they might measure the effects of a bully on the victim by comparing students who are victims and students who are not victims of bullying using an attitudinal survey. In conducting quantitative research, the researcher first identifies the problem. For Ed.D. research, this problem represents a gap in practice. For Ph.D. research, this problem represents a gap in the literature. In either case, the problem needs to be of importance in the professional field. Next, the researcher establishes the purpose of the study. Why do you want to do the study, and what do you intend to accomplish? This is followed by research questions, which help to focus the study. Once the study is focused, the researcher needs to review both seminal works and current peer-reviewed primary sources. Based on the research question and on a review of prior research, a hypothesis is created that predicts the relationship between the study's variables. Next, the researcher chooses a study design and methods to test the hypothesis. These choices should be informed by a review of methodological approaches used to address similar questions in prior research. Finally, appropriate analytical methods are used to analyze the data, allowing the researcher to draw conclusions and inferences about the data, and answer the research question that was originally posed. In quantitative research, research questions are typically descriptive, relational, or causal. Descriptive questions constrain the researcher to describing what currently exists. With a descriptive research question, one can examine perceptions or attitudes as well as more concrete variables such as achievement. For example, one might describe a population of learners by gathering data on their age, gender, socioeconomic status, and attitudes towards their learning experiences. Relational questions examine the relationship between two or more variables. 
The X variable has some linear relationship to the Y variable. Causal inferences cannot be made from this type of research. For example, one could study the relationship between students' study habits and achievements. One might find that students using certain kinds of study strategies demonstrate greater learning, but one could not state conclusively that using certain study strategies will lead to or cause higher achievement. Causal questions, on the other hand, are designed to allow the researcher to draw a causal inference. A causal question seeks to determine if a treatment variable in a program had an effect on one or more outcome variables. In other words, the X variable influences the Y variable. For example, one could design a study that answered the question of whether a particular instructional approach caused students to learn more. The research question serves as a basis for posing a hypothesis, a predicted answer to the research question that incorporates operational definitions of the study's variables and is rooted in the literature. An operational definition matches a concept with a method of measurement, identifying how the concept will be quantified. For example, in a study of instructional strategies, the hypothesis might be that students of teachers who use Strategy X will exhibit greater learning than students of teachers who do not. In this study, one would need to operationalize learning by identifying a test or instrument that would measure learning. This approach allows the researcher to create a testable hypothesis. Relational and causal research relies on the creation of a null hypothesis, a version of the research hypothesis that predicts no relationship between variables or no effect of one variable on another. When writing the hypothesis for a quantitative question, the null hypothesis and the research or alternative hypothesis use parallel sentence structure. 
In this example, the null hypothesis states that there will be no statistical difference between groups, while the research or alternative hypothesis states that there will be a statistical difference between groups. Note also that both hypothesis statements operationalize the critical thinking skills variable by identifying the measurement instrument to be used. Once the research questions and hypotheses are solidified, the researcher must select a design that will create a situation in which the hypotheses can be tested and the research questions answered. Ideally, the research design will isolate the study's variables and control for intervening variables so that one can be certain of the relationships being tested. In educational research, however, it is extremely difficult to establish sufficient controls in the complex social settings being studied. Consider our example of investigating the impact of a certain classroom instructional strategy on student achievement. Each day the teacher uses a specific instructional strategy. After school, some of the students in her class receive tutoring. Other students have parents who are very involved in their child's academic progress and provide learning experiences in the home. These students may do better because they received extra help, not because the teacher's instructional strategy is more effective. Unless the researcher can control for the intervening variable of extra help, it will be impossible to effectively test the study's hypothesis. Quantitative research designs fall into two broad categories, experimental and quasi-experimental. Classic experimental designs are those that randomly assign subjects to either a control or treatment comparison group. The researcher can then compare the treatment group to the control group to test for an intervention's effect, known as a between-subjects design. 
It is important to note that the control group may receive a standard treatment or may receive no treatment at all. Quasi-experimental designs do not randomly assign subjects to groups, but rather take advantage of existing groups. A researcher can still have a control and comparison group, but assignment to the groups is not random. The use of a control group is not required, however; the researcher may instead choose a design in which a single group is pre- and post-tested, known as a within-subjects design. Or a single group may receive only a post-test. Since quasi-experimental designs lack random assignment, the researcher should be aware of the threats to validity. Educational research often attempts to measure abstract variables such as attitudes, beliefs, and feelings. Surveys can capture data about these hard-to-measure variables, as well as other self-reported information such as demographic factors. A survey is an instrument used to collect verifiable information from a sample population. In quantitative research, surveys typically include questions that ask respondents to choose a rating from a scale, select one or more items from a list, or give other responses that result in numerical data. Studies that use surveys or tests need to include strategies that establish the validity of the instrument used. There are many types of validity that need to be addressed. Face validity: does the test appear at face value to measure what it is supposed to measure? Content validity: this includes both item validity and sampling validity. Item validity ensures that the individual test items deal only with the subject being addressed. Sampling validity ensures that the range of item topics is appropriate to the subject being studied. For example, item validity might be high, but if all the items only deal with one aspect of the subject, then sampling validity is low. Content validity can be established by having experts in the field review the test. 
Concurrent validity: does a new test correlate with an older, established test that measures the same thing? Predictive validity: does the test predict performance on a related future measure? For example, GRE tests are used at many colleges because these schools believe that a good score on this test increases the probability that the student will do well at the college. Linear regression can establish the predictive validity of a test. Construct validity: does the test measure the construct it is intended to measure? Establishing construct validity can be a difficult task when the constructs being measured are abstract, but it can be done by conducting a number of studies in which you test hypotheses regarding the construct, or by completing a factor analysis to ensure that you have the number of constructs that you say you have. In addition to ensuring the validity of instruments, the quantitative researcher needs to establish their reliability as well. Strategies for establishing reliability include: Test-retest, which correlates scores from two different administrations of the same test. Alternate forms, which correlates scores from administrations of two different forms of the same test. Split-half reliability, which treats each half of one test or survey as a separate administration and correlates the results from each. Internal consistency, which uses Cronbach's coefficient alpha to calculate the average of all possible split halves. Quantitative research almost always relies on a sample that is intended to be representative of a larger population. There are two basic sampling strategies, random and non-random, and a number of specific strategies within each of these approaches. This table provides examples of each of the major strategies. The next section of this tutorial provides an overview of the procedures in conducting quantitative data analysis. 
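As a rough illustration of the internal-consistency strategy mentioned above, Cronbach's coefficient alpha can be computed with Python's standard library alone, using the common variance-based form alpha = k/(k-1) * (1 - sum of item variances / variance of respondents' total scores). This is a minimal sketch with hypothetical item scores, not code from the tutorial itself:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's coefficient alpha.

    `items` is a list of columns, one per test item; each column holds
    one score per respondent.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent total
    item_var = sum(pvariance(col) for col in items)       # sum of item variances
    total_var = pvariance(totals)                         # variance of totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Three survey items answered by five respondents (hypothetical data).
items = [
    [4, 3, 5, 2, 4],
    [4, 4, 5, 2, 3],
    [3, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # prints 0.904
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; the ratio is unchanged whether population or sample variances are used, as long as the choice is consistent.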
There are specific procedures for conducting the data collection, preparing for and analyzing the data, presenting the findings, and connecting to the body of existing research. This process ensures that the research is conducted as a systematic investigation that leads to credible results. Data come in various sizes and shapes, and it is important to know about these so that the proper analysis can be used on the data. In 1946, S.S. Stevens first described the properties of measurement systems that allowed decisions about the type of measurement and about the attributes of objects that are preserved in numbers. These four types of data are referred to as nominal, ordinal, interval, and ratio. First, let's examine nominal data. With nominal data, there is no number value that indicates quantity. Instead, a number has been assigned to represent a certain attribute, like the number 1 to represent male and the number 2 to represent female. In other words, the number is just a label. You could also assign numbers to represent race, religion, or any other categorical information. Nominal data only denote group membership. With ordinal data, there is again no indication of quantity. Rather, a number is assigned for ranking order. For example, satisfaction surveys often ask respondents to rank order their level of satisfaction with services or programs. The next level of measurement is interval data. With interval data, there are equal distances between two values, but there is no natural zero. A common example is the Fahrenheit temperature scale. Differences between the temperature measurements make sense, but ratios do not. For instance, 20 degrees Fahrenheit is not twice as hot as 10 degrees Fahrenheit. You can add and subtract interval-level data, but they cannot be divided or multiplied. Finally, we have ratio data. Ratio data are like interval data, except that there is a natural zero, so ratios, means, averages, and other numerical formulas are all possible and make sense. 
Zero has a logical meaning, indicating the absence of the quantity being measured. Examples of ratio data are height, weight, speed, or any quantities based on a scale with a natural zero. In summary, nominal data can only be counted. Ordinal data can be counted and ranked. Interval data can also be added and subtracted, and ratio data can also be used in ratios and other calculations. Determining what type of data you have is one of the most important aspects of quantitative analysis. Depending on the research question, hypotheses, and research design, the researcher may choose to use descriptive and/or inferential statistics to begin to analyze the data. Descriptive statistics are best illustrated through the lens of America's pastimes: sports, weather, the economy, the stock market, and even our retirement portfolios are all presented using descriptive analysis. The basic terminology of descriptive statistics consists of terms we are already familiar with: frequency, mean, median, mode, range, variance, and standard deviation. Simply put, you are describing the data. Some of the most common graphic representations of data are bar graphs, pie graphs, histograms, and box-and-whisker graphs. Attempting to reach conclusions and make causal inferences beyond graphic representations or descriptive analyses is referred to as inferential statistics. For example, examining the college enrollment of the past decade in a certain geographical region would assist in estimating what the enrollment for the next year might be. Frequently in education, the means of two or more groups are compared. When comparing means to assist in answering a research question, one can use a within-group, between-groups, or mixed-subjects design. In a within-group design, the researcher compares measures of the same subjects across time, therefore within-group, or under different treatment conditions. This can also be referred to as a dependent-group design. 
The most basic example of this type of quasi-experimental design would be if a researcher conducted a pretest of a group of students, subjected them to a treatment, and then conducted a post-test. The group has been measured at different points in time. In a between-groups design, subjects are assigned to one of two or more groups, for example, control, treatment 1, and treatment 2. Ideally, the sampling and assignment to groups would be random, which would make this an experimental design. The researcher can then compare the means of the treatment group to the control group. When comparing two groups, the researcher can gain insight into the effects of the treatment. In a mixed-subjects design, the researcher is testing for significant differences between two or more independent groups while subjecting them to repeated measures. Choosing a statistical test to compare groups depends on the number of groups, whether the data are nominal, ordinal, or interval, and whether the data meet the assumptions for parametric tests. Nonparametric tests are typically used with nominal and ordinal data, while parametric tests use interval and ratio-level data. In addition to this, some further assumptions are made for parametric tests: that the data are normally distributed in the population, that participant selection is independent, meaning the selection of one person does not determine the selection of another, and that the variances of the groups being compared are equal. The assumption of independent participant selection cannot be violated, but the others are more flexible. The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and especially appropriate as the method of analysis for a quasi-experimental design. When choosing a t-test, the assumptions are that the data are parametric. 
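The t statistic for the two-group comparison just described is easy to compute with the standard library alone. This is a minimal sketch of Welch's two-sample version (which does not assume equal group variances) using hypothetical group scores; converting the statistic to a p-value would normally be done with statistical tables or a package such as SciPy:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic, Welch's version (unequal variances allowed)."""
    va, vb = variance(a), variance(b)       # sample variances (n - 1 denominator)
    se = sqrt(va / len(a) + vb / len(b))    # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical post-test scores for a treatment and a control group.
treatment = [78, 85, 90, 88, 84, 91, 79, 86]
control   = [72, 75, 80, 78, 70, 74, 77, 76]
print(round(welch_t(treatment, control), 2))  # prints 4.87
```

A t value this large, with these group sizes, would be judged statistically significant at conventional alpha levels once compared against the t distribution.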
The analysis of variance, or ANOVA, assesses whether the means of more than two groups are statistically different from each other. When choosing an ANOVA, the assumptions are that the data are parametric. The chi-square test can be used when you have non-parametric data and want to compare differences between groups. The Kruskal-Wallis test can be used when there are more than two groups and the data are non-parametric. Correlation analysis is a set of statistical tests to determine whether there are linear relationships between two or more sets of variables from the same list of items or individuals, for example, achievement and performance of students. The tests provide a statistical yes or no as to whether a significant relationship or correlation exists between the variables. A correlation test consists of calculating a correlation coefficient between two variables. Again, there are parametric and non-parametric choices based on the assumptions of the data. The Pearson r correlation is widely used in statistics to measure the strength of the relationship between linearly related variables. The Spearman rank correlation is a non-parametric test that is used to measure the degree of association between two variables; it does not make any assumptions about the distribution and is used when the Pearson test would give misleading results. Kendall's tau is often also included in this list of non-parametric correlation tests, to examine the strength of the relationship when there are fewer than 20 rankings. Linear regression and correlation are similar and often confused. Sometimes your methodologist will encourage you to examine both calculations. Calculate a linear correlation if you measured both variables, x and y. Use the Pearson parametric correlation coefficient if you are certain you are not violating the test assumptions; otherwise, choose the Spearman non-parametric correlation coefficient. 
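The correlation and regression calculations above can be sketched by hand with the standard library. This is a minimal illustration with hypothetical data; the Spearman version here ignores tied ranks, which a real implementation would correct for:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r computed on the ranks (no tie correction)."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0.0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))

def linear_fit(x, y):
    """Least-squares slope and intercept for simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

hours  = [2, 4, 6, 8, 10]       # hypothetical hours studied
scores = [65, 70, 74, 80, 92]   # hypothetical exam scores

print(round(pearson_r(hours, scores), 3))     # prints 0.973
print(round(spearman_rho(hours, scores), 3))  # prints 1.0
slope, intercept = linear_fit(hours, scores)
print(round(slope, 2), round(intercept, 2))   # prints 3.2 57.0
```

The fitted line can then be used for prediction (slope * x + intercept), which is what distinguishes regression from correlation: correlation only quantifies the strength of the linear relationship.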
If either variable has been manipulated using an intervention, do not calculate a correlation. While linear regression does indicate the nature of the relationship between two variables, like correlation, it can also be used to make predictions, because one variable is considered explanatory while the other is considered a dependent variable. Establishing validity is a critical part of quantitative research. As with the rest of quantitative research, there is a defined approach or process for establishing validity. This also allows for the generalizability of the findings. For a study to be valid, the evidence must support the interpretations of the data, the data must be accurate, and their use in drawing conclusions must be logical and appropriate. Construct validity concerns whether what you did for the program was what you wanted to do, or whether what you observed was what you wanted to observe; that is, whether the operationalization of your variables is related to the theoretical concepts you are trying to measure. Are you actually measuring what you want to measure? Internal validity means that you have evidence that what you did in the study, i.e., the program, caused what you observed, i.e., the outcome, to happen. Conclusion validity is the degree to which conclusions drawn about relationships in the data are reasonable. External validity concerns the process of generalizing, or the degree to which the conclusions in your study would hold for other persons in other places and at other times. Establishing the reliability and validity of your study is one of the most critical elements of the research process. Once you have decided to embark upon the process of conducting a quantitative study, use the following steps to get started. First, review research studies that have been conducted on your topic to determine what methods were used. Consider the strengths and weaknesses of the various data collection and analysis methods. 
Next, review the literature on quantitative research methods. Every aspect of your research has a body of literature associated with it. Just as you would not confine yourself to your course textbooks for your review of research on your topic, you should not limit yourself to your course texts for your review of methodological literature. Read broadly and deeply from the scholarly literature to gain expertise in quantitative research. Additional self-paced tutorials have been developed on different methodologies and techniques associated with quantitative research. Make sure that you complete all of the self-paced tutorials and review them as often as needed. You will then be prepared to complete a literature review of the specific methodologies and techniques that you will use in your study. Thank you for watching.
Qualitative vs. Quantitative Research | Differences, ...
A practical guide to data analysis in general literature reviews
A systematic review follows explicit methodology to answer a well-defined research question by searching the literature comprehensively, evaluating the quantity and quality of research evidence rigorously, and analyzing the evidence to synthesize an answer to the research question. The evidence gathered in systematic reviews can be qualitative ...
Quantitative Research Excellence: Study Design and ...
(PDF) Quantitative Research Methods : A Synopsis Approach
Quantitative research involves collecting and analyzing numerical data to identify patterns, trends, and relationships among variables. This method is widely used in social sciences, psychology, economics, and other fields where researchers aim to understand human behavior and phenomena through statistical analysis. If you are looking for a quantitative research topic, there are numerous areas ...
Guidance on Conducting a Systematic Literature Review
Conducting and Writing Quantitative and Qualitative ...
An examination of the research methods and research designs employed suggests that on the quantitative side structured interview and questionnaire research within a cross-sectional design tends to ...
Synthesising quantitative and qualitative evidence to ...
First, review research studies that have been conducted on your topic to determine what methods were used. Consider the strengths and weaknesses of the various data collection and analysis methods. Next, review the literature on quantitative research methods. Every aspect of your research has a body of literature associated with it.