


Research Methods Key Term Glossary

Last updated 22 Mar 2021


This key term glossary provides brief definitions for the core terms and concepts covered in Research Methods for A Level Psychology.

Don't forget to also make full use of our research methods study notes and revision quizzes to support your studies and exam revision.

Aim

The researcher’s area of interest – what they are looking at (e.g. to investigate helping behaviour).

Bar chart

A graph that shows the data in the form of categories (e.g. behaviours observed) that the researcher wishes to compare.

Behavioural categories

Key behaviours, or collections of behaviours, that the researcher conducting the observation will pay attention to and record.

Case study

In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).

Closed questions

Questions where there are fixed choices of responses e.g. yes/no. They generate quantitative data

Co-variables

The variables investigated in a correlation

Concurrent validity

Comparing a new test with another test of the same thing to see if they produce similar results. If they do then the new test has concurrent validity

Confidentiality

Unless agreed beforehand, participants have the right to expect that all data collected during a research study will remain confidential and anonymous.

Confounding variable

An extraneous variable that varies systematically with the IV so we cannot be sure of the true source of the change to the DV

Content analysis

Technique used to analyse qualitative data which involves coding the written data into categories – converting qualitative data into quantitative data.

Control group

A group that is treated normally and gives us a measure of how people behave when they are not exposed to the experimental treatment (e.g. allowed to sleep normally).

Controlled observation

An observation study where the researchers control some variables - often takes place in laboratory setting

Correlational analysis

A mathematical technique where the researcher looks to see whether scores for two covariables are related
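A correlational analysis is usually summarised with a correlation coefficient. As a minimal sketch (the co-variables and scores below are hypothetical, not from the glossary), Pearson's r can be computed directly from its definition:

```python
# Illustrative sketch only: Pearson's correlation coefficient for a
# hypothetical pair of co-variables (hours revised vs. test score).
import math

def pearson_r(xs, ys):
    """Covariance of the deviations divided by the product of the spreads."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

hours_revised = [1, 2, 3, 4, 5]
test_scores = [52, 55, 61, 64, 68]

r = pearson_r(hours_revised, test_scores)
print(round(r, 3))  # → 0.994, a strong positive correlation
```

An r close to +1 indicates a positive correlation, close to -1 a negative correlation, and close to 0 no relationship.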

Counterbalancing

A way of trying to control for order effects in a repeated measures design, e.g. half the participants do condition A followed by B and the other half do B followed by A

Covert observation

Also known as an undisclosed observation as the participants do not know their behaviour is being observed

Critical value

The value that a calculated test statistic must reach for the result to be significant and the null hypothesis to be rejected.

Debriefing

After completing the research, the true aim is revealed to the participant. Aim of debriefing = to return the person to the state s/he was in before they took part.

Deception

Involves misleading participants about the purpose of a study.

Demand characteristics

Occur when participants try to make sense of the research situation they are in, guess the purpose of the research, or present themselves in a good way.

Dependent variable

The variable that is measured to tell you the outcome.

Descriptive statistics

Analysis of data that helps describe, show or summarize data in a meaningful way

Directional hypothesis

A one-tailed hypothesis that states the direction of the difference or relationship (e.g. boys are more helpful than girls).

Dispersion measure

A dispersion measure shows how a set of data is spread out; examples are the range and the standard deviation.

Double blind control

Participants are not told the true purpose of the research and the experimenter is also blind to at least some aspects of the research design.

Ecological validity

The extent to which the findings of a research study can be generalised to real-life settings.

Ethical guidelines

These are provided by the BPS - they are the ‘rules’ by which all psychologists should operate, including those carrying out research.

Ethical issues

There are 3 main ethical issues that occur in psychological research – deception, lack of informed consent and lack of protection of participants.

Evaluation apprehension

Participants’ behaviour is distorted as they fear being judged by observers

Event sampling

A target behaviour is identified and the observer records it every time it occurs

Experimental group

The group that received the experimental treatment (e.g. sleep deprivation)

External validity

Whether it is possible to generalise the results beyond the experimental setting.

Extraneous variable

Variables that, if not controlled, may affect the DV and give a false impression that an IV has produced changes when it hasn’t.

Face validity

Simple way of assessing whether a test measures what it claims to measure which is concerned with face value – e.g. does an IQ test look like it tests intelligence.

Field experiment

An experiment that takes place in a natural setting where the experimenter manipulates the IV and measures the DV

Histogram

A graph that is used for continuous data (e.g. test scores). There should be no space between the bars, because the data is continuous.

Hypothesis

This is a formal statement or prediction of what the researcher expects to find. It needs to be testable.

Independent groups design

An experimental design where each participant takes part in only one condition of the IV

Independent variable

The variable that the experimenter manipulates (changes).

Inferential statistics

Inferential statistics are ways of analyzing data using statistical tests that allow the researcher to make conclusions about whether a hypothesis was supported by the results.

Informed consent

Psychologists should ensure that all participants are helped to understand fully all aspects of the research before they agree (give consent) to take part

Inter-observer reliability

The extent to which two or more observers are observing and recording behaviour in the same way

Internal validity

In relation to experiments, whether the results were due to the manipulation of the IV rather than other factors such as extraneous variables or demand characteristics.

Interval level data

Data measured in fixed units with equal distance between points on the scale

Investigator effects

These result from the effects of a researcher’s behaviour and characteristics on an investigation.

Laboratory experiment

An experiment that takes place in a controlled environment where the experimenter manipulates the IV and measures the DV

Matched pairs design

An experimental design where pairs of participants are matched on important characteristics and one member allocated to each condition of the IV

Mean

Measure of central tendency calculated by adding all the scores in a set of data together and dividing by the total number of scores

Measures of central tendency

A measurement of data that indicates where the middle of the information lies e.g. mean, median or mode

Median

Measure of central tendency calculated by arranging scores in a set of data from lowest to highest and finding the middle score

Meta-analysis

A technique where rather than conducting new research with participants, the researchers examine the results of several studies that have already been conducted

Mode

Measure of central tendency which is the most frequently occurring score in a set of data
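The three measures of central tendency defined above (mean, median and mode) can be illustrated with a short sketch – the data set here is hypothetical:

```python
# Illustrative only: mean, median and mode for a hypothetical data set,
# using Python's built-in statistics module.
import statistics

scores = [2, 4, 4, 5, 7, 8, 12]

print(statistics.mean(scores))    # → 6   (sum of scores / number of scores)
print(statistics.median(scores))  # → 5   (middle score when ranked)
print(statistics.mode(scores))    # → 4   (most frequent score)
```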

Natural experiment

An experiment where the change in the IV already exists rather than being manipulated by the experimenter

Naturalistic observation

An observation study conducted in the environment where the behaviour would normally occur

Negative correlation

A relationship exists between two co-variables where, as one increases, the other decreases

Nominal level data

Frequency count data that consists of the number of participants falling into categories. (e.g. 7 people passed their driving test first time, 6 didn’t).

Non-directional hypothesis

A two-tailed hypothesis that does not predict the direction of the difference or relationship (e.g. girls and boys are different in terms of helpfulness).

Normal distribution

An arrangement of data that is symmetrical and forms a bell-shaped pattern where the mean, median and mode all fall in the centre at the highest peak

Observed value

The value that you have obtained from conducting your statistical test

Observer bias

Occurs when the observers know the aims of the study or the hypotheses and allow this knowledge to influence their observations

Open questions

Questions where there is no fixed response and participants can give any answer they like. They generate qualitative data.

Operationalising variables

This means clearly describing the variables (IV and DV) in terms of how they will be manipulated (IV) or measured (DV).

Opportunity sample

A sampling technique where participants are chosen because they are easily available

Order effects

Order effects can occur in a repeated measures design and refer to how the positioning of tasks influences the outcome, e.g. a practice effect or boredom effect on the second task

Ordinal level data

Data that is capable of being put into rank order (e.g. places in a beauty contest, or ratings for attractiveness).

Overt observation

Also known as a disclosed observation as the participants have given their permission for their behaviour to be observed

Participant observation

Observation study where the researcher actually joins the group or takes part in the situation they are observing.

Peer review

Before going to publication, a research report is sent to other psychologists who are knowledgeable in the research topic so they can review the study and check for any problems

Pilot study

A small scale study conducted to ensure the method will work according to plan. If it doesn’t then amendments can be made.

Positive correlation

A relationship exists between two co-variables where, as one increases, so does the other

Presumptive consent

Asking a group of people from the same target population as the sample whether they would agree to take part in such a study; if they say yes, it is presumed that the actual participants would also have agreed

Primary data

Information that the researcher has collected him/herself for a specific purpose e.g. data from an experiment or observation

Prior general consent

Before participants are recruited they are asked whether they are prepared to take part in research where they might be deceived about the true purpose

Probability

How likely something is to happen – can be expressed as a number (0.5) or a percentage (a 50% chance of tossing a coin and getting heads)
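As a hypothetical sketch of the coin-toss example, a probability can be estimated by simulation and expressed either way:

```python
# Illustrative only: estimating the probability of tossing heads by
# simulation, then expressing it as a number and as a percentage.
import random

random.seed(42)  # fixed seed so the run is reproducible
tosses = 10_000
heads = sum(random.random() < 0.5 for _ in range(tosses))

estimate = heads / tosses
print(f"P(heads) ≈ {estimate:.2f}, i.e. about {estimate:.0%}")  # close to 0.50 / 50%
```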

Protection of participants

Participants should be protected from physical or mental harm, including stress – the risk of harm must be no greater than that to which they are exposed in everyday life

Qualitative data

Descriptive information that is expressed in words

Quantitative data

Information that can be measured and written down with numbers.

Quasi experiment

An experiment often conducted in controlled conditions where the IV simply exists so there can be no random allocation to the conditions

Questionnaire

A set of written questions that participants fill in themselves

Random sampling

A sampling technique where everyone in the target population has an equal chance of being selected

Randomisation

Refers to the practice of using chance methods (e.g. flipping a coin) to allocate participants to the conditions of an investigation

Range

A measure of dispersion – the distance between the lowest and the highest value in a set of scores, calculated by subtracting the lowest score from the highest.

Reliability

Whether something is consistent. In the case of a study, whether it is replicable.

Repeated measures design

An experimental design where each participant takes part in both/all conditions of the IV

Representative sample

A sample that closely matches the target population as a whole in terms of key variables and characteristics

Retrospective consent

Once the true nature of the research has been revealed, participants should be given the right to withdraw their data if they are not happy.

Right to withdraw

Participants should be aware that they can leave the study at any time, even if they have been paid to take part.

Sample

A group of people that are drawn from the target population to take part in a research investigation

Scattergram

Used to plot correlations where each pair of values is plotted against each other to see if there is a relationship between them.

Secondary data

Information that someone else has collected e.g. the work of other psychologists or government statistics

Semi-structured interview

Interview that has some pre-determined questions, but the interviewer can develop others in response to answers given by the participant

Sign test

A statistical test used to analyse the direction of differences of scores between the same or matched pairs of subjects under two experimental conditions
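One such test is the sign test, where the calculated value comes from counting the direction of each pair's difference – a minimal sketch with hypothetical scores:

```python
# Illustrative only: a sign test on hypothetical paired scores from the
# same participants under two conditions.
condition_a = [12, 15, 9, 14, 10, 11, 13]
condition_b = [14, 18, 9, 17, 12, 10, 16]

# Keep only the sign of each difference; ties (no difference) are dropped
diffs = [b - a for a, b in zip(condition_a, condition_b) if b != a]
pluses = sum(1 for d in diffs if d > 0)
minuses = sum(1 for d in diffs if d < 0)

# The calculated value S is the smaller count, which is then compared
# with the critical value from a sign test table
s_value = min(pluses, minuses)
print(pluses, minuses, s_value)  # → 5 1 1
```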

Significance

If the result of a statistical test is significant, it is highly unlikely to have occurred by chance.

Single-blind control

Participants are not told the true purpose of the research

Skewed distribution

An arrangement of data that is not symmetrical as data is clustered to one end of the distribution

Social desirability bias

Participants’ behaviour is distorted as they modify this in order to be seen in a positive light.

Standard deviation

A measure of the average spread of scores around the mean. The greater the standard deviation, the more spread out the scores are.
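The two dispersion measures named in this glossary (the range and the standard deviation) can be sketched together – the scores below are hypothetical, and `statistics.stdev` uses the sample (n − 1) formula:

```python
# Illustrative only: range and standard deviation for hypothetical scores.
import statistics

scores = [4, 6, 8, 10, 12]

spread = max(scores) - min(scores)  # range: highest minus lowest
sd = statistics.stdev(scores)       # sample standard deviation

print(spread)        # → 8
print(round(sd, 2))  # → 3.16
```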

Standardised instructions

The instructions given to each participant are kept identical – to help prevent experimenter bias.

Standardised procedures

In every step of the research all the participants are treated in exactly the same way and so all have the same experience.

Stratified sample

A sampling technique where groups of participants are selected in proportion to their frequency in the target population

Structured interview

Interview where the questions are fixed and the interviewer reads them out and records the responses

Structured observation

An observation study using a predetermined coding scheme to record the participants' behaviour

Systematic sample

A sampling technique where every nth person in a list of the target population is selected
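Three of the sampling techniques defined above (random, systematic and stratified) can be sketched together – the population, strata and sample sizes here are all hypothetical:

```python
# Illustrative only: random, systematic and stratified sampling from a
# hypothetical target population of 20 people.
import random

random.seed(1)  # fixed seed so the run is reproducible
population = [f"P{i:02d}" for i in range(1, 21)]  # P01 .. P20

# Random sample: every member has an equal chance of selection
random_sample = random.sample(population, 5)

# Systematic sample: every nth person in the list (here n = 4)
systematic_sample = population[::4]
print(systematic_sample)  # → ['P01', 'P05', 'P09', 'P13', 'P17']

# Stratified sample: select from each stratum in proportion to its size.
# Suppose the first 12 people form one stratum (60%) and the last 8
# another (40%); a sample of 5 then takes 3 from the first and 2 from
# the second.
stratified_sample = random.sample(population[:12], 3) + random.sample(population[12:], 2)
print(len(stratified_sample))  # → 5
```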

Target population

The group that the researcher draws the sample from and wants to be able to generalise the findings to

Temporal validity

Refers to how likely it is that the time period when a study was conducted has influenced the findings and whether they can be generalised to other periods in time

Test-retest reliability

Involves presenting the same participants with the same test or questionnaire on two separate occasions and seeing whether there is a positive correlation between the two

Thematic analysis

A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data

Time sampling

A way of sampling the behaviour that is being observed by recording what happens in a series of fixed time intervals.

Type 1 error

A false positive – accepting the alternative/experimental hypothesis when it is in fact false (i.e. rejecting a true null hypothesis)

Type 2 error

A false negative – accepting the null hypothesis when it is in fact false (i.e. failing to detect a real difference or relationship)

Unstructured interview

Also known as a clinical interview; there are no fixed questions, just general aims, and it is more like a conversation

Unstructured observation

Observation where there is no checklist so every behaviour seen is written down in as much detail as possible

Validity

Whether something is true – measures what it sets out to measure.

Volunteer sample

A sampling technique where participants put themselves forward to take part in research, often by answering an advertisement



© 2002-2024 Tutor2u Limited. Company Reg no: 04489574. VAT reg no 816865400.

USC Libraries Research Guides: Organizing Your Social Sciences Research Paper

Glossary of Research Terms


This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research. Also included are common words and phrases defined within the context of how they apply to research in the social and behavioral sciences. Definitions have been adapted from the sources cited below.

  • Acculturation -- refers to the process of adapting to another culture, particularly in reference to blending in with the majority population [e.g., an immigrant adopting American customs]. However, acculturation also implies that both cultures add something to one another, but still remain distinct groups unto themselves.
  • Accuracy -- a term used in survey research to refer to the match between the target population and the sample.
  • Affective Measures -- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions.
  • Aggregate -- a total created from smaller units. For instance, the population of a county is an aggregate of the populations of the cities, rural areas, etc. that comprise the county. As a verb, it refers to totalling data from smaller units into a larger unit.
  • Anonymity -- a research condition in which no one, including the researcher, knows the identities of research participants.
  • Baseline -- a control measurement carried out before an experimental treatment.
  • Behaviorism -- school of psychological thought concerned with the observable, tangible, objective facts of behavior, rather than with subjective phenomena such as thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the study of mental states such as feelings and fantasies to the extent that they can be directly observed and measured.
  • Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.
  • Benchmarking -- systematically measuring and comparing the operations and outcomes of organizations, systems, processes, etc., against agreed upon "best-in-class" frames of reference.
  • Bias -- a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.
  • Case Study -- the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.
  • Causal Hypothesis -- a statement hypothesizing that the independent variable affects the dependent variable in some way.
  • Causal Relationship -- the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
  • Causality -- the relation between cause and effect.
  • Central Tendency -- any way of describing or characterizing typical, average, or common values in some distribution.
  • Chi-square Analysis -- a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio.
  • Claim -- a statement, similar to a hypothesis, which is made in response to the research question and that is affirmed with evidence based on research.
  • Classification -- ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.
  • Cluster Analysis -- a method of statistical analysis where data that share a common trait are grouped together. The data is collected in a way that allows the data collector to group data according to certain characteristics.
  • Cohort Analysis -- group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].
  • Confidentiality -- a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original consent agreement, unless permission is granted by the participant.
  • Confirmability [Objectivity] -- the findings of the study could be confirmed by another person conducting the same study.
  • Construct -- refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.
  • Construct Validity -- seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
  • Constructivism -- the idea that reality is socially constructed. It is the view that reality cannot be understood outside of the way humans interact, and that knowledge is constructed, not discovered. Constructivists believe that learning is more active and self-directed than either behaviorism or cognitive theory would postulate.
  • Content Analysis -- the systematic, objective, and quantitative description of the manifest or latent content of print or nonprint communications.
  • Context Sensitivity -- awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
  • Control Group -- the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.
  • Controlled Experiment -- an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].
  • Correlation -- a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to zero to +1.00. Also, a non-cause and effect relationship between two variables.
  • Covariate -- a product of the correlation of two related variables times their standard deviations. Used in true experiments to measure the difference of treatment between them.
  • Credibility -- a researcher's ability to demonstrate that the object of a study is accurately identified and described based on the way in which the study was conducted.
  • Critical Theory -- an evaluative approach to social science research, associated with Germany's neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.
  • Data -- factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.
  • Data Mining -- the process of analyzing data from different perspectives and summarizing it into useful information, often to discover patterns and/or systematic relationships among variables.
  • Data Quality -- this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and reliable [dependable].
  • Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises.
  • Dependability -- being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
  • Dependent Variable -- a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.
  • Deviation -- the distance between the mean and a particular data point in a given distribution.
  • Discourse Community -- a community of scholars and researchers in a given field who respond to and communicate to each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
  • Discrete Variable -- a variable that is measured solely in whole units, such as, gender and number of siblings.
  • Distribution -- the range of values of a particular variable.
  • Effect Size -- the amount of change in a dependent variable that can be attributed to manipulations of the independent variable. A large effect size exists when the value of the dependent variable is strongly influenced by the independent variable. It is the mean difference on a variable between experimental and control groups divided by the standard deviation on that variable of the pooled groups or of the control group alone.
  • Emancipatory Research -- research is conducted on and with people from marginalized groups or communities. It is led by a researcher or research team who is either an indigenous or external insider; is interpreted within intellectual frameworks of that group; and, is conducted largely for the purpose of empowering members of that community and improving services for them. It also engages members of the community as co-constructors or validators of knowledge.
  • Empirical Research -- the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.
  • Epistemology -- concerns knowledge construction; asks what constitutes knowledge and how knowledge is validated.
  • Ethnography -- method to study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through immersion into the culture or group. Research is completed through various methods but, since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
  • Expectancy Effect -- any unconscious or conscious cues that convey to the participant in a study how the researcher wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Expectancy effects can be minimized by using standardized interactions with subjects, automated data-gathering methods, and double blind protocols.
  • External Validity -- the extent to which the results of a study are generalizable or transferable.
  • Factor Analysis -- a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
  • Field Studies -- academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.
  • Focus Groups -- small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.
  • Framework -- the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.
  • Generalizability -- the extent to which findings and conclusions from a study conducted on a specific sample can be applied to other groups or situations, or to the population at large.
  • Grey Literature -- research produced by organizations outside of commercial and academic publishing, such as working papers, research reports, and briefing papers.
  • Grounded Theory -- practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
  • Group Behavior -- behaviors of a group as a whole, as well as the behavior of an individual as influenced by his or her membership in a group.
  • Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.
  • Independent Variable -- the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. In the earlier example of "gender" and "academic major," (see Dependent Variable) gender is the independent variable.
  • Individualism -- a theory or policy having primary regard for the liberty, rights, or independent actions of individuals.
  • Inductive -- a form of reasoning in which a generalized conclusion is formulated from particular instances.
  • Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.
  • Insiderness -- a concept in qualitative research that refers to the degree to which a researcher has access to and an understanding of persons, places, or things within a group or community based on being a member of that group or community.
  • Internal Consistency -- the extent to which all questions or items assess the same characteristic, skill, or quality.
  • Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
  • Life History -- a record of an event/events in a respondent's life told [written down, but increasingly audio or video recorded] by the respondent from his/her own perspective in his/her own words. A life history is different from a "research story" in that it covers a longer time span, perhaps a complete life, or a significant period in a life.
  • Margin of Error -- the permissible or acceptable deviation from the target or a specific value; the allowance for slight error, miscalculation, or changing circumstances in a study.
  • Measurement -- process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.
  • Meta-Analysis -- an analysis combining the results of several studies that address a set of related hypotheses.
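One common way to combine results across studies is an inverse-variance (fixed-effect) average of their effect sizes. A hedged Python sketch, where the function name and numbers are illustrative rather than drawn from any particular meta-analysis:

```python
def fixed_effect_pooled(effects, variances):
    """Inverse-variance weighted average of per-study effect sizes
    (a simple fixed-effect meta-analysis).
    Returns (pooled effect, variance of the pooled effect)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)
```

Studies measured with more precision (smaller variance) receive proportionally more weight in the pooled estimate.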
  • Methodology -- a theory or analysis of how research does and should proceed.
  • Methods -- systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.
  • Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. It is also referred to as blended methods, combined methods, or methodological triangulation.
  • Modeling -- the creation of a physical or computer analogy to understand a particular phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomenon.
  • Models -- representations of objects, principles, processes, or ideas often used for imitation or emulation.
  • Naturalistic Observation -- observation of behaviors and events in natural settings without experimental manipulation or other forms of interference.
  • Norm -- the norm in statistics is the average or usual performance. For example, students usually complete their high school graduation requirements when they are 18 years old. Even though some students graduate when they are younger or older, the norm is that any given student will graduate when he or she is 18 years old.
  • Null Hypothesis -- the proposition, to be tested statistically, that the experimental intervention has "no effect," meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.
  • Ontology -- a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.
  • Panel Study -- a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.
  • Participant -- individuals whose physiological and/or behavioral characteristics and responses are the object of study in a research project.
  • Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. This is standard procedure in publishing scholarly research.
  • Phenomenology -- a qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
  • Philosophy -- critical examination of the grounds for fundamental beliefs and analysis of the basic concepts, doctrines, or practices that express such beliefs.
  • Phonology -- the study of the ways in which speech sounds form systems and patterns in language.
  • Policy -- governing principles that serve as guidelines or rules for decision making and action in a given area.
  • Policy Analysis -- systematic study of the nature, rationale, cost, impact, effectiveness, implications, etc., of existing or alternative policies, using the theories and methodologies of relevant social science disciplines.
  • Population -- the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.
  • Position Papers -- statements of official or organizational viewpoints, often recommending a particular course of action or response to a situation.
  • Positivism -- a doctrine in the philosophy of science, positivism argues that science can only deal with observable entities known directly to experience. The positivist aims to construct general laws, or theories, which express relationships between phenomena. Observation and experiment are used to show whether the phenomena fit the theory.
  • Predictive Measurement -- use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.
  • Principal Investigator -- the scientist or scholar with primary responsibility for the design and conduct of a research project.
  • Probability -- the chance that a phenomenon will occur randomly. As a statistical measure, it is expressed as a p value.
  • Questionnaire -- structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.
  • Random Sampling -- a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
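The numbering-and-drawing procedure described above is what `random.sample` implements in Python's standard library. A minimal sketch (the function name `draw_random_sample` is ours; the seed exists only to make a demonstration repeatable):

```python
import random

def draw_random_sample(population, k, seed=None):
    """Draw k members of the population strictly by chance, without replacement."""
    rng = random.Random(seed)  # seeded RNG so a demonstration can be reproduced
    return rng.sample(list(population), k)
```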
  • Reliability -- the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.
  • Representative Sample -- sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.
  • Rigor -- degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.
  • Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical; generally, the larger the number in the sample, the higher the likelihood of a representative distribution of the population.
  • Sampling Error -- the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondents and the corresponding reduction in reliability.
  • Saturation -- a situation in which data analysis begins to reveal repetition and redundancy and when new data tend to confirm existing findings rather than expand upon them.
  • Semantics -- the relationship between symbols and meaning in a linguistic system. Also, the cuing system that connects what is written in the text to what is stored in the reader's prior knowledge.
  • Social Theories -- theories about the structure, organization, and functioning of human societies.
  • Sociolinguistics -- the study of language in society and, more specifically, the study of language varieties, their functions, and their speakers.
  • Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
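The definition above ("the square root of the average of the squared deviations") translates directly to code. A small Python sketch of the population standard deviation (scores are invented):

```python
def standard_deviation(scores):
    """Population standard deviation: the square root of the average
    squared deviation of each score from the distribution's mean."""
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5
```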
  • Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.
  • Statistical Bias -- characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.
  • Statistical Significance -- the probability that the difference between the outcomes of the control and experimental groups is great enough that it is unlikely to be due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level [e.g., 0.05 or 0.01].
  • Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., the probability that the differences occurred solely by chance is 5% or less] sufficiently determines significance.
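One concrete and assumption-light statistical test is a permutation test: shuffle the group labels many times and count how often chance alone produces a mean difference as large as the one observed. A hedged Python sketch, offered as one illustration of a statistical test rather than the definitive one (the data are invented):

```python
import random

def permutation_test(experimental, control, trials=10_000, seed=0):
    """Estimate the probability that the observed mean difference (or a
    larger one) would arise by chance alone, by reshuffling group labels."""
    rng = random.Random(seed)
    observed = abs(sum(experimental) / len(experimental) -
                   sum(control) / len(control))
    pooled = list(experimental) + list(control)
    n = len(experimental)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # randomly reassign members to the two groups
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / len(pooled[n:]))
        if diff >= observed:
            extreme += 1
    return extreme / trials  # estimated p value
```

A p value below the predetermined significance level (e.g., .05) would lead the researcher to reject the null hypothesis.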
  • Subcultures -- ethnic, regional, economic, or social groups exhibiting characteristic patterns of behavior sufficient to distinguish them from the larger society to which they belong.
  • Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.
  • Theory -- a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.
  • Treatment -- the stimulus or intervention administered to participants in an experiment; the manipulation of the independent variable whose effect on the dependent variable is measured.
  • Trend Samples -- method of sampling different groups of people at different points in time from the same population.
  • Triangulation -- a multi-method or pluralistic approach, using different methods in order to focus on the research topic from different viewpoints and to produce a multi-faceted set of data. Also used to check the validity of findings from any one method.
  • Unit of Analysis -- the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.
  • Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
  • Variable -- any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].
  • Weighted Scores -- scores in which the components are modified by different multipliers to reflect their relative importance.
  • White Paper -- an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.

Elliot, Mark, Fairweather, Ian, Olsen, Wendy Kay, and Pampaka, Maria. A Dictionary of Social Research Methods. Oxford, UK: Oxford University Press, 2016; Free Social Science Dictionary. Socialsciencedictionary.com [2008]. Glossary. Institutional Review Board. Colorado College; Glossary of Key Terms. Writing@CSU. Colorado State University; Glossary A-Z. Education.com; Glossary of Research Terms. Research Mindedness Virtual Learning Resource. Centre for Human Service Technology. University of Southampton; Miller, Robert L. and Brewer, John D. The A-Z of Social Research: A Dictionary of Key Social Science Research Concepts. London: SAGE, 2003; Jupp, Victor. The SAGE Dictionary of Social and Cultural Research Methods. London: Sage, 2006.

  • Last Updated: Dec 19, 2024 2:30 PM
  • URL: https://libguides.usc.edu/writingguide


This book contains over 1500 research and statistical terms, written in jargon-free, easy-to-understand terminology to help students understand difficult concepts in their research courses. This pocket guide is an ideal supplement to the many discipline-specific texts on research methods and statistics.

Glossary of Research Terms

  • By: Michael J. Holosko & Bruce A. Thyer
  • In: Pocket Glossary for Commonly Used Research Terms
  • Chapter DOI: https://doi.org/10.4135/9781452269917.n1
  • Subject: Anthropology, Business and Management, Criminology and Criminal Justice, Communication and Media Studies, Counseling and Psychotherapy, Economics, Education, Geography, Health, History, Marketing, Nursing, Political Science and International Relations, Psychology, Social Policy and Public Policy, Social Work, Sociology, Science, Technology, Computer Science, Engineering, Mathematics, Medicine
  • Keywords: administration; attitudes; decision making; emotion; errors; evidence-based medicine; information use; inquiry; instruments; knowledge; organizations; outcomes; persons; placebo effect; population; prejudice; risk; scale; scales of measurement; surveying; threats; trials; types of study
  • A posteriori logic : A Latin phrase literally meaning “from that which comes after.” Proving things from observations, experiences, research, evidence, or arguing from the effect to the cause.
  • AB design : A single-participant time series design in which repeated measurements are made until stability is presumably established, called a baseline (A), after which an intervention is introduced (B), and an appropriate number of measurements are made to gauge the effectiveness of the intervention. This type of design cannot usually establish causality.
  • ABA design : This is the same as the AB descriptive design, except a second baseline phase (A) is added. This may allow stronger causal inferences than the AB design.
  • ABAB design : This is the same as the AB descriptive design, except that second baseline (A) and treatment (B) phases are added. Repeated changes in outcomes following changes in intervention make a greater case for causality.
  • ABC design : A single-participant time series design in which measurements are made until the stability of the baseline (A) is established. Then, an intervention (B) is introduced, the results are measured, and finally, a second intervention (C) is added, and the results are then measured.
  • ABCB design : This is the same as the ABAB design, except that the second baseline phase is replaced by a modified treatment phase (C).
  • ABCD design : A single-participant research design intended to create or test hypotheses in which alternate treatments are used. This helps a researcher to understand how different treatments (B, C, and D) influence the client.
  • Abscissa : This is the horizontal line, or x-axis, on a graph.
  • Absolute benefit increase (ABI) : The absolute arithmetic difference in rates of good outcomes between experimental (EER) and control participants (CER) in a trial, calculated as EER – CER, and accompanied by a 95% confidence interval (CI).
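The EER − CER calculation with its 95% CI can be sketched using the normal approximation for a difference of two proportions (a common, but not the only, way to compute such an interval; the function name and figures are ours):

```python
import math

def absolute_benefit_increase(eer, cer, n_exp, n_ctrl):
    """ABI = EER - CER, with an approximate 95% confidence interval
    (normal approximation for the difference of two proportions)."""
    abi = eer - cer
    se = math.sqrt(eer * (1 - eer) / n_exp + cer * (1 - cer) / n_ctrl)
    return abi, (abi - 1.96 * se, abi + 1.96 * se)
```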
  • Absolute frequency distribution : A list of the values that a variable takes in a data set. It is usually a list ordered by quantity. It will show the number of times each value appears. The absolute frequency is the total number of occurrences of one variable.
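Such a list ordered by quantity is exactly what `collections.Counter` produces in Python; a minimal sketch (the data are invented):

```python
from collections import Counter

def absolute_frequencies(values):
    """List each value a variable takes together with the number of times
    it appears, ordered by quantity (most frequent first)."""
    return Counter(values).most_common()
```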
  • Absolute risk increase (ARI) : The absolute arithmetic difference in rates of bad outcomes between experimental (EER) and control participants (CER) in a trial, calculated as EER – CER, and accompanied by a 95% confidence interval (CI).
  • Absolute risk reduction (ARR) : The difference between the control group's event rate, or the proportion of participants responding to the placebo or other control treatments, and the experimental event rate, or the proportion of participants responding to the experimental treatment.
  • Abstract : A summary of a published article found on the first page beneath the title, which describes the article's most important aspects, including its purpose, methods used, major results, and conclusions.
  • Access : The ability to gain entry to a database, population, or participants for study.
  • Accessible population : The group of persons from which the researcher can realistically select participants for a study sample and from which the researcher may generalize findings.
  • Accidental sampling : This involves the sample being selected from that part of the population that is easy to access. That is, a sample is selected because it is readily available and convenient. This is a nonprobability-based sample, also known as a convenience sample.
  • Accountability : Responsibility to a person, organization, or authority for any research activity.
  • Accuracy : The degree of precision of a measured or calculated quantity to its actual value.
  • Achievement test : An instrument or scale used to measure the proficiency levels of individuals in given areas of knowledge or skill.
  • Acquiescence bias : A category of response bias in which respondents to a survey have a tendency to agree with all the questions or to indicate a positive reaction to them.
  • Acronyms : Abbreviations that are formed using the initial components in a phrase or name: for example, UN, APA, USDA.
  • Action or participatory research : Identifies a social problem or concern and seeks information about it by planned collaborations with individuals or organizations. This is one of the main methods of qualitative research.
  • Age : The length of time that an organism has lived. It is often used as a demographic variable in research studies.
  • Age-equivalent score : A score indicating the age level for which a particular performance range is typical.
  • Allocation concealment : This occurs when the person who is enrolling a participant into a clinical trial is unaware, or blinded, to whether the next participant to be enrolled will be put in the intervention or control group. This occurs when assigning individuals to groups to conduct experiments.
  • Alternate-forms method : A way of assessing reliability, or the degree to which a test produces the same results over time under similar conditions. This method is done by giving two forms of a test that are as similar as possible to research participants. The scores of the two tests are correlated to yield a coefficient of equivalence, with a high coefficient of equivalence indicating the overall test is reliable in that most or all of the items seem to be assessing the same characteristics, and a low coefficient indicating that the two tests are not assessing the same characteristic. This is a test that standardizes a measuring instrument.
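Correlating the same participants' scores on the two forms yields the coefficient of equivalence, i.e., a Pearson correlation. A plain-Python sketch (the function name and scores are invented for illustration):

```python
def coefficient_of_equivalence(form_a, form_b):
    """Pearson correlation between the same participants' scores on two
    parallel test forms; values near 1.0 suggest the forms are assessing
    the same characteristic."""
    n = len(form_a)
    mean_a, mean_b = sum(form_a) / n, sum(form_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(form_a, form_b))
    var_a = sum((a - mean_a) ** 2 for a in form_a)
    var_b = sum((b - mean_b) ** 2 for b in form_b)
    return cov / (var_a * var_b) ** 0.5
```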
  • Alternate-forms reliability : The degree to which a test produces the same results over time under similar conditions as measured by the alternate-forms method. This is a test that standardizes a measuring instrument.
  • Ambiguity : The property of a word, term, notation, sign, symbol, phrase, sentence, or any other form used for communication that can be interpreted in more than one way.
  • Ambiguity, direction of causal influence : This occurs when it is not clear what is causing what to happen or whether the cause precedes the effect. This is sometimes called the chicken or egg question. This is a threat to the internal validity of the findings of a study.
  • Ambiguous independent variable : This occurs when the independent variable is not clearly and operationally defined so that the study cannot be replicated exactly. This is a threat to the external validity of a study's findings.
  • American Psychological Association (APA) : A scientific and professional organization that represents psychologists in the United States. Most behavior and social science journals use the APA style writing format.
  • Anamnesis : An oral case history of a medical or psychiatric patient as recalled by that patient. This helps in forming a more complete diagnosis incorporating the patient's perception of his or her problem and how it affects him or her.
  • Androcentricity : Regarding man or the male sex as central, superior, or primary.
  • Anecdotal data : Based on casual observations or indications rather than rigorous or scientific analysis.
  • Anonymity : A result of not having any identifying characteristic, such as a name or description of physical appearance, disclosed to the researcher.
  • Antecedent variable : A variable that occurs before the independent variable and the dependent variable. The variable relationship formula would look like this: Antecedent variable (interest in cause) → Independent variable (interest group support) → Dependent variable (policy decisions).
  • Appearance : Outward or visible aspects of a person or thing.
  • Applied research : Systematic inquiry accessing and using some part of the research community's (the academy's) accumulated theories, knowledge, methods, and techniques for a specific patient-, student-, or client-driven purpose. It is one of the defining characteristics of professional research in education, social work, nursing research, and so on.
  • Aptitude test : An instrument or scale used to predict performance in a future situation. It's often used as a screening test to determine one's special capacities, for example, an IQ test.
  • Archival research : A form of descriptive and observational research where the researcher examines the accumulated written documents or records of a culture, for example, diaries, films, novels, newspapers, health data, and so on.
  • Archives : Records that have been accumulated over the course of an individual's or organization's lifetime. These are forms of secondary data.
  • Arm : Any part of the treatment group in a randomized clinical trial. Most trials have at least two arms.
  • Asking errors : These are mistakes that an interviewer makes in altering the questionnaire by consciously or unconsciously omitting certain questions, changing wording or the tone of the interview questions, and so on.
  • Assent form : A written ethics form that documents agreement to participate by individuals who cannot give consent either because they are minors or because they are legally incompetent. Ethically, such individuals must not be enrolled in the study if they do not want to participate.
  • Assertive community treatment : A team treatment approach designed to provide comprehensive, community-based psychiatric treatment, rehabilitation, and support to persons with serious and persistent mental illnesses, such as schizophrenia. These are usually in-patient treatments.
  • Associational research : A general type of research in which a researcher looks for relationships having predictive or explanatory power. Both correlational and causal–comparative studies are examples of this type of research.
  • Assumption : Any assertion presumed to be true but not actually verified. Major assumptions should be described in one of the first subsections of a research proposal or report. They often underpin how the study is rationalized and conducted.
  • Attribution : Giving credit for information to the person who discovered it.
  • Audiences : Groups of people who participate in a show or encounter a work of art, literature, theater, music, or academics in any medium. When writing a research report, a researcher needs to consider who his or her target audience will be so that his or her word choice, methods, and dissemination are appropriate.
  • Audiotape recordings : Information, such as interview conversations, captured on audiotape. This helps a researcher to remember what was said. Their use requires the ethical consent of the research participants. They are often used in qualitative research.
  • Audit trails : A chronological sequence of audit records, each of which contains evidence directly pertaining to, and resulting from, the execution of a business process, system function, or research study or investigation.
  • Author affiliation : Where the author of a research study works or obtains funds to finance the research.
  • Autonomy : The power to govern oneself and make one's own decisions. It is important for research participants to be ethically respected for this principle during a study.
  • Autoregressive integrated moving average (ARIMA) : This statistic is a Box-Jenkins approach to time series analysis. It tests for changes in the data patterns pre- and postintervention within the context of analyzing the outcomes of a time series design.
  • Availability sampling : A method of sampling based on participants who are available and willing to participate in the study. This is a nonprobability-based sample.
  • Average : A single number representing a group's typical score, obtained by summing all scores and dividing by the number of scores. It is also called the arithmetic mean.
  • B design : A single-participant design wherein formal measurement begins at the same time intervention is initiated.
  • BAB design : The same as an ABAB design, except that the initial baseline phase is omitted. It is used to evaluate the impact of a treatment already in place.
  • Background questions : Questions asked by an interviewer or on a questionnaire that obtain information about a respondent's background (e.g., age, occupation, etc.). These are referred to as demographic or sociodemographic questions and are often treated as independent variables in studies.
  • Bar charts : A chart with rectangular bars with lengths proportional to the values that they represent. Bar charts are used for comparing two or more values.
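A bar chart's defining property, bar length proportional to value, can be shown even in plain text; a minimal Python sketch (the rendering style and labels are our own invention):

```python
def bar_chart(values, width=40):
    """Render a plain-text bar chart: one bar per label, with bar length
    proportional to the value it represents."""
    peak = max(values.values())
    lines = []
    for label, value in values.items():
        bar = "#" * round(width * value / peak)  # scale bar to the largest value
        lines.append(f"{label:<10}{bar} {value}")
    return "\n".join(lines)
```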
  • Baseline : The plotted record of a series of measurements taken prior to the introduction of an intervention in a time series design. It is used to benchmark how the individual is functioning prior to any change. It is the A phase of the study.
  • Basic research : Research carried out to increase understanding of fundamental principles of a theory or topic rather than research that is designed to be applied to practical pursuits. This need not be empirical research and is also referred to as pure research.
  • BC design : A single-participant research design intended to develop an understanding of the potential effects of two different types of interventions (B and C).
  • BCBC design : Time-lapse, single-participant research that compares two different types of interventions (B and C). It permits stronger inferences than the BC design.
  • Before-after design : See one-group pretest-posttest design, also called a pre- and posttest design.
  • Behavior questions : See experience questions.
  • Behavioral observation code : A written document laying out a number of categories of types of behavior that will be measured for a study and how they will be defined. It is usually done in a single chart, ticking the incidents one is observing, for example, by check marks.
  • Bell-shaped distribution : The symmetrical distribution of values around the mean; in its standardized form it has a mean of zero and a variance of one. It is called the bell curve because the graph of its probability density resembles a bell. It is also called the normal curve.
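As a small illustrative sketch (using Python's standard library, with a made-up simulation rather than real study data), values drawn from a standard normal distribution cluster around a mean of zero with a variance of one:

```python
import random
import statistics

# Draw values from a standard normal distribution (mean 0, variance 1)
# and confirm the sample statistics land near those parameters.
random.seed(42)  # fixed seed so the sketch is reproducible
sample = [random.gauss(0, 1) for _ in range(10_000)]

sample_mean = statistics.mean(sample)
sample_var = statistics.pvariance(sample)

print(round(sample_mean, 2))  # close to 0
print(round(sample_var, 2))   # close to 1
```

With a larger sample, the histogram of such values traces out the familiar bell shape.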
  • Belmont Report : A report created by the former United States Department of Health, Education, and Welfare (later renamed Health and Human Services) in 1979, which established the basic federal ethical principles that still guide the use of human participants for research conduct.
  • Benchmark : This refers to a standard or point of reference against which program processes or outcomes can be compared, typically used in program evaluation studies.
  • Beneficence : A primary ethical concern of social research. It refers to both doing no harm to people you are studying and, at the same time, promoting a common good for individuals in the research community because of your study. Its origin in present-day social research in America can be traced back to the Belmont Report.
  • Best practices : A technique, method, process, activity, incentive, treatment, or intervention that is more effective at delivering a particular outcome than another technique, method, process, and so on. These are typically based on empirical studies and findings.
  • Bias : Having a tendency, prejudice, or preference toward a particular perspective. Six prominent forms of this in social research and evaluation include class, gender, race, cultural, social status, and prestige.
  • Bibliographies : An alphabetical list of books and other works, such as journal articles or websites, used in a research report.
  • Biometrics : Also called biostatistics, this is the science of collecting and analyzing biologic or health data using statistical methods.
  • Black box evaluation : Evaluation of program outcomes without the benefit of an articulated program theory or an understanding of why an intervention may work.
  • Blanket consent forms : General written contracts given to research participants to waive their rights. Blanket consents are not as ethically appropriate as consent forms that are more specific to the particular study being done.
  • Blind, blinded experiment : A study in which the researchers do not tell the participants if they are being given a test treatment or a control treatment in an effort to reduce bias in the results.
  • Blind review : A process in which the peer adjudicators of a refereed manuscript are not told the author's identity.
  • Boolean operators : Terms such as and, or, and not, used to express the relationship of one term to another when searching electronic databases.
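As an illustrative sketch, the effect of these operators can be mimicked with set operations over hypothetical keyword indexes (all keywords and article IDs below are invented):

```python
# Hypothetical article IDs indexed under each keyword (illustrative only).
indexed = {
    "adolescents": {1, 2, 3, 5},
    "anxiety":     {2, 3, 4},
    "medication":  {3, 4, 6},
}

# AND narrows the search: records indexed under both terms.
both = indexed["adolescents"] & indexed["anxiety"]    # {2, 3}

# OR widens the search: records indexed under either term.
either = indexed["adolescents"] | indexed["anxiety"]  # {1, 2, 3, 4, 5}

# NOT excludes: adolescent-anxiety records not about medication.
excluding = both - indexed["medication"]              # {2}

print(sorted(both), sorted(either), sorted(excluding))
```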
  • Callbacks : A technique where the researcher follows up when conducting a survey by telephone or e-mail to remind participants to complete their questionnaires and to return them in a timely fashion.
  • Campbell Collaboration : An international nonprofit organization that produces systematic reviews of research evidence and is focused on the fields of social welfare, education, and criminal justice.
  • Captive audience : A group of people exposed to a study condition that is in some way involuntary. For instance, when an audience is exposed to a commercial at a movie, that audience is a captive audience for the commercial being aired.
  • Carryover effect : The persistence of a treatment's or condition's effects from one experiment, or one phase of an experiment, to another. There is a possibility of this whenever subjects perform in more than one experimental condition or are given a treatment that is stopped and then reintroduced.
  • Case-control study : A study that involves identifying participants who have the outcome of interest (cases) and participants without the same outcome (controls) and assessing if they had the exposure of interest.
  • Case-level research design : A study organized to explore a particular intervention with one participant in depth. It focuses on descriptive detail rather than scope and is often used in qualitative research.
  • Case management : This refers to coordination of services to help meet an individual's social service needs, usually when the person has a health or mental health condition that requires multiple services from providers.
  • Case series : A report on a series of participants with an outcome of interest. No control or comparison group is involved.
  • Case study : An in-depth investigation of an individual, group, or institution to determine the variables, and relationships among the variables, influencing the current behavior or status of the participant of the study. This is one of the main methods of qualitative research.
  • Categorical data, variables : Data (variables) that differ only in kind, not in amount or degree. Nominal data are categorical: for example, female versus male, true versus false.
  • Categorical level of measurement : See measurement level.
  • Category scheme : A way to organize information under topics, for instance, listing health problems under cardiac, nephrology, respiratory, endocrine, and oncology.
  • Causal-comparative research : Research to determine the causes for, or consequences of, existing differences in groups of individuals; also referred to as ex post facto research.
  • Causal modeling : This refers to a multivariate statistical procedure that calculates the significance and strength of the relationships among a set of independent variables, as well as the relationship between each independent variable and a dependent variable, usually presented in a path diagram.
  • Causal questions : Inquiries asking how something affected or influenced something else. They normally explore a problem with a manipulated intervention impacting an outcome.
  • Causal relationship : The relationship between an event (the cause, usually the independent variable) and the resulting event (the effect, usually the dependent variable), where the second event is considered a consequence of the first.
  • Causal validity : The degree of certainty to which it can be said that an intervention caused the outcome.
  • Census : Secondary data collected by the government from every member of a population to determine trends and comparative, updated information.
  • Census divisions, regions : The nine U.S. census divisions are (1) New England, (2) Middle Atlantic, (3) South Atlantic, (4) East South Central, (5) West South Central, (6) East North Central, (7) West North Central, (8) Mountain, and (9) Pacific. The four census regions are (1) Northeast, (2) South, (3) Midwest, and (4) West.
  • Central tendency : A measure of where the center of a distribution lies. The three most commonly used indices are mean, or arithmetic average; median, or that number or point that divides the data set in half; and mode, or the most frequently occurring number or value.
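A quick sketch of the three indices, using Python's standard library on an invented set of scores:

```python
import statistics

# Illustrative scores (made up for this sketch).
scores = [2, 3, 3, 4, 5, 5, 5, 8]

print(statistics.mean(scores))    # arithmetic average -> 4.375
print(statistics.median(scores))  # middle of the ordered data -> 4.5
print(statistics.mode(scores))    # most frequent value -> 5
```

Note that the three indices need not agree; with skewed data, they can differ substantially.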
  • Centrality of purpose : Maintaining a focus on the mission or initial intent of the study to assess whether the purpose is threaded throughout the following method, results, conclusions, and implications.
  • Chance : The likelihood of the occurrence of an event.
  • Chaos theory : A theory and methodology of science that emphasizes the rarity of general laws, the need for large databases, and the importance of studying exceptions to overall patterns. It looks at random events, sudden changes, reversals, and paradoxical trends.
  • Cheating : Dishonesty or falsification of any kind with respect to examinations, course assignments, or alteration of records. It would be cheating and unethical, as a researcher, to alter one's hypothesis, after a study was completed, to match the results found.
  • Checklists : A list of items (e.g., names or tasks, etc.) to be completed, consulted, and then compiled.
  • Chi-square (χ²) : A nonparametric test of statistical significance appropriate when the data are in the form of frequency counts. It compares frequencies actually observed in a study with expected frequencies to see if they are significantly different.
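The statistic itself is the sum, across categories, of (observed − expected)² / expected. A minimal sketch (the frequencies below are invented, not from any study):

```python
# Observed frequencies from a hypothetical two-category survey,
# against the expected frequencies under "no difference".
observed = [30, 20]
expected = [25, 25]

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_square)  # (30-25)**2/25 + (20-25)**2/25 = 2.0
```

The resulting value is then compared against a critical value for the appropriate degrees of freedom to judge significance.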
  • Citation : A reference to a source of information that could be published or unpublished.
  • Clarity : The quality of something that is free from obscurity and easy to understand.
  • Classical experimental design : A design that has four nonmutually exclusive criteria: a comparison or control group; randomization or random assignment to condition; specification of a test or null hypothesis; and manipulation of the independent variable. The objective of this design is to assess the impact of the independent variable(s) on the dependent variable(s). The assumption is that the dependent variable in the experimental group will change in a specific way (the hypothesis) and that the dependent variable in the control group will not change.
  • Classical notation system : In experimental designs, there are certain symbols used to describe the main features of the designs. These include R, random assignment to condition or treatment; X, exposure to treatment or intervention; and O, observational period of assessment.
  • Client-Oriented Practical Evidence Searches (COPES) : A model that assists evidence-based practitioners in formulating questions for databases that will get them the most applicable questions and results.
  • Client outcomes : The short- and long-term benefits that people receive from services or interventions by human services organizations. They are typically found in evaluation studies.
  • Client service delivery system : An organization set up to link community services, like Meals on Wheels or Temporary Assistance for Needy Families, to the people who need them.
  • Clinical case identification model : A way of diagnosing problems, like those defined by the DSM-IV Manual, in which symptoms are compared to norms to create a specified diagnosis.
  • Clinical cutting points, scores : A benchmark point or score that sets the line dividing those who are in a problem area from those who are not. For instance, if the cutoff for depression is symptoms experienced for 2 months, this is where the distinction will be made.
  • Clinical evidence : Proof of a theory, or validation of case evidence or effectiveness, that comes from experience in the field rather than from formal research studies or statistical significance.
  • Clinical practice guidelines : A systematically developed set of standards designed to assist the clinician and participant make decisions about appropriate care for specific clinical circumstances.
  • Clinical trial : An experiment comparing the effects of two or more health-care interventions. It is an umbrella term for a variety of designs of health-care trials, including uncontrolled trials, controlled trials, and randomized controlled trials. This is also called an intervention study.
  • Closed-ended question : A question with a list of responses from which the respondent chooses an answer, also referred to as a closed-form item, for example, yes-or-no answers, or a three-point rating of always, sometimes, or never, and so on.
  • Cluster diagram : A type of nonlinear graphic organizer that can help systematize the generation of ideas based on a central topic. Using this type of diagram, a researcher can brainstorm a theme, free-associate on an idea, or explore a new topic. This is also called a cloud diagram.
  • Cluster sampling : The selection of groups of individuals, called clusters, rather than single individuals (e.g., counties, schools, agencies). All individuals in a unique group or cluster are purposely included in the sample selected; the clusters are preferably selected randomly from the larger population of clusters. It can be used for selecting both probability and nonprobability samples.
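A minimal sketch of the procedure (school and student names are invented): randomly select clusters, then include every individual in each selected cluster:

```python
import random

# Hypothetical clusters: every individual in each sampled school is included.
schools = {
    "North": ["Ana", "Ben"],
    "South": ["Cal", "Dee"],
    "East":  ["Eli", "Fay"],
    "West":  ["Gus", "Hal"],
}

random.seed(7)  # fixed seed for a reproducible sketch
sampled_clusters = random.sample(sorted(schools), k=2)  # randomly pick 2 schools

# All members of each selected cluster enter the sample.
sample = [person for school in sampled_clusters for person in schools[school]]
print(sampled_clusters, sample)
```

Contrast this with simple random sampling, where individuals rather than whole groups are drawn.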
  • Cochrane Collaboration : An international organization whose aim is to help people make well-informed decisions about health care by preparing, maintaining, and ensuring the accessibility of systematic reviews of professional literature.
  • Cochrane Database of Systematic Reviews : A database of systematic reviews created by a group of over 11,500 volunteers in more than 90 countries.
  • Code of ethics : A set of moral principles created by professional associations to provide guidelines for the professional, ethical behavior and conduct of their members, who hold professionally licensed degrees.
  • Coding : The process of converting information obtained on a participant or unit into values (typically numeric) for the purposes of data storage, reduction, management, and analysis.
  • Coefficient of determination (r²) : The square of the correlation coefficient (r). It indicates the proportion of variance in one variable that is accounted for by its linear relationship with the other.
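As a sketch of the relationship between r and r² (the paired scores below are invented for illustration), the Pearson coefficient can be computed directly from deviations about the means and then squared:

```python
import math

# Made-up paired scores for this sketch.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = sum(x) / len(x), sum(y) / len(y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)  # Pearson correlation coefficient
r_squared = r ** 2              # coefficient of determination
print(round(r, 3), round(r_squared, 3))  # 0.775 0.6
```

Here r² = 0.6 means 60% of the variance in y is accounted for by its linear relationship with x.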
  • Coercion : To compel or force someone to act or think in a certain way by the use of leverage, power, intimidation, threats, or pressure. This is not allowed with research participants, as their participation must be voluntary.
  • Cognitive interviewing : The administration of draft survey questions while collecting additional verbal information about the survey responses. It is used to evaluate the quality of the response or to help determine whether the question is generating the information that the author intends.
  • Cohort : A group of individuals sharing certain significant characteristics in common, such as sex, age, time, place of birth, and so on.
  • Cohort analysis : Separating each of two groups into component parts and comparing the results of one group with those of another to see if there is a group difference.
  • Cohort study : An observational study in which a defined group of people (the cohort) is followed over time. The outcomes of people in subsets of this cohort are compared to examine those who were exposed or not exposed (or exposed at different levels) to a particular intervention or other factors of interest.
  • Collateral observers : People other than the person studied who can provide additional information. For instance, a researcher might both interview a participant about his or her behavior and then also ask his or her roommate or significant others.
  • Compare-and-contrast questions : Inquiries that lead a person to consider how two or more topics are alike and simultaneously how they are different.
  • Comparison group : The group in a research study that receives a different treatment from that of the experimental group, a placebo treatment, or no treatment. It is sometimes referred to as the untreated or control group.
  • Compensatory equalization, threat : The use of an experimental treatment that has actual or potential value to participants, such that authorities or participants may be unwilling to tolerate an imposed inequity in the distribution of the treatment for ethical reasons. This is a threat to the internal validity of the findings of a study.
  • Compensatory rivalry, threat : This occurs when the comparison group knows what the program group is getting and, therefore, develops a competitive attitude toward them. This is a threat to the internal validity of the findings of a study.
  • Competent : When an individual has the physical, mental, and emotional ability to answer and understand research questions posed to him or her.
  • Complete observer : A role that requires a researcher's identity to remain hidden when engaging in a study. The researcher makes observations of the setting unobtrusively, using devices such as a hidden video camera, or by remaining behind a one-way mirror or a screen to avoid detection.
  • Complete participant : A researcher who collects data by participating in the daily lives of those he or she is studying. This method is typically used in qualitative research.
  • Compliance : The act of following orders and adhering to rules and policies. In research, it refers to participants following the protocols for study and data collection procedures.
  • Composition question : An inquiry that is structured in such a way as to make a research participant give a detailed answer, usually in the form of an essay.
  • Comprehension : An ability to understand the meaning or importance of something.
  • Computer-assisted telephone interviewing (CATI) : A data collection method in which researchers use random digit dialing to phone potential research respondents, ask questions as directed by the computer, and key the responses directly into the system.
  • Computerized databases : Information that has been organized and loaded into computer files for retrieval. Census or hospital data are examples of such.
  • Computerized, interactive voice responses : A type of technology that allows a computer to detect voice and keypad inputs.
  • Concept : An abstract or general idea inferred or derived from specific instances. Concepts subsume variables, which are dimensions of them that can be operationalized and studied.
  • Conceptual classification system : Putting data into discrete conceptual categories to analyze them. For example, in quantitative research, factor analysis loads items that hang together empirically in a conceptual category on a scale or instrument.
  • Conceptual framework : A type of intermediate configuration of variables or aspects of a theory that have the potential to connect to all aspects of inquiry (e.g., problem definition, purpose, literature review, methodology, data collection, and analysis).
  • Concurrent validity : The degree to which the scores on an instrument are related to the scores on another instrument administered at the same time or to some other criterion available at the same time. This is a test that can be used to validate a measuring instrument.
  • Confidentiality : Prevention of disclosure, other than to authorized individuals, of a client's proprietary information, investigation findings, or of a participant's identity. It is a requirement of ethical consent to conduct research.
  • Confirmability : Capability of being tested (verified or falsified) by an experiment or observation.
  • Confirmatory factor analysis (CFA) : The use of factor and item analysis to confirm the underlying dimension(s) in an empirical assessment of the internal structure of a scale, instrument, or measure. It is one of the two main terms of factor analysis, the other being exploratory.
  • Conflict of interest : A situation in which someone in a position of trust has competing professional or personal interests. The perception of conflict is still a conflict of interest.
  • Consistency : Logical coherence and accordance with the facts. It refers to a pattern of repetition in answers, questions, use of measurements, observations, and so on.
  • Constant : A characteristic, variable, or influence that has the same value or impact for all individuals in a study.
  • Constant comparison method : A technique for analyzing qualitative data in which data in the form of field notes, observations, interviews, and the like are coded; each segment of the data is then taken in turn, compared with one or more categories to determine its relevance, and compared with other segments of data similarly categorized.
  • Constant measurement error : An error that affects all items comprising a group in a similar manner and to a similar magnitude. Repeated errors are caused by a flaw in the system (such as in the calibration of a measuring device), occur in the same direction, and therefore do not cancel each other out. This is also called systematic error.
  • Constitutive definition : Defines a concept with other concepts and constructs, establishing boundaries for the construct under study and stating the central idea or concept under study; for example, dementia in a study could be just cognitive, not behavioral impairment.
  • Construct validity : The extent to which scores on a particular test represent the actual distribution of the characteristic that the test is supposed to assess. This is a test that standardizes a measuring instrument.
  • Content analysis : The analysis of recorded human communications, such as books, websites, facts, trends, paintings, or laws. It is most commonly used by researchers in the social sciences to analyze recorded transcripts of interviews with participants or conduct secondary analysis of published materials to discern patterns. It is often used in qualitative research studies.
  • Content validity : A nonstatistical type of appraisal that involves the systematic examination of the test content to determine whether it covers a representative sample of the behavior domain to be measured. It is concerned with a test's ability to include or represent all of the content of a particular construct.
  • Context effect : The influence of environmental factors on one's perception of a stimulus; for example, the halo effect refers to a situation in which someone or something is judged good or bad in one category and is then uncritically judged good or bad in other categories.
  • Contingency coefficient : An index of relationship derived from a cross break or chi-square (χ²) table.
  • Contingency question : A question whose answer depends on the answer to a prior question.
  • Contingency table : A table used to record and analyze the relationship between two or more variables, most usually categorical variables. Chi-square (χ²) tables are forms of these.
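A small sketch showing how the cell counts of a contingency table can be tallied from paired categorical observations (the records below are invented):

```python
from collections import Counter

# Hypothetical paired observations: (gender, response).
records = [("F", "yes"), ("F", "no"), ("F", "yes"),
           ("M", "no"),  ("M", "yes"), ("M", "no")]

table = Counter(records)  # cell counts of the 2 x 2 contingency table
for row in ("F", "M"):
    print(row, [table[(row, col)] for col in ("yes", "no")])
# F [2, 1]
# M [1, 2]
```

The resulting cell frequencies are exactly what a chi-square test would compare against expected frequencies.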
  • Continuous data : Data with a potentially infinite number of possible values within a given range, for example, height and weight.
  • Contradictory evidence : Results that disprove a hypothesis or seem to deny a researcher's assumptions or claims. These should be acknowledged in a research report.
  • Contraindication : A specific research circumstance when the use of certain treatment in a study could be harmful to participants.
  • Control : Efforts on the part of the researcher to remove the effects of any extraneous factors, influences, or variables, other than the independent variable, that might affect performance on a dependent variable.
  • Control event rate (CER) : See treatment effects.
  • Control group : The comparison group in a research study that receives standard treatment, placebo treatment, or no treatment rather than being exposed to the independent variable (or experimental treatment).
  • Control variable : A variable that does not change or is held constant in the study in order to analyze the relationship between other variables without interference.
  • Convenience sample : A sample that is easily accessible. This does not represent the whole population and may be chosen because of time, availability, or financial constraints. It is also sometimes referred to as a grab or opportunity sample. This is a nonprobability-based sample.
  • Convergent validity : The degree to which an operation, instrument, or scale is similar to (centers on) other operations, instruments, or scales, to which it theoretically should also be similar. This is a form of validity of measurement.
  • Correlated variation : A concept from Darwinian theory holding that the whole system or organization is so tied together during its growth and development that, when slight variations in any one part occur and are accumulated through natural selection, other parts then become modified.
  • Correlation (r) : A statistical measure of the degree of association or relationship between or among two or more variables. It can be positive or negative and ranges from −1.00 to 1.00.
  • Correspondence : Communications between individuals, often through letters, memos, flyers, or e-mails.
  • Cost-benefit analysis (CBA) : An assessment of whether the cost of an intervention is worth the benefit by measuring both in the same units; monetary units are usually used. It is typically used in program evaluation studies.
  • Cost consideration : When a researcher takes into account what negative results the study might have for participants compared with what benefits will be gained by the study.
  • Cost-effectiveness or efficiency analysis (CEA) : A measure of the net cost of providing an intervention as well as the outcomes obtained. Outcomes are reported in a single unit of measurement. It is typically used in program evaluation studies.
  • Cost-minimization analysis : When conducting such a study, the researcher needs to measure all costs inherent to the delivery of the therapeutic intervention. If effects are known to be equal, only costs are analyzed, and the least costly alternative is chosen.
  • Cost-utility analysis : A measure that converts health effects into personal preferences (or utilities) and describes how much it costs for some additional quality gain (e.g., cost per additional quality-adjusted life year, or QALY). It is typically used in program evaluation studies.
  • Counterfactual condition : Projecting or providing empirical evidence about what would have happened if the intervention program was not offered. It is typically used in program evaluation studies.
  • Covariation : As the values of one variable change (either increasing or decreasing), the values of the other variable(s) also change. Covariation can be either positive or negative.
  • Cover letters : Letters sent along with other documents to provide additional information, for example, study purpose, design, and so on, in a survey or research interview.
  • Cox model : This is a semiparametric statistical technique used to analyze the survival of patients in clinical trials. Using regression analysis, it provides an estimate of the treatment effect on survival after adjustment for other explanatory variables.
  • Credibility : The quality of being believable or trustworthy.
  • Criterion-related validity : The extent to which the measurement correlates with an external criterion of the phenomenon under study. A typical way to achieve this is in relation to the extent to which a score on a personality test can predict future performance or behavior (predictive validity). Another way involves correlating test scores with another established test that also measures the same personality characteristic (concurrent validity). This is a test that standardizes a measuring instrument.
  • Criterion sampling : A procedure whereby cases are chosen that meet the same criterion. It is also called judgment sampling and is used in many program or practice evaluation studies. This is a nonprobability-based sample.
  • Criterion variable : This is the change variable that is impacted by independent variables in the study. It is also called the main dependent variable.
  • Critical appraisal skills : The ability of assessing and interpreting research or evidence by systematically considering its validity, results, and relevance.
  • Critical thinking : This involves evaluating the various dimensions of a research phenomenon (e.g., study questions, assumptions, methods used, etc.). One then synthesizes, assesses, and analyzes these and appraises their interrelatedness.
  • Critical value : A value taken from a statistical table. It serves as the criterion for determining whether the corresponding data-based statistic is large enough to be considered as evidence against the null hypothesis.
  • Cross break table : See contingency table.
  • Cross-population generalizability : The ability to generalize from findings about one group, population, or setting to other groups, populations, or settings.
  • Cross-population sampling : The process through which a group of representative individuals is selected from multiple populations for the purpose of statistical analyses.
  • Cross-sectional study : The observation of a defined population at a single point in time or time interval. Exposure and outcomes are determined simultaneously.
  • Crossover study, design : The administration of two or more experimental therapies, one after the other, in a specified or random order to the same group of participants.
  • Culture : A set of learned beliefs, mores, values, and behaviors shared by members of a society.
  • Curiosity : An emotion that is said to cause natural inquisitive behavior, such as exploration, investigation, and learning. All research has the scientific tenet of a driving curiosity.
  • Current population survey : A snapshot assessment of a trend in the population, for example, a statistical survey conducted by the United States Census Bureau for the Bureau of Labor Statistics (BLS). The BLS uses these data to provide a monthly report on the employment situation, which reports estimates of the number of unemployed people in the United States.
  • Cutoff scores : The lines dividing one diagnostic benchmark from another. For instance, an IQ score of 49 would be part of a diagnosis for moderate retardation, while 50 would signify mild retardation.
  • Data : A collection of information and facts from which conclusions may be drawn. This may consist of numbers, words, or images, particularly as measurements or observations of a set of events, behaviors, or variables.
  • Data analysis : Reducing empirical data in forms to be better understood and interpreted.
  • Data collection : A systematic process of gathering information to be studied. This process should be checkable and verifiable.
  • Data encryption : A method of securing transmitted data through encoding to ensure that sensitive and confidential information remains safeguarded and private.
  • Data mining : This is the process of revisiting or digging back into collected data for further analysis.
  • Data points : Individual points where a value can be plotted on a line, a bar, or a pie chart. These are individual pieces of information from which decisions about data sets may be made.
  • Data recording method : The method of the preservation, collection, or registration of individual elements of information in a study.
  • Data sets : Collections of information, usually presented in numerical or tabular forms.
  • Data sources : Documents, people, and observations that provide information for assessment, research, or evaluation.
  • Database, computer : A collection of information stored on a computer storage medium in a common pool for access on an as-needed basis.
  • Debriefing sessions : Typically a one-time, semistructured conversation with an individual who has just participated in a study. This provides the individual with information he or she might need to achieve closure, especially if the study has caused emotional or physical stress.
  • Deception : The act of convincing another to believe information that is not true, or not the whole truth, as in certain types of half-truths. This is not allowed in most research, and when used, it must not cause harm and must usually be approved by an institutional review board (IRB).
  • Decision analysis, clinical decision analysis : The application of explicit, quantitative methods that quantify prognoses, treatment effects, and participant values in order to analyze a decision under conditions of uncertainty.
  • Deductive logic : See deductive reasoning.
  • Deductive reasoning : Systematic consideration that begins with a general principle and concludes with a specific instance that demonstrates the general principle; it is the application of a general rule to a particular case. It is typical of quantitative research approaches.
  • Degrees of freedom (df) : The number of values in the final calculation of a statistic that are free to vary.
  • Demographic data : Background information relating to statistical characteristics of human populations (e.g., age, gender, race, income, etc.). These data are typically used as independent variables to subgroup variables to compare cohorts in data analysis.
  • Demoralization, threat : A potential issue in controlled experiments in which those in the control or comparison group become resentful of not receiving the experimental treatment. Alternatively, the experimental group could be resentful of the control group if the experimental group perceives its treatment as stressful or inferior. This may be a threat to the internal validity of the findings of a study.
  • Dependability : The importance of the researcher accounting for, or describing, the changing contexts and circumstances of a study. Dependability may be enhanced by altering the research design as new findings emerge during data collection. Dependability is analogous to reliability, that is, the consistency of observing the same finding under similar circumstances. This is fundamental to qualitative research.
  • Dependent t-tests : A data analysis procedure that assesses whether the means of two related groups are statistically different from each other, for example, one group's mean score at time one compared with the same group's mean score at time two.
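The statistic in the entry above can be computed by hand: take the within-pair differences, then divide their mean by the standard error of that mean. A minimal Python sketch with invented pre/post scores for the same five participants:

```python
import math
import statistics

# Hypothetical pre/post scores for the same five participants
pre  = [10, 12, 11, 14, 13]
post = [11, 14, 14, 18, 18]

diffs = [b - a for a, b in zip(pre, post)]   # [1, 2, 3, 4, 5]
n = len(diffs)
mean_d = statistics.mean(diffs)              # 3.0
se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
t = mean_d / se                              # paired (dependent) t statistic
print(round(t, 4))                           # 4.2426, with df = n - 1 = 4
```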
  • Dependent variable : What is measured in an experiment and what is affected during the experiment. The dependent, change, or criterion variable responds to the independent variable. It is called dependent because it depends on the independent variable. In outcome studies, the dependent variable is the outcome measure, and the independent variable is the treatment.
  • Descriptive case-level design : A design in which the major research emphasis is on describing the characteristics of one individual or case. It is often used in qualitative research.
  • Descriptive-comparative question : An inquiry that asks, “Is group A different from group B?”
  • Descriptive group-level research, design : A design in which the major emphasis is on describing the characteristics of groups of people, such as families, organizations, and communities. It is often used in qualitative research.
  • Descriptive research, study : Research that examines situations in depth as they are, identifying characteristics of observed phenomena and exploring possible associations among phenomena. These types of studies cannot determine causal relationships. This is a major classification of quantitative research.
  • Descriptive statistics : Numbers used to describe the basic features of sample data in a study. They provide simple summaries about the sample and its measures, for example, mean, median, mode, variance, or standard deviation. Descriptive statistics form the beginnings of most quantitative studies’ data analysis processes.
  • Diagram : A pictorial or graphic means of showing all the fields and tables in a database and how they are related.
  • Diary : A recording of information in the form of chronological events. It may include a logbook or pictorial, printed, or audiovisual notations. This is one of the main qualitative techniques used.
  • Dichotomy : Any splitting of a whole into two nonoverlapping parts.
  • Differential research participant selection : A sampling process in which the participants selected for a study are different from one another to begin with in some way that is significant for the study. This can be a problem if researchers cannot randomly assign participants to groups and instead are forced to use preexisting groups.
  • Diffusion of treatment, threat : This occurs when a comparison group learns about a research program from other program participants, keeping the control and experimental groups from remaining distinct. This may be a threat to the internal validity of a study.
  • Direct costs : The actual dollar costs associated with the operation of a program, including all operating and support costs noted on the program's budget.
  • Direct observation : A research technique in which a researcher watches and records behaviors or events as they actually occur.
  • Directional hypothesis : One type of hypothesis or educated guess about what the results of a study will be, predicting the way that the independent variable will affect the dependent variable.
  • Discreditable group : Those who possess qualities that are not acceptable to members of the dominant culture. This power dynamic is important to understand in conducting research with disempowered groups, for example, vulnerable populations, involuntary or captive samples, and so on.
  • Discrete variable : A variable that takes values from a finite or countable set, such as the number of days one stayed in a hospital in a year.
  • Discriminant validity : The degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts. This is a form of validity of measurement.
  • Discrimination : Unfair treatment of a person or group on the basis of prejudice or biases, either stated or implied.
  • Discussion of findings : The subsection near the end of a scientific paper or research report that pulls all of the information together. It typically uses previously collected literature to corroborate or refute the results found in a study.
  • Disinterestedness : Freedom from bias or selfish motives. A researcher should cultivate this principle.
  • Disproportionate stratified sampling : A sampling method in which the proportion of the groups in the sample purposely does not match the proportions in the population. A typical use is oversampling of certain subpopulations (e.g., African Americans) to allow separate statistical analysis of adequate precision. This is a probability-based sample in which selection probabilities differ across strata.
  • Dissemination of research findings : Publishing the results of a research study to public forums to impart knowledge.
  • Divergence : The evolution of increasing difference between lineages in one or more factors or characteristics.
  • Diversity : The variation in society of culture and other factors, including differences in age, race, gender, disability, physical abilities, sexual orientation, religion, and so on.
  • Double-barreled questions : A question that lends itself to two concurrent possible responses, for example, “Do you plan to retire early, and do you have good health?” Surveys must avoid such questions because, when they are answered, it is not known which part of the question is being responded to.
  • Double-blind experiment : An especially strict way of conducting a research experiment, usually on human participants, in an attempt to eliminate participant bias on the part of both experimental participants and the experimenters. In a double-blind experiment, neither the participants nor the researchers know who belongs to the control group and who belongs to the experimental group. Only after all the data have been recorded (and, in some cases, analyzed) do the researchers learn which individuals are which.
  • Double standard : Refers to one class of entities being treated differently from another class of entities and implies an unfair or unjustified differentiation. It sometimes exists in comparison-treatment group studies.
  • Dropout rates : The number of research participants who quit participating in the study before the study is complete, sometimes referred to as attrition.
  • Ecological research perspective : A perspective of research that focuses on interrelational transactions between systems and stresses that all existing elements within an ecosystem play an equal role in maintaining the balance of the whole.
  • Ecological survey : A collection of information based on aggregate data for some population as it exists at some point or points in time; it's used to investigate the relationship of an exposure to a known or presumed risk factor for a specified outcome.
  • Economic efficiency : The net social value of a project or program, estimated by subtracting the discounted social costs from the discounted social benefits. It is typically used in program evaluation studies.
  • Effect size (ES) : An index used to indicate the magnitude of an obtained change, difference, or relationship, such as between time one and time two observations. Cohen's d is the most commonly used statistic to compare mean score differences. By Cohen's conventions, d ≈ 0.2 is a small effect, d ≈ 0.5 a medium effect, and d ≈ 0.8 a large effect.
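Cohen's d divides the mean difference between two groups by their pooled standard deviation. A minimal Python sketch with illustrative group scores:

```python
import math
import statistics

# Illustrative scores for two independent groups
group1 = [8, 10, 12]
group2 = [6, 8, 10]

m1, m2 = statistics.mean(group1), statistics.mean(group2)          # 10, 8
v1, v2 = statistics.variance(group1), statistics.variance(group2)  # 4, 4
n1, n2 = len(group1), len(group2)

# Pooled standard deviation
sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))    # 2.0

d = (m1 - m2) / sp
print(d)   # 1.0 -- a large effect by Cohen's conventions
```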
  • Effectiveness study : A study evaluating a treatment, conducted under clinically representative, or real-life, conditions. It is usually used in the later stages of evaluating a new intervention.
  • Efficacy, drug or treatment : The maximum ability of a drug or treatment to produce a result regardless of dosage. The Food and Drug Administration (FDA) mandates that clinical trials go through three sequential phases of efficacy testing prior to their administration.
  • Efficacy study : A study conducted under conditions of maximum experimental control (e.g., carefully screened clients, highly trained therapists using detailed treatment manuals).
  • Efficiency : The degree to which outcomes are achieved in terms of input and resources allocated. Efficiency is a measure of performance in terms of which management may set objectives and plan schedules and for which staff members may be held accountable. It is typically used in program evaluation studies.
  • Electronic database : Files of information for research access that have been stored through electronic methods, such as a CD, DVD, or jump drive or in an online format.
  • Electronic survey : A method of gathering information from respondents over the Internet, such as through e-mail or instant messages, or by using LISTSERVs to contact study respondents.
  • Element : One of the simplest or essential parts or principles of which anything consists or upon which the constitution or fundamental powers of anything are based.
  • Empathy : The capacity to truly recognize or understand another's state of mind or emotion that is seen or sensed.
  • Empirical : Based on observable and checkable evidence. It relies on the replication of findings and systematic observations gathered using certain standards of evidence.
  • Empiricism : A pursuit of knowledge purely through experience, especially by means of observation and also by experimentation.
  • Encryption : Any process for disguising information to protect it from unauthorized viewing or use. It is often used for files transmitted over the Internet to safeguard confidentiality.
  • Environmental factor : A factor in the surroundings of a program that may have an effect on it and on the intended outcomes, for example, the demographics of a community.
  • Environmental scan : An analysis of trends and key factors in an organization's environment that may have an impact on it.
  • Epidemiology : The study of the health of populations and communities, not just particular individuals. That branch of medicine or public health that deals with the study of the causes, distribution, and control of disease in communities or populations.
  • Epistemology : A branch of philosophy that investigates the origin, nature, methods, and limits of human knowledge.
  • Equipoise : A state of uncertainty where a person believes it is equally likely that either of two treatment options is better.
  • Equivalent forms method : A way of checking consistency by correlating scores on similar forms of an instrument taken by the same participant. It is also referred to as alternative-forms reliability and equivalent-forms reliability. This is a test that standardizes a measuring instrument.
  • Error of central tendency : A rating error occurring when the rater displays a propensity to assign only average ratings to all individuals being assessed.
  • Error of measurement : The difference between the actual value of a quantity and the value obtained by a measurement. Repeating the measurement may improve (reduce) the random error (caused by the accuracy limit of the measuring instrument) but not the systematic error (caused by incorrect calibration of the measuring instrument).
  • Estimation, statistical : Estimation is the process by which sample data are used to indicate the value of an unknown quantity in a population. Results of estimation can be expressed as a single value, known as a point estimate, or as a range of values, known as a confidence interval (CI).
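The two forms of estimate named in the entry can be sketched side by side, using made-up summary statistics and the normal approximation for a 95% confidence interval:

```python
import math

# Hypothetical summary statistics from a sample
mean, s, n = 50.0, 10.0, 100     # point estimate, sample SD, sample size

se = s / math.sqrt(n)            # standard error = 1.0
z = 1.96                         # critical value for a 95% CI (normal approximation)
ci = (mean - z * se, mean + z * se)
print(mean, ci)                  # 50.0 (48.04, 51.96)
```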
  • Ethical issue : A question concerning what is moral or right. All research must be scrutinized for the safeguarding of ethics of its participants and experimenters who conduct the research.
  • Ethnicity : Identifying characteristics shared by a group, such as culture, custom, race, language, religion, or other social distinctions.
  • Ethnocentrism : The tendency to look at the world primarily from the perspective of one's own culture or heritage.
  • Ethnography, ethnographic research : Studying cultures to learn more about their interactions, values, meanings, behaviors, language, and worldview. This is one of the main methods of qualitative research.
  • Evaluation : A systematic assessment of merit, worth, and significance of something or someone based on a set of standards.
  • Evaluation assessment : A study of the options for conducting an evaluation of a program, including the purpose, proposed methods, stakeholders, and dissemination plan. It is also called an evaluability assessment.
  • Evaluation research : A systematic inquiry to describe or assess the intervention impact of a specific program or intervention on individuals by determining its activities and outcomes. These can be evaluations of practice or programs.
  • Event history analysis : This is also called survival analysis, duration analysis, hazard model analysis, failure–time analysis, or transition analysis. It is an umbrella term for a set of procedures for time series analysis and is used in studies where the phenomenon of interest is duration-to-event, where events are discrete occurrences. It computes data units with time, changes, and factors influencing them. It can be parametric, semiparametric, or nonparametric.
  • Event rate : The proportion of research participants in a group in whom the event is observed. Thus, if out of 100 participants, the event is observed in 27, the event rate is 0.27. Control event rate (CER) and experimental event rate (EER) are used to refer to this in control and experimental groups of participants, respectively. The participant expected event rate (PEER) refers to the rate of events we would expect in a participant who received no treatment or conventional treatment.
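The arithmetic is straightforward; a minimal Python sketch with hypothetical trial counts (the 27/100 figure mirrors the entry's example):

```python
# Hypothetical counts from a two-arm trial
control_events, control_n = 30, 100
experimental_events, experimental_n = 27, 100

cer = control_events / control_n            # control event rate = 0.3
eer = experimental_events / experimental_n  # experimental event rate = 0.27

arr = cer - eer                             # absolute risk reduction
print(cer, eer, round(arr, 2))              # 0.3 0.27 0.03
```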
  • Evidence-based health care : Work that extends the application of the principles of evidence-based medicine (EBM) to all professions associated with health care, including purchasing and management.
  • Evidence-based medicine (EBM) : The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual participants. The practice of evidence-based medicine requires the integration of individual clinical expertise with the best available external clinical evidence from systematic research and the patient's or client's unique values and circumstances.
  • Evidence-based practice (EBP) : See evidence-based medicine, except applied to areas distinct from health care, such as social work, public policy, administration, supervision, or community practice. EBP may be applied to all areas of practice, micro through macro.
  • Ex ante evaluation : An evaluation that is conducted before a program is implemented. It is typically used in program evaluation studies.
  • Ex post evaluation : An evaluation conducted after a program has been implemented. It is typically used in program evaluation studies.
  • Exchangeability : A statistical concept in which two factors may be changed without affecting the results. The factors may be, for example, the throws of a coin. Such throws are exchangeable if the order in which they occur is irrelevant to the probabilities of the possible outcomes.
  • Executive summary : A brief overview of the project's purpose, scope, methods, results, conclusions, findings, and recommendations. It is typically placed at the beginning of the research study or technical research report.
  • Existence question : An inquiry about whether something exists or does not exist.
  • Existing knowledge : Information that already is available about a topic. This can be drawn upon for literature reviews, questioned, and possibly reinforced or refuted by research. It is also referred to as extant knowledge.
  • Expected frequency : The frequency of an outcome predicted by theory or under a null hypothesis. It is assumed to hold unless statistical evidence in the form of a hypothesis test indicates otherwise.
  • Experience : The accumulation of knowledge or skill over time, which is used to advance an understanding of something.
  • Experience question : An inquiry by a researcher in a study to find out what sorts of things an individual is doing or has done, which could shape his or her answers to other questions.
  • Experiment : A research study in which one or more independent variables is systematically varied by the researcher to determine its effects on dependent variables.
  • Experimental design : A research method that tests the relationship between independent (treatment) and dependent (outcome) variables. In nomothetic research, a true experimental study must meet all of the following criteria: (1) randomization, (2) a manipulated treatment condition (X), (3) a comparison or control group that does not receive any treatment condition, and (4) a specification of hypotheses.
  • Experimental event rate (EER) : The proportion of participants in the experimental treatment group who are observed to experience the outcome of interest.
  • Experimental group : The group in a research study that receives the treatment (or method) of special interest in the study.
  • Experimental rigor : This describes a well-designed research study conducted in a methodologically sound way, which follows proper scientific and ethical protocols.
  • Experimental variable : The factor or treatment condition that is manipulated (systematically altered) in an intervention study by the researcher.
  • Experimenter bias : See experimenter expectancy, threat.
  • Experimenter expectancy, threat : When the biases of individuals conducting a study influence the outcomes. It is also referred to as experimenter bias. This is a threat to the internal and external validity of the findings of a study.
  • Explanatory design : A research design in which the researcher attempts to identify cause-and-effect relationships.
  • Exploration : To study in depth a phenomenon, event, population, intervention, interaction, culture, and so on, in order to acquire more knowledge about it.
  • Exploratory case-level design : A study that examines in detail one intervention, providing information about how well that treatment intervention is working and indicating whether a client's problem has been resolved. When studies with this design are compiled, success of a program can be gauged.
  • Exploratory design : A research design in which the researcher investigates an area in which little information exists. The aim is to gain more information before doing more thorough research. Often, this design helps researchers to learn to ask the right questions as an outcome of their conduct. It is sometimes referred to as a level one study.
  • Exploratory factor analysis (EFA) : A statistical technique that explores the underlying factor structure of a set of observed variables without imposing a preconceived structure on the outcome. It is often used in developing an instrument or measurement scale and is one of the two main forms of factor analysis, the other being confirmatory factor analysis.
  • Exploratory group-level research design : The arrangement of a type of study that explores a research question about which little is already known in order to uncover generalizations and develop hypotheses that can be tested later with more precise and more complex designs and data-gathering techniques. In this type of design, the effects of one intervention on a group are observed and recorded.
  • Extant knowledge : See existing knowledge.
  • External criticism : Feedback received from a colleague who is uninvolved in a research project about how to make the project better.
  • External generalizability, threat : The extent to which the results of an evaluation or research study can be generalized to other times, other people, other treatments, and other places. This is a threat to the external validity of a study.
  • External validity : The extent to which the data collected from a sample can be generalized to the entire population from which the sample was drawn.
  • Extraneous event, threat : A happening that occurs during the course of a study that can affect the responses of participants, independent of the experiment. This is a threat to the internal validity of a study.
  • Extraneous question : An inquiry that is not pertinent or relevant to what is being studied. This sometimes happens when a researcher gets sidetracked in a research interview.
  • Extraneous variable : A factor or condition, other than that being studied, that makes possible an alternative explanation of results; an uncontrolled or rival variable.
  • F-test : A statistical test of the equality of the variances of two or more populations. The test compares the differences between groups and within groups over time. It is used in analysis of variance (ANOVA) inferential tests.
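In a one-way ANOVA, the F statistic is the ratio of between-group variance to within-group variance. A minimal Python sketch with three invented groups:

```python
import statistics

# Three illustrative treatment groups
groups = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
all_scores = [x for g in groups for x in g]
grand_mean = statistics.mean(all_scores)          # 3

# Between-groups and within-groups sums of squares
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)  # 6
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)     # 6

df_between = len(groups) - 1                      # 2
df_within = len(all_scores) - len(groups)         # 6

f = (ss_between / df_between) / (ss_within / df_within)
print(f)   # 3.0
```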
  • Face-to-face interview : A method of gathering data in person, with the researcher asking the participant specific study questions directly.
  • Face validity : The appearance by simple visual inspection that a measurement device assesses what it is supposed to measure.
  • Factor analysis : A multivariate technique that analyzes the underlying structure of a data set. It is useful in explaining observed relationships among a large number of variables in terms of simpler relations. It is also used to develop a new scale or instrument. Two main forms are exploratory and confirmatory.
  • Factorial design : An experimental design involving two or more independent variables, at least one of which is manipulated in order to study the effects of the variables individually, and in interaction with each other, upon a dependent variable.
  • Factorial validity : The degree to which the measure of a construct conforms to the theoretical definition of that construct in a scale or instrument being used or analyzed. This is a test that standardizes a measuring instrument.
  • Fail-safe : A device or feature that prevents total failure in the event of a fault occurring in a study.
  • False positive : A result that is erroneously positive when a situation is normal; for example, a pregnancy kit is used and its results are positive, but the woman is not pregnant.
  • Feedback : The return of information about the results of a process.
  • Feeling-type question : An inquiry by a researcher that seeks information about a person's attitudes, beliefs, motives, and emotions. Generally, feeling-type questions will generate more in-depth responses. They are typically used in qualitative research.
  • Feminist research : A body of research oriented to study the social conditions and concerns of women in society from a female gender perspective, usually conducted by women.
  • Field diary : A personal statement of a researcher's opinions about people and events he or she comes in contact with during research, written in a logbook type of document. This helps qualitative researchers remember information gathered as they observe or interact with research participants.
  • Field jotting : Quick observational notes taken by a researcher; typically used in qualitative studies.
  • Field log : A running account of how a researcher plans to, and actually does, spend his or her time in the field.
  • Field notes : The information researchers write down about what they observe and think about in the course of a study, especially in qualitative research studies.
  • Field research : A study conducted in a natural, real-world setting (in situ), as opposed to research conducted in a laboratory or other contrived setting.
  • Figure : Data represented in a pictorial form, graph, or visual model.
  • Filter question : A question in a survey to ensure that respondents meet the required criteria for a subsequent question (or questions) in the survey.
  • Findings : The outcome of a research study; literally, what was found.
  • First-level coding : A behavior coding method that employs a systematic analysis of verbal interactions between an interviewer and a respondent. Its purpose is to identify overt problems by quantifying interviewer and respondent behaviors that connote difficulties in both asking and answering survey questions. Typically, this is accomplished by limiting the analysis to the first level of interviewer and respondent interactions because major problems present themselves when the question is first asked and when respondents initially react or respond.
  • Firsthand data : Information collected directly from a source rather than, for instance, a record of the source.
  • Fisher's Student t-test : A comparison of mean scores from independent or paired samples that follow a Student's t distribution.
  • Fixed-choice question : A form of inquiry that can normally be answered using a simple yes or no, a specific simple piece of information, or a selection from predetermined multiple choices. It is also called a closed-ended question.
  • Flexibility : The extent to which, and the rate at which, adjustments to changed circumstances are possible in a research study.
  • Flowchart : A graphic representation of the major steps in a process, system, or relationship between events.
  • Focus group : An organized discussion with specifically selected individuals to gain unique information about a particular topic. This is one of the main qualitative techniques used.
  • Focused interview : A discussion with research participants during data collection, organized around several predetermined questions or topics, but providing some flexibility in the sequencing of the questions, and without a predetermined set of response categories or specific data elements to be obtained.
  • Follow-up study : An investigation conducted to determine patterns of findings or results after some period of time.
  • Foreshadowed problem : The anticipation of what the research problems in an investigation will be, typically, before they arise.
  • Formative evaluation : An assessment designed to provide feedback and advice for improving a program. It is conducted to adjust and enhance interventions and is typically used in program evaluation studies.
  • Frame of reference : A set of assumptions, ideas, and standards that form a viewpoint from which ideas may be evaluated or researched, for example, philosophical, religious, empirical, and so on.
  • Frequency distribution : A method of showing actual results, often presented as lists ordered by quantity (high to low), indicating the number of times each value appears for a sample. It is a descriptive statistic.
  • Frequency polygon : A graphical device used for understanding the shapes of distributions. It serves the same purpose as a histogram but is especially helpful in comparing sets of data. A frequency polygon is also a good choice for displaying a cumulative frequency distribution.
  • Frequency recording : An exact count of how many times a specific behavior occurs.
  • Frequency table : This is a way of summarizing a data set. It records how often each value (or set of values) of the variable in question occurs. It may also include the percentages that fall into each category. It summarizes categorical, nominal, and ordinal data.
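A frequency table for nominal data can be built in a few lines; this Python sketch uses invented survey responses:

```python
from collections import Counter

# Illustrative nominal data: preferred study method
responses = ["video", "text", "video", "quiz", "video", "text"]

counts = Counter(responses)
total = len(responses)

# Frequency table with counts and percentages
for value, freq in counts.most_common():
    print(f"{value:6s} {freq:3d} {100 * freq / total:6.1f}%")
# video    3   50.0%
# text     2   33.3%
# quiz     1   16.7%
```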
  • Funneling technique : Conducting an interview beginning with broad general questions and moving to narrower, more specific, and possibly more sensitive, questions.
  • Futuristic research : Also called futures research, this refers to a multidisciplinary branch of operations research whose aim is to conduct long-range planning based on forecasting from mathematical models, cross-disciplinary treatment of a subject matter, systematic use of expert judgments or opinions, and a systems analytical approach.
  • Gain score : The difference between the pretest and posttest scores of a measure in a study. It is also known as a change score.
  • Galileo : An online search engine used primarily for academic searches in libraries.
  • Game theory : A branch of applied mathematics that is used in the social sciences. It attempts to mathematically capture behavior in strategic situations in which an individual's success in making choices depends on the choices of others.
  • Garbage in, garbage out : A concept borrowed from computer information processing used primarily to call attention to the fact that computers will unquestioningly process the most nonsensical of input data and produce nonsensical output. It was most popular in the early days of computing but still applies today when computers can yield large amounts of information in a short time.
  • Gatekeepers : People who can help individuals to form connections with larger groups of community members.
  • Gender : The socially constructed roles, behaviors, activities, and attributes that a particular society considers appropriate for men or women.
  • General reference : A source that a researcher uses to identify more specific references (e.g., indexes, abstracts).
  • Generalizability, threat : The extent to which researchers can apply results from the experimental sample to the accessible population. This may be a threat to the external validity of a study's findings.
  • Generalizing : This refers to whether the method or findings of the study can be used or replicated in another study.
  • Genuineness : The quality of being truthful to one's self and others. This is an important ethical principle of researchers and evaluators.
  • Geographic location : The physical place or locale of something.
  • Goal : A broad statement of aims or intended outcomes for a program typically used in program evaluation studies.
  • Goal Attainment Scale (GAS) : A scale that measures the achievement of treatment or intervention goals. When the item ratings are totaled, they produce a Goal Attainment Score, allowing one to track the progress of people in treatment.
  • Gold standard : A colloquial phrase used in research to describe the method, procedure, or measurement that is accepted as being the best available, against which data should be compared. It is often referred to as the benchmark.
  • Goodness-of-fit : A test of a statistical model that describes how well it fits a set of observations. Measures of goodness-of-fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, for example, to test for normality of residuals, to test whether two samples are drawn from identical distributions as with the Kolmogorov-Smirnov test, or to test whether outcome frequencies follow a specified distribution as with Pearson's Chi-square test.
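Pearson's chi-square statistic mentioned above summarizes the discrepancy between observed and expected frequencies directly; a minimal Python sketch with made-up counts:

```python
# Observed counts in three categories vs. a uniform null hypothesis
observed = [18, 22, 20]
expected = [20, 20, 20]   # equal frequencies expected under the null

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)   # 0.4, compared against a chi-square distribution with k - 1 = 2 df
```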
  • Google Scholar : An online search engine that specifically searches scholarly articles and books.
  • Government report : Information put out by public agencies, such as the National Institutes of Health (NIH).
  • Grade equivalent score : A score that indicates the grade level for which a particular performance (score) is typical.
  • Graph : A diagram displaying data, in particular showing the relationships or patterns between two or more variables pictorially.
  • Gray literature : This refers to the kind of research material not published in easily accessible journals or databases. It includes things like conference proceedings, abstracts, or research presented at conferences, newsletters, unpublished theses, and so on. It can be in both print and, increasingly, electronic formats.
  • Grounded theory : The systematic generation of data-based theory to develop explanations, hypotheses, concepts, typologies, meanings, and descriptions of phenomena. This is one of the main methods and requirements of qualitative research.
  • Group-administered questionnaire : A written survey generally administered to a sample of respondents in a group setting (for instance, to a focus group or students in a class), generally guaranteeing a higher response rate from a very specific group of people.
  • Group research design : A way of organizing a study that evaluates the effects of an intervention by comparing the results obtained from a group of clients who received an experimental treatment to a group who received no treatment, an alternative treatment, or a placebo treatment.
  • Grouped frequency distribution : An arrangement of scores, from highest to lowest, in which scores are grouped together into equally sized ranges called class intervals.
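A grouped frequency distribution can be sketched in a few lines of Python (the function name, scores, and interval width are illustrative):

```python
from collections import Counter

def grouped_frequency(scores, low, width):
    """Count scores falling into equal-width class intervals starting at `low`."""
    counts = Counter((score - low) // width for score in scores)
    return {(low + i * width, low + (i + 1) * width - 1): counts[i]
            for i in sorted(counts)}

scores = [62, 67, 71, 74, 75, 78, 83, 85, 91]
print(grouped_frequency(scores, 60, 10))
# {(60, 69): 2, (70, 79): 4, (80, 89): 2, (90, 99): 1}
```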
  • Grouping variable : A characteristic of a sample that is used to categorize participants together. For instance, income is a variable that individuals might be grouped around (high, medium, and low).
  • Guesstimate : A scientific hunch about any part of a research study based empirically on prior experience, literature, best practices, data patterns, observations during the study, clinical significance, anecdotal data, and so on. It is a morphed term combining guess and estimate.
  • Guided discussion : An interview in which, rather than asking direct or specific questions, a researcher asks more general questions to get a sense of what the respondent knows.
  • Guttman scale : A questionnaire that presents a number of items to which the person is requested to agree or not agree, typically done in a yes-or-no dichotomous format. The intent of the scale is that the person will agree with all statements up to a point and then will stop agreeing. The scale may be used to determine how extreme a view is, with successive statements showing increasingly extremist positions.
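The cumulative "agree up to a point, then stop agreeing" pattern described above can be checked programmatically; a small sketch, assuming items are ordered from least to most extreme and coded 1 = agree, 0 = disagree:

```python
def is_guttman_consistent(responses):
    """A yes/no response vector fits the Guttman pattern if, once the
    respondent stops agreeing, no later (more extreme) item is endorsed."""
    seen_no = False
    for r in responses:
        if r == 0:
            seen_no = True
        elif seen_no:        # a "yes" after a "no" breaks the cumulative pattern
            return False
    return True

print(is_guttman_consistent([1, 1, 1, 0, 0]))  # True
print(is_guttman_consistent([1, 0, 1, 0, 0]))  # False
```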
  • Halo effect, threat : The tendency to consider a person good (or bad) in one category and then to make a similar evaluation in other categories. For instance, if someone is physically attractive, one might also assume (truly or falsely) that the person is also smart.
  • Hawthorne effect, threat : A positive effect of an intervention resulting from the participants’ knowledge that they are in some way receiving special attention.
  • Helsinki Declaration of 1964 : A document containing a set of ethical principles for the world medical research community regarding human experimentation. It was developed in Helsinki by the World Medical Association (WMA) and is widely regarded as the cornerstone document of human participant research ethics. It has been revised six times, most recently in 2008.
  • Hermeneutics : The study of interpretation theory, as in interpreting the meaning of certain texts.
  • Heterogeneity : This occurs when there is more variation between the study results (in a systematic review) than would be expected to occur by chance alone.
  • Heterogeneity of irrelevancies : The theory that, even though there will be a number of random errors and variations in response sets, these inaccuracies will be different from each other and so can be considered to cancel each other out.
  • Heterogeneity, statistical : This is used two ways: (1) to describe the degree of variation in the effect estimates from a set of studies, or (2) to indicate the presence of variability among studies beyond the amount expected due solely to the play of chance.
  • Hierarchical Linear Modeling (HLM) : A type of regression model used frequently for educational data sets. With this type of data, HLM is often used, as it takes the issue of correlated errors into consideration and provides more realistic and conservative statistical testing. HLM considers sources of errors more rigorously than Ordinary Least Squares (OLS) regression testing.
  • Hierarchy of evidence : The relative authority of various types of research information. Although there is no single, universally accepted hierarchy of evidence, there is broad agreement on the relative strength of the principal types of research. For example, in determining the effects of an intervention, randomized controlled trials (RCTs) rank above quasi-experiments, while expert opinion and anecdotal experience are ranked even lower. Some evidence hierarchies place systematic reviews and meta-analyses above randomized controlled trials because these often combine data from multiple randomized controlled trial studies and possibly from other study types as well.
  • Histogram : A graphic representation, consisting of bars or rectangles, of the scores in a distribution; the height of each indicates the frequency of each score or group of scores.
  • Historical account : Using information from the past to research the phenomena of study.
  • Historical-comparative research : Systematically studying past questions and events using methods of research or evaluation to inform possible outcomes and answers to current questions and events.
  • Historical data : Information about events that happened in the past or research results that were collected in the past. These are deemed secondary data.
  • Historical research : The systematic collection and objective evaluation of data related to past occurrences to determine causes, effects, or trends of those events that may help explain present events and anticipate future events.
  • History, threat : A specific event occurring between the first and second observation that may impact the dependent variable score change. This may be a threat to the internal validity of the findings of a study.
  • Holistic approach : Seeking patterns that provide an overall understanding of the evaluation data, including and integrating the perspectives of different stakeholders. It is typically used in program evaluation studies.
  • Home interview : A series of questions asked to a research participant in the place where he or she resides rather than in an office or laboratory.
  • Home page : The main or first page of a website, typically with hyperlinks to the other pages on the site.
  • Homogeneity : A state or quality of being relatively similar or comparable in kind or nature. It refers to data sets, variability, samples, or statistical information.
  • Human participant research : Studies involving people. This type of study is an important part of social science research and has particular ethical requirements guiding its conduct.
  • Hypothesis : A tentative, testable assertion regarding the occurrence of certain behaviors or events; a prediction of study outcomes. It is used to determine how independent and dependent variables can be tested and can be written in either null or directional form. It is based on literature, theory, an educated guess, or observation of a phenomenon. It forms the basis of experiments designed to establish plausibility, association, prediction, or causality.
  • Hypothesis testing : A statistical procedure that involves stating something to be tested, collecting data, and making a decision as to whether the statement should be rejected or retained.
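A common concrete instance of hypothesis testing is the one-sample t statistic; a minimal sketch (the data and hypothesized mean are invented):

```python
import math
import statistics

def one_sample_t(data, mu0):
    """t = (sample mean - hypothesized mean) / (s / sqrt(n))."""
    n = len(data)
    return (statistics.mean(data) - mu0) / (statistics.stdev(data) / math.sqrt(n))

# Testing H0: population mean = 5, given a small sample
t = one_sample_t([5, 6, 7, 8, 9], mu0=5)
print(round(t, 4))  # 2.8284
```

The computed t would be compared with a critical value from the t distribution (with n - 1 degrees of freedom) at the chosen significance level.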
  • Identifying information : Typically demographic or background data that includes an individual's identity, such as last name, address, social security number, or detailed family history. Researchers must be ethically mindful to keep this information confidential.
  • Idiographic research : Studies concentrating on specific cases and the unique traits or functioning of individuals rather than on broad generalizations about human behavior.
  • Idiosyncratic variation : Unusual, unexplained, or unexpected individual results in a study that are different than the general findings.
  • Impartiality : An inclination to weigh multiple views or opinions equally.
  • Implementation, threat : The possibility that results are due to variations in the way that the intervention is conducted in an intervention study. This may be a threat to the internal validity of the findings of a study.
  • Implied purpose : An intention behind the stated aim of the study. It is also called the unstated or implicit purpose.
  • In-depth interview : Using a structured, semistructured, or open-ended interview to collect data in a research study. These may include using standardized or nonstandardized questions, measurement scales, and so on. This is one of the main qualitative techniques used.
  • In-person survey : Asking study questions in a face-to-face manner with the respondent.
  • In situ : A naturally occurring situation or setting in a person's life. Research may be conducted in artificial settings, such as laboratories, or in situ, in participants’ natural environments.
  • In vitro : A Latin term meaning “in glass,” this refers to occurrences in the laboratory or outside the body.
  • In vivo : A Latin term meaning “in life,” this refers to occurrences or observations that take place in a participant's natural environment. In clinical medical studies, this refers to inside the body.
  • Incentive : A tangible or intangible reward designed to motivate persons or groups to participate as respondents or participants in a study.
  • Inception cohort : The group of research participants who are in the study at the beginning before any leave the study for any reason.
  • Incidence : The proportion of new cases of the target event or behavior in the population studied during a specified time interval.
  • Inclusion or exclusion criteria : These are the criteria that determine whether a person may or may not be allowed to participate in a research study. They include age, gender, disease type, concurrent disorders, treatment history, and so on.
  • Incremental effect : A small change noted that occurs over time in the outcomes during the course of a program or practice evaluation study.
  • Independent variable : The variable that affects or is presumed to affect the dependent variable under study and is included in the research design so that its effect can be determined. This is sometimes called the experimental, manipulated, or treatment variable, or in outcome studies the treatment or intervention. In a scientific experiment, you cannot have a dependent variable without an independent variable.
  • Indexing : The act of classifying information in order to make items easier to retrieve.
  • Indication : A medical term referring to a sign, symptom, or condition that leads to the recommendation of a treatment, test, or procedure.
  • Indicator : A measurable variable (or characteristic) that can be used to determine the degree of adherence to a standard or the level of quality achieved.
  • Indirect costs : Administrative expenses added to a grant's budget by the host organization. Indirect costs are used to support the operation of the host organization.
  • Inductive logic : Reasoning from the particular to the general, that is, from several similar studies to a theory.
  • Inductive or deductive theory-construction cycle : The way scientific knowledge is organized and gathered, with studies contributing to the creation of a theory and then that theory being applied and tested in later studies.
  • Inductive reasoning : The process of making inferences based on observed patterns or simple repetition.
  • Inferential statistics : Data analysis techniques for testing how likely it is that results based on a sample or samples are similar to results that would have been obtained for the entire population, for example, chi-square (χ²), t-, and F-tests.
  • Informal interview : A discussion used to gather data for a study. These are usually conducted by qualitative researchers. They do not involve any specific type of sequence of questioning, but resemble more the give-and-take of a casual conversation, to collect data in a more relaxed way.
  • Information retrieval : The science of searching for documents or data, for example, in PsycINFO or Google Scholar.
  • Information sharing : The reciprocal exchanging of data, as in reporting, publishing, or sharing a database.
  • Informed consent : The agreement of a person or his or her legally authorized representative to serve as a research participant with full knowledge of all anticipated risks and benefits of an experiment. It is a requirement of most human participant research ethics.
  • Institutional review board (IRB) : See IRB.
  • Instrument : Any device for systematically collecting data, such as a survey, test, protocol, observation, questionnaire, or interview schedule, and so on.
  • Instrument decay : Changes in measurement devices over time that may affect the results of a study.
  • Instrument development : To develop an instrument, scale, inventory, interview schedule, assessment tool, or way of measuring a phenomenon and test its utility for use by others.
  • Instrumentation : Measuring devices used in collecting data in a study. Many social research studies include multiple measures of various concepts.
  • Instrumentation error, threat : When changes in obtained measurement are due to the instrument calibration or changes in observers, judges, or interviewers (e.g., greater sensitivity with practice or less observer attentiveness after repeated observations). This may be a threat to the internal validity of the findings of a study.
  • Intangible costs : Costs that are not easily expressed in actual dollar amounts, for example, one's quality of life.
  • Intention-to-treat analysis : A method of evaluation for randomized trials in which all participants randomly assigned to one of the treatments are analyzed together, regardless of whether they completed or received that treatment, in order to preserve randomization.
  • Interaction with treatment effects, threat : The extent to which the intervention differentially affects the experimental participants based on their characteristics.
  • Interactive voice response : A computerized system that allows a person, typically a telephone caller, to select an option from an audio menu and otherwise interface with a computer system.
  • Intercoder reliability : The extent to which two or more coders agree on the coding of content variables.
  • Interjudge reliability : The consistency of a rating between two or more independent scorers, raters, or observers. It is also referred to as interrater reliability. This is a test that standardizes a measurement instrument.
  • Interlibrary loan services (ILL) : Services whereby a user of one library can borrow books or receive photocopies of documents that are owned by another library.
  • Internal consistency method : A procedure for estimating the reliability of scores using one administration of the instrument, such as Cronbach's alpha or split-half reliability. These tests help to standardize a measurement instrument.
  • Internal consistency reliability : The extent to which all items in a scale or test measure the same concept. This is a test that standardizes a measuring instrument.
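Cronbach's alpha, named under the internal consistency method above, can be computed by hand; a sketch assuming a small respondents-by-items score matrix (the data are invented):

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores[0])
    items = list(zip(*item_scores))                     # one tuple per item
    item_var = sum(statistics.pvariance(i) for i in items)
    total_var = statistics.pvariance([sum(r) for r in item_scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Three items that rise and fall together -> a perfectly consistent scale
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(cronbach_alpha(data))  # 1.0
```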
  • Internal validity : The degree to which observed differences on the dependent variable are directly related to the independent variable, not to some other (uncontrolled) variable.
  • Interobserver reliability : The level of agreement between two or more people viewing the same activity or setting.
  • Interpretation of results : A researcher's explanation and description of what the findings of a study mean. Researchers typically use logic, other findings, literature, and empirical justification to interpret their results.
  • Interpretative question : An inquiry that has more than one correct answer.
  • Interpretative research : Studies in which researchers attempt to understand phenomena through accessing the meanings research participants assign to them. Interpretive methods of research start from the position that knowledge of reality, including the domain of human action, may be, at least in part, a social construction by human actors and that this applies equally to researchers.
  • Interpreter : A person who mediates between speakers of different languages. It is important to have qualified interpreters if one is working with bilingual clients who speak other languages in research studies.
  • Interpretive validity check : Inspecting accuracy in interpreting what is occurring to a participant and the degree to which the participant's views, thoughts, feelings, intentions, and experiences are accurately understood by the researcher. It is often used in qualitative research.
  • Interrater reliability : The extent to which two different researchers obtain the same result when using the same instrument to measure a concept.
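For two raters assigning binary codes to the same cases, interrater agreement corrected for chance is often summarized with Cohen's kappa; a minimal sketch with invented ratings:

```python
def cohens_kappa(rater1, rater2):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement),
    for two raters assigning binary (0/1) codes to the same cases."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    p1, p2 = sum(rater1) / n, sum(rater2) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)   # agreement expected by chance alone
    return (po - pe) / (1 - pe)

print(cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.5
```

A kappa of 0 means agreement is no better than chance; 1 means perfect agreement.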
  • Interrupted time series design : Research in which ongoing repeated measurements of the outcomes are made and treatment is introduced at some point, while measurements continue as before.
  • Interval measurement : When items have an equal distance between them on a measurement scale. This is the third level of measurement.
  • Interval measurement recording : An observational system that divides a predetermined period of time into a number of shorter sections of time. The observer records whether the targeted behavior occurred in each successive interval.
  • Interval scale : One measurement scale that, in addition to ordering scores from high to low, also establishes a uniform unit in the scale so that any equal distance between two scores is of equal magnitude. It has all of the properties of its two lower subscales, nominal and ordinal.
  • Interval variable : A quantitative ordering system in which the numerical differences between adjacent attributes are equal. The system may include a value of zero, but the zero point is arbitrary rather than absolute.
  • Intervening variable : An independent variable or extraneous variable that can influence the main treatment variable and the dependent variable.
  • Intervention : A specified, planned treatment or method that is intended to modify one or more dependent variables. It is referred to as the main intervention treatment or manipulated variable.
  • Intervention study, research : A general type of research in which independent variables (e.g., treatments) are manipulated in order to study the effect on one or more dependent variables (e.g., outcomes). It is typically used in practice or program evaluation studies.
  • Interview : A form of data collection in which individuals or groups are questioned orally for a survey. Interviews can occur face-to-face, over the phone, or via the Internet.
  • Interviewer : A person who conducts an interview, for example, a researcher, research assistant, or data collector.
  • Interviewer bias : Influence on the answers to questions caused by the presence, attitudes, or actions of the person asking the study questions.
  • Interviewer distortion : Information gathered by a researcher that is not accurate because of misunderstanding, misreporting, or incorrect observing by the researcher.
  • Interviewing : The process of questioning an individual to obtain specific information for a research study.
  • Intraobserver reliability : The reliability of responses obtained from the same observer at different time points.
  • Intrarater reliability : A type of reliability assessment in which the same assessment is completed by the same rater on at least two occasions.
  • Intrusiveness : The extent to which an intervention disrupts a research participant's normal life.
  • Inventory : Another way to describe a scale or measuring device that sets out to assess something.
  • IP address : A series of numbers that are assigned to devices participating in a computer network.
  • IQ : A score derived from one of several different standardized tests attempting to measure intelligence, for example, the Stanford-Binet Intelligence Test.
  • IRB (institutional review board) : Also known as an independent ethics committee (IEC) or ethical review board (ERB), an IRB is a committee that is designated to approve, monitor, and review biomedical and behavioral research involving humans with the aim to protect the rights and welfare of the research participants. It is a federal requirement in universities, large organizations, hospitals, and so on.
  • Item validity : The degree to which each of the questions in an instrument measures the intended variable. This is also referred to as face validity.
  • Iterative : The process of repeatedly applying a function to a series of elements in a collection or set until some condition is satisfied.
  • IVR (interactive voice response) : A phone technology used to provide an interactive set of menu options that a caller selects with a phone keypad, which then offers options for more information.
  • Jacobson's change index : A statistical formula developed by Jacobson and colleagues to assess change from pre- to posttest in clinical samples. It can be used with almost any measure on which one is assessing change. It is also called the Reliable Change Index (RCI).
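The Jacobson and Truax formula behind the Reliable Change Index can be sketched as follows (the scores, standard deviation, and reliability coefficient are hypothetical):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax Reliable Change Index.
    SE = sd * sqrt(1 - r); S_diff = sqrt(2 * SE^2); RCI = (post - pre) / S_diff.
    |RCI| > 1.96 suggests change beyond measurement error (p < .05)."""
    se = sd_pre * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * se ** 2)
    return (post - pre) / s_diff

# Hypothetical case: a depression score drops from 50 to 38 on a measure with
# SD = 10 and test-retest reliability = .84
print(round(reliable_change_index(50, 38, 10, 0.84), 2))  # -2.12
```

Here |RCI| exceeds 1.96, so the pre-to-post drop would be judged reliable rather than measurement noise.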
  • Jargon : The specialized vocabulary or set of idioms shared by a particular profession or subgroup.
  • John Henry, threat : The tendency of control group participants to exert extra effort when they know their performance is being compared with that of an experimental group; named for John Henry, a worker who outperformed a machine in an experimental setting because he was aware that his performance was being compared with that of the machine. This may be a threat to the internal validity of a study.
  • Journal : A library periodical that specializes in a specific subject area and publishes peer-reviewed articles.
  • Judgment sample : This is a purposive sample selected by the researcher highlighting certain characteristics of the population that will be targeted for the sample. This is a nonprobability-based sample.
  • Justification of a study : The rationale statement in a research report in which a researcher indicates why the research is being conducted and is important to conduct. This usually includes implications for theory and practice.
  • Kendall's Tau (τ) : A nonparametric statistic used to measure the degree of correspondence between two rankings and assessing the significance of this correspondence.
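A minimal sketch of Kendall's tau (specifically the tau-a variant, which ignores ties; the data are illustrative):

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """tau-a = (concordant - discordant) / (n * (n - 1) / 2), no tie correction."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum((x[i] - x[j]) * (y[i] - y[j]) > 0 for i, j in pairs)
    discordant = sum((x[i] - x[j]) * (y[i] - y[j]) < 0 for i, j in pairs)
    return (concordant - discordant) / len(pairs)

print(kendall_tau_a([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0  (identical rankings)
print(kendall_tau_a([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0 (reversed rankings)
```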
  • Key actors : See key informant.
  • Key concept : The essential idea contained within a published report, which may also be contained in specific key words.
  • Key informant : An individual who is targeted for data collection because his or her information will provide an accurate and relatively wide perspective on the setting, as well as lead to other sources of information. Key informants are often used in qualitative studies.
  • Key question : A major issue that needs to be addressed by a research study. Each research report purportedly answers a main driving question.
  • Key word : A word that helps to show what essential ideas are used in an article. These often are listed after the abstract in APA-style articles and are used to code the article in a database.
  • Kindling effect : Also called the kindling hypothesis, this refers to the repeated occurrence of smaller events over time, for example, stress, alcohol and drug use, and so on, that change or spark the brain's chemistry and impact a person's mental health in negative ways, for example, epileptic seizures, depression, or anxiety. This contrasts with a singular major event, for example, divorce or the loss of one's job, which can impact a person similarly.
  • Knowledge-level continuum : The range of the amount of information that participants in a study are likely to have about a particular topic.
  • Knowledge question : An inquiry made by an interviewer to find out what factual information a respondent possesses about a particular topic.
  • Kolmogorov-Smirnov (K-S) test : A goodness-of-fit test that is used to decide if a sample comes from a population with a specific distribution. The test is based on the empirical distribution function (EDF).
  • Kruskal-Wallis one-way analysis of variance : A nonparametric inferential statistic used to compare two or more independent groups for statistical significance of differences.
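The Kruskal-Wallis H statistic is computed by ranking the pooled data; a sketch that assumes no tied values (the groups are invented):

```python
def kruskal_wallis_h(*groups):
    """H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the rank
    sum of group i over the pooled data (no tie correction in this sketch)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # assumes no tied values
    n_total = len(pooled)
    rank_sums = [sum(rank[v] for v in g) for g in groups]
    h = 12 / (n_total * (n_total + 1)) * sum(
        r ** 2 / len(g) for r, g in zip(rank_sums, groups))
    return h - 3 * (n_total + 1)

print(round(kruskal_wallis_h([1, 2], [3, 4], [5, 6]), 4))  # 4.5714
```

The H value would then be referred to a chi-square distribution to judge significance.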
  • Kuder-Richardson coefficient of reliability (KR-20) : A procedure for determining an estimate of the internal consistency of a test or other instrument from a single administration of the test without splitting the test into halves. The KR-20 is used for dichotomous data. This is a test that standardizes a measuring instrument.
  • Kurtosis : In probability theory and statistics, it is a measure of the peakedness of the probability distribution of a real-valued random variable. It presents a picture of the shape of a distribution. Higher kurtosis means more of the variance is the result of infrequent extreme deviations, as opposed to frequent, modestly sized deviations.
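Excess kurtosis can be computed from the fourth central moment; a minimal sketch (the two-point example is chosen because its kurtosis, -2, is the theoretical minimum):

```python
import statistics

def excess_kurtosis(data):
    """Population excess kurtosis: fourth central moment / variance^2,
    minus 3 (so a normal distribution scores 0)."""
    m = statistics.fmean(data)
    m2 = sum((x - m) ** 2 for x in data) / len(data)
    m4 = sum((x - m) ** 4 for x in data) / len(data)
    return m4 / m2 ** 2 - 3

print(excess_kurtosis([-1, 1, -1, 1]))  # -2.0
```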
  • Latent content : The underlying meaning of a communication. For instance, a professor might say, “You're being quiet today,” which is explicitly a description, but the latent content might be, “You ought to participate more.”
  • Leading question : An inquiry that suggests the answer or contains the information the examiner is looking for. This is also called prompting or spoon-feeding.
  • Level of confidence : The percentage of instances in which a set of similarly constructed tests would capture the true value (e.g., the mean) of the system being tested, within a specified range around each test's measured value. As a researcher performs more and more tests on a system, he or she becomes increasingly confident in predicting the result of the next test.
  • Level of significance : The probability that a discrepancy between a sample statistic and a specified population parameter is due to sampling error or chance. The commonly used significance level in research is p < .05.
  • Life history, calendar : A structured interview utilizing a gridlike format to facilitate recall, on a month-to-month basis, of life events experienced by the interviewee. It can be used to chart behaviors or events for a study and is often used in qualitative research.
  • Likelihood ratio : The ratio of the maximum probability of a result under two different hypotheses. For instance, the likelihood that a given test result would be expected in a participant with the target disorder compared with the likelihood that this same result would be expected in a participant without the target disorder.
  • Likert scale : A self-report instrument in which an individual responds to a series of statements on a continuum by indicating the extent of agreement. Each choice is given a numerical value, and the score is presumed to indicate the magnitude of the attitude or belief in the question, for example, all of the time (4), most of the time (3), some of the time (2), and none of the time (1).
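Scoring a Likert scale, including reverse-coded items, can be sketched as follows (the response labels mirror the example in the entry; the reverse-coding scheme is an assumption for illustration):

```python
# Hypothetical 4-point scale; items listed in `reverse` are negatively worded,
# so their scoring is flipped before summing.
SCALE = {"none of the time": 1, "some of the time": 2,
         "most of the time": 3, "all of the time": 4}

def likert_score(responses, reverse=()):
    """Total score for one respondent; a higher total always means more
    of the measured attitude once reverse-coded items are flipped."""
    total = 0
    for i, answer in enumerate(responses):
        value = SCALE[answer]
        if i in reverse:
            value = 5 - value          # 1<->4, 2<->3 on a 4-point scale
        total += value
    return total

answers = ["all of the time", "most of the time", "none of the time"]
print(likert_score(answers, reverse={2}))  # 4 + 3 + (5 - 1) = 11
```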
  • Limitation : An aspect of a study that the researcher knows may influence the results or generalizability of the results negatively but over which he or she has little control. These are to be acknowledged in the discussion subsection of a research report. All research has limitations.
  • Linear relationship : A statistical correlation in which an increase in one variable is associated with a corresponding increase in another variable, and a decrease in one variable is associated with a corresponding decrease in another variable.
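The strength of a linear relationship is usually quantified with the Pearson correlation coefficient; a minimal sketch with invented data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation: covariance / (sd_x * sd_y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))   # 1.0  (perfect positive)
print(round(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]), 6))   # -1.0 (perfect negative)
```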
  • Literature review : The systematic identification, location, and analysis of documents containing information related to a research problem. This is usually the subsection before the method, and after the introduction, in a research article. It provides a context for the study.
  • Location, threat : The possibility that results are due to characteristics of the setting or location in which a study is conducted.
  • Logbook : Recording information in the form of chronological events in field books, logbooks, case notes, or diaries. These may include pictorial, printed, or audiovisual notations. This is one of the main qualitative techniques used.
  • Logic : The principle that guides reasoning within a given field or situation. Two main forms of logic prevail in research: inductive, or moving from particular cases to a theory; and deductive, or applying theory to the particular.
  • Logic model : Developed by the W.K. Kellogg Foundation, this is a way to describe a theory-based rationale for a program and its inputs, activities, outputs, and outcomes. It is typically used in program evaluation studies.
  • Logistic regression, statistical : It is part of a category of statistical models called generalized linear models. It allows one to predict a discrete outcome, such as group membership, from a set of variables that may be continuous, discrete, dichotomous, or a mix of any of these. Generally, the dependent or response variable is dichotomous, such as presence versus absence or success versus failure.
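A bare-bones sketch of logistic regression with one predictor, fit by gradient descent (real analyses would use a statistical package; the data, learning rate, and epoch count are invented):

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit P(y=1|x) = 1 / (1 + exp(-(w*x + b))) by per-case gradient ascent
    on the log-likelihood; a toy sketch for a single predictor."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x        # gradient step for the weight
            b += lr * (y - p)            # gradient step for the intercept
    return w, b

# Dichotomous outcome (e.g., dropout: 0 = no, 1 = yes) from one score
xs, ys = [0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 / (1 + math.exp(-(w * x + b)))

print(predict(0.5) < 0.5, predict(2.5) > 0.5)  # True True
```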
  • Longitudinal case study design : A way of organizing developmental research in which a case or group is studied repeatedly over a period of days, weeks, months, or years.
  • Longitudinal one-group posttest-only design : A way of organizing developmental research in which one collection of participants is studied repeatedly over a period of days, weeks, months, or years.
  • Longitudinal survey : A study in which information is collected at different points in time in order to note changes over time. These studies are usually of considerable length, such as several months or years.
  • Magnitude recording : A list of data that describes the amount, level, or degree of the target problem during each occurrence.
  • Mailed survey : A questionnaire sent to research participants that they are asked to fill out and to return to the sender, either by post or e-mail.
  • Management information system (MIS) : A system that captures data and turns it into something that is meaningful to help manage activities or decisions.
  • Manifest coding : A form of coding raw data and transforming it into processed data. It is used in content analyses in which a researcher will count the number of times a particular word or phrase appears in the text or video. It is a reliable method because the word or phrase either appears or does not appear; however, it does not attach meaning to the word or phrase being counted.
  • Manipulated variable : See experimental variable.
  • Manuscript : A copy of an article or text before it has been printed or published.
  • Marginal costs : The costs of producing one additional unit of output in a program. Its actual value in dollars is the marginal value. They are typically used in program evaluation studies.
  • Marketing research : An objective approach to developing studies to answer business management decision-making questions based on target populations.
  • Matched pairs : Study participants grouped in twos with the comparison group participants. This helps to ensure that the two groups are similar in certain ways; for example, pairs could be matched for age, race, and gender.
  • Matrix display : A rectangular array of quantities or expressions set out by histograms, bar graphs, or rows and columns.
  • Matrix questions : A set or series of questions that share answer choices laid out on a grid.
  • Maturation, threat : The possibility that results are due to changes that occur in participants as a direct result of the passage of time, human developmental processes, or fatigue and that may affect their performance on the dependent variable. This may be a threat to the internal validity of the findings of a study.
  • Mauchly's sphericity test : This is used to assess the variances in repeated measures ANOVAs (F-tests). Sphericity tests whether there is equality of variances of the differences between levels of the repeated measures factor. If this test is not significant (p > .05), it is reasonable to conclude that the variances of differences are not significantly different, and thus, the F-test can be trusted.
  • Mean, arithmetic : The sum of the scores in a distribution divided by the number of scores in the distribution. It is the most commonly used measure of central tendency. It is often reported with its companion statistic, the standard deviation, which shows how far things vary from the average.
  • Meaning unit : The smallest bit of language or numerical information needed to make sense. In the English language, cat is a meaning unit; ca is not.
  • Measurability : The quality of being able to be quantified and assessed.
  • Measurement : The act or process of assigning numbers to phenomena according to a system of valuation.
  • Measurement error : A degree of inaccuracy in results due to flaws in the measuring instrument.
  • Measurement instrument : The instrument inventory, scale, or tool used to translate a construct into observable data.
  • Measurement level : There are four hierarchical levels of measurement. From lowest to highest, these include the following: nominal, or naming variables; ordinal, or rank-ordered variables; interval, or evenly spaced variables; and ratio, or evenly spaced variables where zero has an absolute value. Each level assumes all the characteristics of the ones below.
  • Measurement of the dependent variable, threat : The extent to which the generalizability of the study's results are limited to the particular dependent measure used. This may be a threat to the external validity of a study's findings.
  • Measurement scales : See measurement level.
  • Measures of central tendency : A set of descriptive statistics, including frequency, range, percentage, mean, median, mode, variance, and standard deviation of a data set. These are used to find an average, or central tendency of a data set, which refers to a measure of the middle or expected value of the data set. Collectively, they provide a picture of the data set.
  • Measures of variability : Indices indicating how spread out the scores are in a distribution. Those most commonly used in social science research are the range, standard deviation, and variance.
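The measures of variability defined above can be sketched with Python's standard `statistics` module; the score values here are hypothetical, chosen only for illustration:

```python
import statistics

scores = [4, 8, 6, 5, 3, 7, 9, 6]  # hypothetical set of test scores

data_range = max(scores) - min(scores)   # distance between the extremes
variance = statistics.variance(scores)   # sample variance (n - 1 in the denominator)
std_dev = statistics.stdev(scores)       # square root of the variance

print(data_range)  # 6
print(variance)    # 4.0
print(std_dev)     # 2.0
```

Note that `statistics.variance` computes the sample variance; `statistics.pvariance` would give the population variance instead.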
  • Measuring instrument : An actual measurement device— observation schedule, diary, logbook, scale, or questionnaire— that shows the extent or amount or quantity or degree of something being assessed.
  • Mechanical matching : A process of using a computer to pair two research participants whose scores on a particular variable are similar.
  • Median ( Mdn ) : The midpoint or number in a distribution having 50% of the scores above it and 50% of the scores below it.
  • Mediator variable : This is a variable that describes how rather than when effects will occur by accounting for the relationship between the independent and dependent variables. A mediating relationship is one in which the path relating A to C is mediated by a third variable (B).
  • Member check : A qualitative research technique in which the interpretation and report (or a portion of it) is given to members of the sample (informants) in order to check the authenticity of the work. Their comments serve as a check on the viability of the interpretation. There are many subcategories of member checks, including narrative accuracy checks, interpretive validity, descriptive validity, theoretical validity, and evaluative validity.
  • Memory question : An inquiry that gathers information about a research participant's recollection of the past.
  • Meta-analysis : A systematic review that uses quantitative methods of published research interventions and studies to synthesize and summarize the results of a large number of research studies on one particular topic. This allows aggregate claims about interventions and their effects to be made and offers empirical suggestions about best practices of interventions. The unit of analysis in meta-analysis is the effect size found in different studies.
  • Methodology : The technique used to conduct a study. This is described in detail in a research article. Methodologies are also assessed for their ability to be replicated.
  • Minority group : People distinguished by being on the margins of power, status, or the allocation of resources within a society; a nonmainstream group in a society or research study. It also includes members of a numerically smaller group.
  • Mirror sample : A technique used when the researcher suspects that respondents may not tell the truth about their behavior. Instead of asking about the respondent directly, the researcher asks each respondent to think of a close friend of the same sex and age-group. Then he or she asks them to answer for that friend without naming the friend. The theory is that, because they don't know all the details about their friends, they'll really describe their own behaviors. Mirror samples have been used in surveys on sexual behavior and on studies about AIDS. This technique is similar to a third-person technique.
  • Mixed-mode survey : A way of gathering information that uses multiple ways to obtain information from participants, including strategies such as face-to-face interviewing, online surveys, telephone surveys, and mail surveys.
  • Mixed research method : Research that combines elements of qualitative and quantitative approaches, for example, completing a study where the first part uses quantitative surveys and the second part uses qualitative interviews and then following up on participants to obtain more open-ended, impressionistic, or detailed data.
  • Mode ( Mo ) : The number that occurs most frequently in a distribution of scores or numbers.
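Taken together, the three common measures of central tendency defined in the entries above (the mean, median, and mode) can be computed with Python's standard `statistics` module; the distribution below is hypothetical:

```python
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 9]  # hypothetical distribution of scores

print(statistics.mean(scores))    # 4.5 -- sum of scores divided by their count
print(statistics.median(scores))  # 4.5 -- midpoint of the ordered scores
print(statistics.mode(scores))    # 5   -- the most frequently occurring score
```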
  • Model : A symbolic representation of the interrelations of variables or factors within a system or a process. It is presented as a conceptual framework or a theory that explains a phenomenon and allows predictions to be made. It is often presented visually or pictorially in a research report.
  • Moderator variable : A variable that may or may not be controlled but has an effect on the independent or dependent variable.
  • Monolithic perspective : A perspective that sees the world only in one way.
  • Mortality, threat : The possibility that results of participants who are, for whatever reason, lost during the course of a study may differ from those who remain. Their absence has important effects on the results of the study. This may be a threat to the internal validity of the findings of a study. It is also referred to as sample attrition.
  • Multicultural research : A systematic study that includes an awareness of race, class, sexual orientation, and other justice issues that influence the way that studies are conducted. This research is interested in, and often critical of, how some systems of reasoning become favored over others.
  • Multidimensionality : Definitions, constructs, theoretical frameworks, variables, and measurement instruments based on more than a single knowledge area. In social research, these various knowledge areas usually include at least the biological, psychological, sociological, and cultural.
  • Multigroup posttest-only design : A research design that includes testing whatever intervention is being studied by giving the intervention and then testing after it is given in more than one group.
  • Multiple baseline design : A single-participant research design in which baseline data are collected on various behaviors for one participant, after which a treatment or intervention is given sequentially over a period of time to each behavior one at a time until all behaviors are under treatment. It is also used to collect data on different participants with regard to a single behavior or to assess sequentially treating a participant's behavior in different settings.
  • Multiple reality : A theory in phenomenological qualitative evaluation methods holding that each person experiences the world uniquely, so there is no single fixed reality; people will see the world in a number of different ways.
  • Multiple regression : A statistical technique using a prediction equation with two or more variables in combination to predict a criterion or dependent variable.
  • Multiple-treatment interference, threat : The carryover or delayed effects of prior experimental treatments when individuals receive two or more experimental treatments in succession. This may be a threat to the internal validity of the findings of a study.
  • Multiplicity : A large number or variety, usually referring to treatments or interventions.
  • Mutually exclusive events : Two events are mutually exclusive or disjointed if it is impossible for them to occur together.
  • N-of-1 randomized trials : In such trials, the participant undergoes randomized brief periods of real treatment versus an alternative or placebo treatment. The participant and researcher are blinded, if possible, and outcomes are monitored. Treatment periods are replicated until the researcher and participant are convinced that the treatments are definitely different or definitely not different. It is also known as the alternating treatments design.
  • Naming variables : Giving unique labels to variables so you can easily track them in data analyses.
  • Narration : Detailed accounts of individuals, events, themes, life histories, and their meanings. These are verbatim accounts from the voices of individuals to the researchers. This is one of the main methods of qualitative research.
  • Narrative accuracy, check : Taking a study participant's self-report and comparing it to objective factual records or data when possible.
  • Narrative data : Information gained by listening to a study participant's individual descriptions of an event and recording them verbatim. It is most often used in qualitative research.
  • Narrative history : Giving an account of events in a story- based form. It is typically used as a qualitative research technique.
  • Narrative interviewing : A technique involving asking research participants open-ended questions and then recording their answers as they state them to the researcher. It is used primarily in qualitative research.
  • Narrative review : Opposite of a systematic review, a narrative review is an overview of a person's participative experiences with something that is compiled by a researcher. It is used primarily in qualitative research.
  • Natural language : A collection of words, phrases, or idioms that are used by people for general-purpose communication, rather than formal or technical communication.
  • Naturalistic observation : Observation in which the observer controls or manipulates nothing and tries not to affect the observed situation in any way.
  • Naturalistic research : Studies that observe participants in the environments in which they usually live, rather than in a laboratory setting. This is referred to as in vivo or in situ.
  • Naturally occurring settings : Places to research that are not set up or contrived by the researchers but instead already exist and are found and studied by researchers, for example, playgrounds, homes, agencies, churches, or parks.
  • Naturalness : The quality of faithfully representing nature or life.
  • Nazi medical experiments : Unethical human experiments conducted by the Nazi party in World War II Germany, which led to stricter international ethical guidelines for researchers worldwide.
  • Needs assessment : Research or evaluation that seeks to empirically determine what needs are experienced by a client, group of clients, agency, or community. Needs assessment may demonstrate whether a program or intervention is necessary to help solve a problem.
  • Negative correlation : As the value of one of the variables increases, the value of the second variable decreases. Likewise, as the value of one of the variables decreases, the value of the other variable increases. This is also called an inverse correlation and ranges from −1.00 to 0.
  • Negative (deviant) case analysis : Looking for and talking about information found in a research study that does not seem to support the overall pattern found by the analysis. This is a way to continue to work with data until it can explain the anomaly.
  • Negative predictive value : Proportion of people with a negative test who are free of the target disorder. See also likelihood ratio.
  • Negatively skewed distribution : A distribution in which there are more extreme scores at the lower end than at the upper or higher end.
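A quick way to see the effect of negative skew (the few extreme scores in the lower tail drag the mean below the median) is a small sketch with hypothetical exam scores:

```python
import statistics

# Hypothetical exam scores: most are high, but a few very low
# scores form the long left tail of a negatively skewed distribution.
scores = [35, 50, 82, 85, 88, 90, 91, 93, 95, 96]

mean = statistics.mean(scores)      # 80.5
median = statistics.median(scores)  # 89.0
print(mean < median)  # True: the extreme low scores pull the mean down
```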
  • Net social value : The economic value of a project or program once net present (discounted) costs have been subtracted from net present (discounted) benefits. It is typically used in program evaluation studies.
  • Neutral location : A place where power distribution is equal between the people who meet there. For instance, meeting at a professor's office is not neutral, while meeting at a coffeehouse or public park is more neutral.
  • Nominal measurement : Naming variables, placing them into categories, and assigning numerical values to them for research purposes. The numbers do not reflect anything more than the simple naming of the variable, for example, male = 1 and female = 2. This is the first level of measurement. The three other scales are ordinal, interval, and ratio.
  • Nominal scale : A measurement scale that classifies elements into two or more categories, the numbers indicating that the elements are different but not according to their order or magnitude (e.g., yes or no; agree or disagree; male or female; or White, Black, or Hispanic).
  • Nominal variables : Data that are not numbers but are classified using numbers to name them, such as female and male coded as 1 and 2 for data analysis purposes.
  • Nondirectional hypothesis : A statistical prediction that a relationship or difference exists between two variables without specifying its direction; for example, shoe size and hat size are related, but no claim is made about which way. It is also called a two-tailed hypothesis. Compare with the null hypothesis, which predicts that no relationship exists at all.
  • Nonequivalent control group design : A quasi-experimental design involving at least two groups, both of which may be pretested. One group receives the experimental treatment, and then both groups are posttested. Individuals are not randomly assigned to treatments or conditions, and thus the groups may be different in their makeup or size.
  • Nongeneralizability dependent variable, threat : This is when the instrument used to measure the dependent variable is not representative of the population of such measures. For example, two anxiety scales may give different results for the same sample. This may be a threat to the external validity of a study's findings.
  • Nonoccurrence data : Information documenting that something has not happened in a study.
  • Nonparametric tests, techniques : A body of statistical tests used when the data represent a nominal or ordinal level scale, or when assumptions required for parametric tests cannot be met, specifically in cases of small sample sizes, biased samples, an inability to determine the relationship between sample and population, and unequal variances between the sample and population. This is a class of tests that do not hold the assumptions of normality.
  • Nonparticipant observations : Observations in which the observer is not directly involved in the situation to be viewed or studied. The researcher tries to stay in the background to be relatively detached from the study.
  • Nonprobability sample : A subgroup of study participants is selected by the researcher, and it is uncertain if they represent the larger population. These samples can be divided into five main types: availability, convenience, quota, judgment, and purposive.
  • Nonprobability sampling : Collecting a group of participants in a specific way to study without randomizing their selection.
  • Nonrandom sampling : The selection of a sample in which every member of the population does not have an equal chance of being selected for a study.
  • Nonreactive measuring instruments : Those measures in which a participant's behavior is not influenced by social interaction with the researcher. This refers to the ultimate outcome of an instrument and not the means by which it is achieved.
  • Nonsystematic observations : Views that have not been researched in a formal or systematic way but are the result of using information as it is presented.
  • Nonverbal response : Body language, such as looking down at the floor, that communicates something to an observer, even when no words are spoken.
  • Norm : A descriptive statistic that summarizes the test performance of a reference group of individuals and permits meaningful comparisons of individuals to the group.
  • Norm group : A sample group used to develop norms or baseline data for an instrument, scale, or inventory.
  • Norm-referenced instrument : An instrument that permits comparison of an individual score to the scores of a group of individuals on that same instrument or measurement.
  • Normal curve : A graphic illustration of a normal distribution. See normal distribution.
  • Normal distribution : A theoretical bell-shaped distribution having a wide application to both descriptive and inferential statistics. It reflects the distribution of many human characteristics in typical populations. Measures of characteristics of most individuals and events fall under the higher central part of the bell-shaped curve. The smaller ends of the curve are referred to as the low-chance probability areas.
  • Normative data : The average for any given test, which helps researchers to understand experimental data to the extent that it differs from what is deemed normal or expected.
  • Novelty effect, threat : The responses of a study may be partly a function of the newness of the experimental approach. This is a threat to the external validity of the study's findings.
  • Null hypothesis : A statement that any difference between obtained sample statistics and specified population parameters is due to sampling error or chance. The researcher assumes the testing position that there will be no difference between variables being studied.
  • Number needed to harm (NNH) : The number of participants that, if all received the experimental treatment, would result in one additional participant being harmed compared with participants who received the control treatment, calculated as 1/ARI (absolute risk increase) and accompanied by a 95% confidence interval (CI).
  • Number needed to treat (NNT) : The inverse of the absolute risk reduction (ARR) and the number of participants that need to be treated to prevent one bad outcome. It is calculated as the inverse of the absolute risk reduction (1/ARR).
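As a minimal sketch of the NNT arithmetic defined above (the event rates below are hypothetical), the number needed to harm is computed the same way from the absolute risk increase (1/ARI):

```python
import math

# Hypothetical trial: the "event" is a bad outcome (e.g., relapse).
control_event_rate = 20 / 100   # 20 of 100 control participants relapse
treated_event_rate = 12 / 100   # 12 of 100 treated participants relapse

arr = control_event_rate - treated_event_rate   # absolute risk reduction
nnt = 1 / arr                                   # number needed to treat

print(round(arr, 2))   # 0.08
print(math.ceil(nnt))  # 13 (NNT is conventionally rounded up)
```

In this hypothetical example, roughly 13 people would need to receive the treatment to prevent one bad outcome.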
  • Nuremberg Code : A set of research ethics principles that came as a result of the 1947 Nuremberg Trials at the end of World War II. Written in response to the Nazi medical experiments, they are still used in many parts of the world.
  • Obedience to authority experiments : Studies that consider how people respond to being told what to do by a person who they think has power or authority.
  • Objective : A statement of intended outcomes to achieve goals of an intervention or program. They are ideally focused, time framed, and measurable, with some projection of when they will be actually achieved. They are typically used in program evaluation studies.
  • Objectivity : A presumed lack of bias or prejudice. In social science, it should be declared and verified only when it is checkable.
  • Observation : Using one's human senses to obtain information about something. It typically involves systematically recording an event, person, behavior, or phenomenon.
  • Observational data : Data obtained through direct firsthand observation by the researcher.
  • Observational study : A study in which the researchers do not seek to intervene; they simply observe the course of events. Changes or differences in one characteristic (e.g., whether people received the intervention of interest) are studied in relation to changes or differences in another characteristic (e.g., whether they died) without any action by the investigator.
  • Observed frequencies : The statistical occurrence of variable scores or values. This is usually used in contrast to expected frequencies, which are how often one would expect something to occur, given previous data, trends, or observations.
  • Observer : A person who watches but does not participate in what is being studied.
  • Observer bias, threat : The possibility that an observer does not observe phenomena objectively or accurately, thus producing suspect or invalid observations. This is deemed a threat to the internal validity of a study.
  • Observer-participant : A researcher who actually takes part in what he or she is researching, such as someone researching support groups and learning about them while participating in one. This is also referred to as a participant-observer. The technique is used frequently in qualitative research studies.
  • Observer reliability : Ability of an observer or many observers to consistently measure things the same way repeatedly. If this occurs across observers with the same conclusion of the observation, then it is referred to as interrater reliability.
  • Obtrusive data collection, threat : Gathering information in a way that causes difficulty or distress for participants. This may be deemed a threat to the internal validity of a study.
  • Odds : A ratio of the number of people experiencing an event to the number of people who don't experience that event.
  • Odds ratio (OR) : The ratio of the odds of having the target disorder in the experimental group relative to the odds in favor of having the target disorder in the comparison or control group (in cohort studies or systematic reviews). It is also the odds in favor of being exposed to participants with the target disorder divided by the odds in favor of being exposed to control participants (without the target disorder).
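The odds and odds-ratio definitions above reduce to simple arithmetic on a 2 × 2 table; the counts below are hypothetical:

```python
# Hypothetical 2 x 2 table of exposure versus target disorder:
#                  disorder    no disorder
# exposed              30           70
# not exposed          10           90

odds_exposed = 30 / 70      # odds of the disorder among the exposed
odds_unexposed = 10 / 90    # odds of the disorder among the unexposed

odds_ratio = odds_exposed / odds_unexposed
print(round(odds_ratio, 2))  # 3.86
```

An odds ratio above 1 (here, about 3.86) indicates that the odds of the disorder are higher in the exposed group than in the unexposed group.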
  • Omnibus survey : A method in which information on a wide range of participants is collected during the same interview, for example, a consumer marketing survey of many products and their uses.
  • On-task behavior : When a person is engaged in or working on a specific task or activity in a study.
  • One-group posttest-only design : A preexperimental design involving one group that is given a test after treatment is given. It attempts, therefore, to evaluate a program's [Page 81] outcomes when no available comparison group and no pretest data are available (or needed, as in a client satisfaction study).
  • One-group pretest-posttest design : A preexperimental design involving one group that is pretested, exposed to a form of treatment, and then posttested.
  • One-shot case study design : See one-group posttest-only design.
  • One-tailed hypothesis : A statement of the initial expectations of what will happen in an experiment that predicts one possible outcome. This is differentiated from a two-tailed hypothesis, in which a researcher states that the dependent variable might change in either one direction or another but does not specify which way.
  • One-tailed test of statistical significance : The use of only one tail of the sampling distribution of a statistic when a directional hypothesis is stated, for example, a one-tailed t -test.
  • Online survey : A collection of structured questions and responses that are distributed by e-mail or over the Internet more broadly. A frequently used tool is the software program SurveyMonkey.
  • Ontology : The nature of reality and how one views and interprets that reality. This is often an assumption that underpins a research study or methodological approach.
  • Open-ended question : A question giving the responder complete freedom for any response; for example, “What do you like about research?”
  • Operating costs : Costs associated with items that contribute to the operation of a program. They are typically used in program evaluation studies.
  • Operational definition : Defining a term by stating the actions, processes, parameters, or operations used to materially measure or identify examples of it.
  • Operationalization of variables : Defining the parts of a study in concrete and usually numerical ways so that one can consistently know what one is studying. For instance, if depression is a variable, this might be operationally defined as participant scores on a validated measure of depression. This is important for replicating a study as well, allowing people to define the variable in the same way that the original researchers did.
  • Operations research : Mathematical, economic, or scientific analysis of a process or operation used in making decisions. It is often used for the analysis of problems in business and industry involving the construction of models and the application of linear programming, critical path analysis, and other quantitative techniques.
  • Opinion question : A question a researcher asks to find out what people think about a topic.
  • Opportunity costs : The cost that is equivalent to the next-best economic activity that would be forgone if a project or program proceeds. They are typically used in program evaluation studies.
  • Oral history : The recording, preservation, and interpretation of historical accounts and information based on the personal experiences and opinions of the participant. This can be gathered through interviews, poems, music, accounts of events, the study of myths, and so on.
  • Ordinal measurement : Assigning numbers to objects representing their rank ordering (first, second, third, etc.) of the entities measured. This is the second level of measurement, one up from the first, nominal level.
  • Ordinal scale : A measurement scale that ranks or orders individuals in terms of the degree to which they possess a characteristic of interest on which they can be ranked, for example, A students versus B students versus C students.
  • Ordinal variable : See ordinal scale.
  • Ordinary least squares (OLS) : An estimation technique for statistical regression analysis. Estimates are used to analyze both experimental and observational data.
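For the special case of a single predictor, the OLS estimates have a closed form (the slope is the covariance of x and y divided by the variance of x), which can be sketched in plain Python; the data are hypothetical:

```python
import statistics

# Hypothetical data: hours studied (x) versus exam score (y).
x = [1, 2, 3, 4, 5]
y = [52, 55, 61, 64, 68]

mean_x, mean_y = statistics.mean(x), statistics.mean(y)
slope = (sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
         / sum((a - mean_x) ** 2 for a in x))
intercept = mean_y - slope * mean_x

print(round(slope, 2))      # 4.1  -> each extra hour predicts ~4 more points
print(round(intercept, 1))  # 47.7 -> predicted score at zero hours
```

With two or more predictors, the same least-squares criterion is solved with matrix methods rather than this closed form.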
  • Original data : The raw data from an experiment that has or has not been analyzed. It is often referred to as the raw data or existing data set.
  • Outcome evaluation : Researching to measure practice or program effectiveness. Such studies examine what has changed as a result of the intervention being offered.
  • Outcome measures : Specific standardized or nonstandardized benchmarks used to assess whether the intervention or program resulted in any changes.
  • Outcome variable : See dependent variable or criterion variable.
  • Outlier, statistical : An observation in a data set that is far removed in value from the others in the set. It is an unusually large or unusually small value compared with the others. It might be the result of an error in measurement, in which case it will distort the interpretation of the data, having undue influence on many summary statistics, for example, the mean. All outliers should be scrutinized before data analysis.
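One common screening rule for statistical outliers (fences at 1.5 times the interquartile range beyond the quartiles; a convention, not the only criterion) can be sketched with hypothetical data:

```python
import statistics

data = [12, 14, 15, 15, 16, 17, 18, 95]  # 95 is suspiciously large

q1, _, q3 = statistics.quantiles(data, n=4)  # first and third quartiles
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [v for v in data if v < low_fence or v > high_fence]
print(outliers)  # [95]
```

As the entry notes, a flagged value should be scrutinized (is it a measurement error, or a genuine extreme case?) before any decision to exclude it.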
  • Outside observer : A person other than the principal researcher who is often invited to repeat the measures completed by the researcher to guard against researcher bias.
  • Outsider-within : A social group's placement in specific, historical context of race, gender, and class inequality that might influence its point of view on the world.
  • Overgeneralization : Assuming that something will occur in the same way as it did before without really establishing a pattern, for instance, flipping a coin, and when it lands on heads, assuming it will do the same the next time. In reality, however, it is a 50–50 chance for each independent coin toss.
  • Overview : See systematic review.
  • Panel group : A group of persons who provide information or feedback to the researchers at different points of study. It is typically used in program evaluation studies.
  • Paradigm : This is the way a person sees and interprets the world according to a particular belief system.
  • Parallel-forms reliability : The degree of similarity attained by putting together two tests with the same content and running them both at the same time to see if they measure things the same way. This is one way of assessing the reliability of a measuring instrument.
  • Parameter : A numerical index describing a characteristic of a population, for example, its variance, standard deviation, and so on.
  • Parametric statistical test technique : A test of significance, appropriate when the data represent an interval or ratio scale of measurement and when other specific assumptions have been met, specifically that the sample statistics relate to the population parameters, that the variance of the sample relates to the variance of the population, and that the population has normality.
  • Parsimony : A philosophical principle preferring simplicity and succinctness in explanations. It is commonly stated as: “the simplest adequate explanation is usually the more correct one.”
  • Partial correlation : A method of controlling the participant characteristic threat in correlational research by statistically holding one or more variables constant and then analyzing the data.
  • Participant : An individual or subject who is studied in research, often but not necessarily a student, patient, or client.
  • Participant observations : Observations in which the researcher actually becomes a participant in the situation to be observed. This is one of the main methods of qualitative research. See observer-participant.
  • Participant reality : However correct or biased, the way that a person actually sees or perceives the world. This is in contrast to objective reality, which is a concrete reality that would be available for everyone to observe and check.
  • Participant selection bias, threat : The possibility that characteristics of the participants selected in a study may account for observed relationships, rather than the effect of the independent variable on the dependent variable.
  • Participants, research : Any subjects of a study.
  • Participatory action research (PAR) : A form of inquiry focusing on the effects of the researcher's direct actions of practice within a participatory community, with the goal of improving the performance quality of the community or an area of concern. It involves all relevant parties examining together current action (which they experience as problematic) in order to seek change and improve it. They do this by critically reflecting on the historical, political, cultural, economic, geographic, and other contexts in trying to understand the issues needing change, followed by action steps.
  • Participatory evaluation : An evaluation model organized as a team project in which the evaluator and representatives of one or more stakeholder groups work collaboratively in developing the evaluation plan, conducting the evaluation, or disseminating and using the results.
  • Path analysis : A type of data analysis investigating linear and causal connections among correlated variables. It seeks to determine which variables influence others and in which ways.
  • Pearson correlation ( r ) : A common index of correlation appropriate when the data represent either interval level or ratio scales. It takes into account each and every pair of scores and produces a coefficient between −1.00 and +1.00. A positive r indicates that, as one variable goes up or down, so does the other. Negative or inverse r indicates that, as one goes up, the other goes down.
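The Pearson r can be sketched directly from its definition (the covariance of the paired scores divided by the product of their standard deviations); the paired scores below are hypothetical and deliberately form a perfect positive relationship:

```python
import math

# Hypothetical paired interval-level scores.
x = [2, 4, 6, 8, 10]
y = [3, 5, 7, 9, 11]

mean_x, mean_y = sum(x) / len(x), sum(y) / len(y)
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mean_x) ** 2 for a in x)
                    * sum((b - mean_y) ** 2 for b in y))
print(r)  # 1.0, a perfect positive correlation
```

Reversing the order of `y` would yield r = −1.0, a perfect inverse correlation.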
  • Peer review : Offering research methods, studies, or scholarly work to scientific scrutiny and critique by selected other experts in a respective field of study.
  • Percentage distribution : A frequency distribution in which the individual class frequencies are expressed as percentages of the total frequency, which together sum to 100%.
  • Percentile rank : An index of relative position indicating the percentage of scores that fall at or below a given score.
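The definition above translates directly into a count: a sketch with invented scores (the helper name `percentile_rank` is my own):

```python
scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]  # invented distribution

def percentile_rank(scores, score):
    """Percentage of scores that fall at or below the given score."""
    at_or_below = sum(1 for s in scores if s <= score)
    return 100 * at_or_below / len(scores)

print(percentile_rank(scores, 80))  # 60.0: 6 of the 10 scores are <= 80
```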
  • Performance appraisal : A specific method by which employees are evaluated.
  • Performance management : Organizational management that relies on evidence about policy and program accomplishments to link strategic priorities to outcomes and make decisions about current and future directions.
  • Performance measures : The process of designing and implementing quantitative and qualitative measures of program results, including outputs and outcomes.
  • Periodical : A magazine, newspaper, journal, or other type of written work that is published in a chronological series.
  • Personal experiences, participant observations : A set of research strategies that aim to gain a close and intimate familiarity with a given group of individuals (such as a religious, occupational, or subcultural group, or a particular community) and their practices through an intensive involvement with people in their natural environment, often, though not always, over an extended period of time. This is typically used in qualitative research studies.
  • Personal identification number (PIN) : A number assigned to a participant to maintain anonymity when collecting or analyzing data.
  • Personal journal, notes : Taking case notes about a situation being studied.
  • Phenomenology : The study of reports of structures of consciousness, like thoughts as experienced from the first-person point of view. This is one of the main methods of qualitative research.
  • Philosophical assumptions : Beliefs that are inherent in research methodologies but that are not always explicit.
  • Pie chart : A circular graphic illustration of the breakdown of data into categories, usually using percentages representing spokes in a wheel or wedges.
  • Pilot study : A small-scale study administered before conducting an actual study. Its purpose is to reveal limitations in the research method or intervention to be amended to create a better study, or to demonstrate the potential usefulness of an intervention.
  • Placebo : An inactive treatment or procedure, from the Latin for “I shall please.” The placebo effect (usually a positive or beneficial response) is attributable to the participant's or experimenter's expectation that the treatment will have an effect.
  • Placebo effect, threat : When something studied seems to have an effect even though it is given as a control. For instance, if college students were in a study where they thought they might be drinking alcohol and they felt drunk, even though it turned out that their drinks were nonalcoholic, this would be the placebo effect. This is a possible threat to the internal validity of a study.
  • Plausible rival hypothesis : An intervening, antecedent, or other independent variable shown by evidence or judgment to influence the relationship between either an independent and dependent variable or a program and its intended outcomes.
  • Point estimates : The use of sample data to calculate a single value (known as a statistic) that is to serve as a best guess for an unknown (fixed or random) population parameter.
  • Poisson distribution : A statistical distribution with known properties used as the basis for analyzing the number of occurrences of relatively rare events occurring over time.
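A hedged sketch of the Poisson probability mass function for k occurrences when lambda events are expected per interval (the function name and example rate are my own, for illustration):

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k events) when lam events are expected per interval."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# If a clinic records 2 rare cases per month on average, the probability
# of observing exactly 0 cases next month:
print(round(poisson_pmf(0, 2), 3))  # 0.135
```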
  • Poll : A survey of the public or a sample to acquire information about participants, for example, political surveys before elections, census data, and so on.
  • Population : The total group to which the researcher would like the results of a study to be generalized. It includes all individuals with certain specified and identified, or universe, characteristics of a defined class of people, facts, or events.
  • Population generalizability : The extent to which the results obtained from sample data are generalizable to a larger group.
  • Positive correlation : A correlation in which, as one variable goes up or down, so does the other one. Its value is greater than 0, up to a maximum of +1.00.
  • Positive predictive value : The proportion of people with a positive test who have the target disorder.
  • Positively skewed distribution : A frequency distribution in which there are more extreme scores at the upper or higher end than at the lower end.
  • Positivistic research approach : The perspective that the tools used by science to study biological, physical, and other aspects of the natural world can be usefully applied to study human phenomena. Theological or metaphysical explanations are avoided in preference to naturalistic accounts.
  • Postmodernism : The theory that reacts against earlier modernist principles, by reintroducing traditional or classical elements of style or by carrying modernist styles or practices to extremes. It emphasizes the role of language, power relations, and motivation; in particular, it attacks the use of sharp classifications, for example, male versus female, straight versus gay, old versus young, white versus black, and so on. Whereas modernism was often associated with identity, unity, authority, and certainty, postmodernism is often associated with difference, separation, textuality, and skepticism.
  • Posttest : A test given after an intervention to determine if and how participants may have been affected by the intervention.
  • Posttest odds : The odds that the participant has the target disorder after the test is carried out, calculated as the pretest odds multiplied by the likelihood ratio.
  • Posttest-only control group design : A research design involving at least two groups of participants. One group receives a treatment, the other receives no treatment, placebo treatment, or an alternative treatment condition, and all groups are posttested. This design is a quasi-experiment if the groups are formed naturally. It is an experiment if the groups are formed using random assignment.
  • Posttest probability : The proportion of participants with that particular test result who have the target disorder, calculated as the posttest odds/[1 + posttest odds].
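The odds arithmetic in the posttest odds and posttest probability entries can be worked through in a few lines (the rates here are invented for illustration):

```python
# Posttest odds = pretest odds x likelihood ratio;
# posttest probability = posttest odds / (1 + posttest odds).
pretest_probability = 0.20                                      # 20% prevalence
pretest_odds = pretest_probability / (1 - pretest_probability)  # 0.25
likelihood_ratio = 8.0                                          # illustrative test accuracy

posttest_odds = pretest_odds * likelihood_ratio                 # 2.0
posttest_probability = posttest_odds / (1 + posttest_odds)      # 2/3
print(round(posttest_probability, 2))  # 0.67
```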
  • Power of a statistical test : The probability that the null hypothesis will be rejected when there is a difference in the populations. The greater the ability of a test to eliminate false hypotheses, the greater its relative power or its ability to avoid a Type II error.
  • Practical significance : A difference large enough to have some practical effect, contrasted with statistical significance, which may be so small as to have no practical consequences. It is sometimes referred to as clinical significance.
  • Practice evaluation : The assessment of the impact of a specific intervention on a target and outcome for a participant or participant group. It is conducted in partnership with participants and promotes ethical clinical practice.
  • Practitioner-researcher : A person or professional who conducts studies and also works in the field that he or she is researching at the same time. They often conduct empirical research about people they work with to direct and inform their practice.
  • Pragmatism : The belief that the significance of something is best understood in terms of its effects.
  • Prediction : The estimation of scores on one variable or event from information about one or more other variables or events.
  • Prediction equation : A mathematical equation used in a cause-and-effect study to estimate an outcome from predictor values; for instance, an equation predicting that 64% of adults who take two aspirin will alleviate their headaches within 32 minutes.
  • Prediction study : An attempt to determine variables that are related to a dependent variable.
  • Predictive validity : The degree to which scores on an instrument predict characteristics of individuals in a future situation. This is a psychometric property of the instrument or scale.
  • Predictor variable(s) : The variable(s) from which projections are made in a causal study. They are normally independent variables.
  • Preexperimental design : A research design that involves studying only a single group of participants, either post-treatment only, or pre- and posttreatment. No control or comparison groups are used.
  • Pretest instrumentation : This refers to pilot testing the measurement instrument with a sample not used in the research study in order to correct any problems that it may have.
  • Pretest odds : The odds that the participant has the target disorder before the test is carried out, calculated as pretest probability/[1 – pretest probability].
  • Pretest-posttest control group design : A research design that involves studying at least two groups. All groups are pretested, then only one group receives a specified treatment, while the other group(s) receive no treatment, placebo treatment, or alternative treatment. Both groups are then posttested. If the groups are created using random assignment, this is an experimental study. If the groups are formed naturally, it is a quasi-experimental study.
  • Pretest probability prevalence : The proportion of people with the target disorder in the population at risk at a specific time (point prevalence) or time interval (period prevalence).
  • Pretest sensitization, threat : The pretest modifies the participant so that he or she behaves differently than unpretested participants. This may be a threat to the external validity of a study's findings.
  • Pretest study : An assessment test given before an intervention to determine the level on which a research participant falls before the intervention takes place.
  • Pretest treatment interaction, threat : The possibility that participants may respond or react differently to a treatment because they have been pretested. This may be a threat to the external validity of the findings of a study.
  • Prevalence rate : The number of cases of a disorder, whether new or previously existing, observed in a specified period of time.
  • Prevalence study : A type of cross-sectional study that measures the proportion of a population having a particular condition or characteristic, for example, the number of people in a town with a particular disease or illness.
  • Primary data : Information collected firsthand by the researcher for the study, for example, through surveys, observations, tests, interviews, and so on.
  • Primary research : The collection of data that does not already exist.
  • Primary sampling units (PSUs) : The units selected at the first stage of a multistage sample design (e.g., schools), within which smaller sampling units (e.g., classrooms, then students) are subsequently drawn.
  • Primary source : Firsthand information, such as the testimony of an eyewitness, an original document, a relic, or a description of a study written by the person who conducted it.
  • Privacy : The ethical imperative for research participants to have their personal information protected. This is required by federal law when conducting research on human participants.
  • Probability : The relative frequency with which a particular event occurs among all events of interest.
  • Probability Proportionate to Size (PPS) sampling : A sampling technique where the probability that a particular sampling unit will be chosen in the sample is proportional to some known variable, such as population size or geographic size.
  • Probe question : A follow-up question asked during an interview to elicit fuller, more specific responses from a participant.
  • Problem statement : A statement that indicates the specific purpose of the research, its questions or hypotheses, the variables of interest to the researcher, and any specific relationships between those variables that is to be, or was, investigated. It usually also includes a description of the background and rationale (justification) for the study. Well-written research includes these at the very onset of the study.
  • Procedures : A detailed description by the researcher of what was (or will be) done in carrying out a study or administering study or research protocols.
  • Process evaluation : This is called a level one evaluation, typically required for evaluation and demonstration projects, conducted to understand what was learned during the implementation of the project. These evaluations seek to answer what happened to whom and how in the program. They inform others about what they might expect if they were to launch a similar project. They typically include program descriptions, program monitoring, and quality assurance. They are typically used in program evaluation studies.
  • Process recording : A method in which interview content is recorded by creating a typed or written record of all communication both verbal and nonverbal, in addition to a record of the participant's and researcher's feelings and reflections throughout the interview.
  • Professional culture : A specific collection of values and norms shared by people and groups in organizations that control the way they interact with each other and with other stakeholders outside the organization.
  • Professional journal : An academic periodical that presents in-depth, original research in a specific field. Its articles have been peer reviewed by other scholars in the field for scientific standards and validity. Professional journals may also contain professional or industry-related news, updates, or book reviews.
  • Professional judgment : The mechanism used to decide how to act in a work environment. Normally, these are expert collective opinions of persons trained in specific fields or disciplines.
  • Program : A large or small defined intervention or purposive activity intended to achieve specific goals with particular target populations of individuals.
  • Program activity : An intervention or the work done in any program that produces outputs and outcomes. It is typically used in program evaluation studies.
  • Program drift : When a program or its goals, interventions, or processes deviate over time from their original intent.
  • Program evaluation : This assesses the impact of specific programs on individuals by determining their goals, objectives, activities, outputs, and outcomes.
  • Program implementation : This is the process of converting program inputs into specific activities needed to produce outputs. It is typically used in program evaluation studies.
  • Program input : Any resource used by the program's activities to produce outputs. It is typically used in program evaluation studies.
  • Program objective : This is a statement of intended outcomes of programs, written so that they specify the target group, the size and direction of expected change, and the time frame for achieving results, written in a measurable form. It is typically used in program evaluation studies.
  • Program outcome : This is the result that occurs in a program, that which it is designed to achieve. It is typically used in program evaluation studies.
  • Program output : This is a targeted goal of the program that shows its effectiveness. It is achieved by the program activities and is typically used in program evaluation studies.
  • Program process : An activity offered at any phase of a program that produces program outputs. It is typically used in program evaluation studies.
  • Program rationale : This is the reason why the program or intervention was implemented or funded. It typically aligns itself with the priorities of the particular governing or funding body that sponsored it. It is typically used in program evaluation studies.
  • Projective device : An instrument that includes vague stimuli that participants are asked to interpret. There are no correct answers or replies. For instance, in the famous Rorschach test, participants are asked to say what the inkblot picture shown to them seems to resemble.
  • Promising practices : This is a new strengths-based term applied to best practices.
  • Prompt question : A specific question used by a researcher to obtain a particular response from a participant during an interview. Prompt questions help keep the interview on track. Sometimes called cued questions, they are used frequently in qualitative studies.
  • Proofing research reports : Editing and amending the various documents assembled as a research document.
  • Proportionate sampling : A technique of sampling from a population that uses a sampling fraction in each of the strata (smaller population or group), which is proportional to that of the total population.
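A sketch of applying one sampling fraction across every stratum, as the entry describes (the strata and their sizes are invented):

```python
# Proportionate (stratified) sampling: the same 10% fraction is applied
# to each stratum, so each group's share of the sample mirrors its share
# of the population.
strata = {"freshmen": 400, "sophomores": 300, "juniors": 200, "seniors": 100}
fraction = 0.10

sample_sizes = {name: round(size * fraction) for name, size in strata.items()}
print(sample_sizes)  # {'freshmen': 40, 'sophomores': 30, 'juniors': 20, 'seniors': 10}
```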
  • Proposal : A detailed, short description of a proposed research study designed and formatted in a specific way to investigate a given problem.
  • Prospective research : A research strategy that follows people forward in time to examine the relationship between one set of variables and later occurrences; for example, a researcher can identify risk factors for diseases that develop at a later point in time.
  • Protocol : A plan or set of defined procedural steps to be followed in a study. This enhances uniformity of studies, methods, processes, or interventions.
  • Proximal similarity : The degree of similarity between the elements of a study and the external elements, where the researcher plans to apply the results of a study.
  • Pseudoscience : A belief, practice, or methodology that resembles science but lacks proper credibility, checkability, methodology, or supporting evidence.
  • Psychometric properties, instruments : The two key traditional concepts in classical test theory are reliability and validity. A reliable measure assesses something consistently, while a valid measure targets what it is supposed to measure. A reliable measure may be consistent without necessarily being valid; for example, a broken ruler may undermeasure a quantity by the same amount each time (consistently), but the resulting quantity is still wrong or invalid.
  • Publication : A written work that is reviewed by a scientific peer review panel, printed, and distributed to the public. Publications may range from professional newsletters, to technical reports, to scientific journals, and so on.
  • Publication bias : A tendency, on average, to produce results that appear significant because negative or near neutral results are not published as frequently.
  • Pure research study : A study conducted without any intended practical end, which instead is conducted to explore an idea or to create a theory. It functions to advance knowledge and generate new ideas.
  • Purpose of a study : A specific statement by a researcher of what he or she intends to accomplish. It can be specified as an explicit statement, research question, study hypothesis, or study objective.
  • Purposive sample : A nonrandom sample strategically selected by the researcher because prior knowledge suggests it is representative or because those selected have the needed information. This is a nonprobability-based sample.
  • Qualitative research : The systematic, firsthand observation of real-world phenomena. It has minimally three qualities: a focus on natural inquiry, reliance on the researcher as the instrument of data collection, and a report style focusing more on narrative data than on numbers; examples include ethnography, phenomenology, case studies, and so on.
  • Qualitative variable : A variable conceptualized and analyzed in qualitative studies. Typically, these variables have distinct categories, such as major or minor, and no real continuum is applied.
  • Quality assurance : A planned and systematic process that provides confidence in a product's or program's ability to serve its intended purpose. It is often used in evaluation research.
  • Quantification of variables : Measuring observations and experiences in numerical or empirical indicators, for example, 1 = female and 2 = male.
  • Quantitative data : Numerical data that either differ in amount or degree or are along a continuum from less to more.
  • Quantitative-descriptive design : A design that describes and quantifies variables and relates them to each other so as to show which variables impact others and why.
  • Quantitative research : Research that systematically explores, describes, or tests variables in numerical or statistical form.
  • Quantitative variable : A variable that is conceptualized and analyzed in numerical or statistical form. These variables are formed by using the hierarchical measurement scale of nominal, ordinal, interval, and ratio.
  • Quartile : In descriptive statistics, a quartile is any of the three values that divide the sorted data set into four equal parts, so that each part represents one fourth of the sampled population. The first quartile (designated Q1) is the lowest and cuts off the lowest 25% of the data (the 25th percentile); the second quartile (Q2), or the median, cuts the data set in half (the 50th percentile); and the third quartile (Q3) cuts off the highest 25% of the data, or the lowest 75% (the 75th percentile).
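The three cut points can be obtained from the standard library's `statistics.quantiles` (the data set here is invented; the "inclusive" method treats the data as the full population):

```python
from statistics import quantiles

data = [2, 4, 4, 5, 6, 7, 8, 9, 10, 11, 12]  # invented, already sorted
q1, q2, q3 = quantiles(data, n=4, method="inclusive")
print(q1, q2, q3)  # 4.5 7.0 9.5 -- Q2 matches the median of the data
```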
  • Quasi-experimental design : A type of research design in which the treatment and control or comparison groups are not created using random assignment procedures. It does involve the manipulation of an independent variable and the specification of a test hypothesis.
  • Question : An expression of inquiry that invites or calls for a reply. There are fixed choices, or closed-ended questions, such as, “Do you like milk, yes or no?” and open-ended questions, such as, “Tell me what you think about milk.”
  • Questionnaire : A spoken, written, or printed form used in gathering information on some participant or participants, consisting of a set of questions that assesses a phenomenon for research purposes.
  • Quota sampling : When the selection of the group of participants to be studied is made by the interviewer, who has specified quotas to fill from specified subgroups of the population. It can be either a probability- or nonprobability-based sample.
  • Random assignment : The process of assigning individuals or groups randomly to different treatment conditions for research purposes. This technique is a defining feature of true experiments.
  • Random digit dialing : A method for selecting people for involvement in telephone-conducted surveys by generating telephone numbers at random.
  • Random error : Problems in data or measurement caused by unknown and unpredictable changes in the experiment or participants. Confidence intervals ( CIs ) and p -values allow for the existence of random error but not systematic errors (bias).
  • Random numbers table : A specific table of numbers that provides a scientific means for random selection or random assignment of participants to a research study. These are usually appended in statistics textbooks.
  • Random sample, simple : A sample selected in such a way that every member of the population has an equal chance of being selected.
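A minimal sketch of drawing such a sample with the standard library (the numbered population is invented; the seed is fixed only for repeatability):

```python
import random

population = list(range(1, 101))  # e.g., 100 numbered participants
random.seed(42)                   # fixed seed so the draw is repeatable
sample = random.sample(population, k=10)  # each member equally likely
print(len(sample), len(set(sample)))  # 10 10: ten distinct members drawn
```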
  • Randomization, random allocation : A method analogous to tossing a coin to assign clients to treatment groups. The experimental treatment is assigned if the coin lands heads, and a conventional, control, or placebo treatment is given if the coin lands tails.
  • Randomized controlled trial (RCT) : An outcome study wherein participants are randomly allocated to an experimental group or a control or comparison group and followed over time on the variables or outcomes of interest. RCTs are capable of high levels of internal validity.
  • Randomized cross-sectional survey design : Research that obtains data from a particular population at a single point in time by randomly sampling all members of that population.
  • Randomized one-group posttest-only design : A preexperimental design in which a randomly selected group of participants receives an intervention and then is tested afterward to determine their status; for example, a random selection of lawyers who completed a bar examination preparation class are assessed in terms of their overall pass rates.
  • Randomized study : A study in which group members are chosen from a list of possible participants in a way that ensures all members of the group have an equal chance of being selected.
  • Range ( Ra ) : The difference between the highest and lowest scores in a distribution; a main measure of variability.
  • Rank-ordering : Creating a logical relationship between a set of items such that, for any two items, the first is ranked higher than, ranked lower than, or ranked equal to the second. By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to evaluate complex information according to defined criteria.
  • Rapport : A harmonious interpersonal connection between two people. This is often important in conducting individual or group interviews with research participants.
  • Rasch analysis : A method for constructing a linear system from observed counts and categorical responses, within which items and subjects can be measured unambiguously.
  • Rate : This refers to how fast or frequent an event occurs, usually expressed with respect to time; for example, the mortality rate might be the number of deaths per year per 100,000 people.
  • Ratio measurement : A level of measurement with fixed ratios between units and with an absolute zero that is meaningful. For instance, savings is a ratio measurement, as zero savings means no savings. This is the highest level of measurement on the four-level scale, above nominal, ordinal, and interval. All the properties of the three lower level scales are also in this highest level scale.
  • Ratio scale : The highest measurement scale that, in addition to being an interval scale, also has an absolute zero in the scale.
  • Ratio variable : A measurement where the difference between two values is meaningful and in which there is a clear definition of zero. When the variable equals 0, there is the absence of that variable. Variables such as savings, pounds lost, or height gained are ratio variables.
  • Rationale : A logical statement of the reasons for conducting the study or taking a particular research approach. The rationale for the study is normally expressed in its introduction subsection. It is sometimes called the story behind the study.
  • Raw data : Data not yet processed for meaningful research or statistical use.
  • Raw score : The actual true score attained by an individual on the items in a test, inventory, scale, or other instrument.
  • Reactive arrangement or effect, threat : A way of conducting a study that may affect what is being studied. The very act of data collection can affect the items on which the researcher wants to collect data. For example, participants may tend to improve their performance just because they know they are part of an experiment. This may be a threat to the internal validity of a study.
  • Reactivity, threat : See reactive arrangement or effect, threat.
  • Real benefits : These represent the program benefits viewed as net gains to participants, communities, neighborhoods, or society. They are typically used in program evaluation studies.
  • Reality : The state of things as they actually exist and can be observed. Individuals may hold differing views of the same reality.
  • Record-keeping system : A systematic way of writing down or compiling data from a research study.
  • Recording error : A problem in the way data is written down, collected, transcribed, or stored.
  • Recording of data : Audiotaping, videotaping, writing down, or storing information from a research study.
  • Recruitment of participants : Finding ethical ways to obtain voluntary involvement from people for participation in a research study, while safeguarding their rights to participate or not participate.
  • Recursive causal model : A causal model that specifies one-way causal relationships among variables, for example, smoking and lung cancer.
  • Reductionism : This refers to breaking down components into logical and simple units so they may be described and studied by researchers.
  • Referee, external : An expert in a field who reviews a research manuscript and decides whether it will be published in a book or an academic journal. The review is normally conducted by an editorial review panel consisting of the author's peers in the field and done through a blind review system.
  • Reference : A source used to write an article, chapter, research report, or book. In the research literature, references are usually written in a particular style. For social science research, the style is almost always that published by the American Psychological Association (APA).
  • Reflective journal writing : The process of writing thoughts and emotions to sort through experiences in ways that emphasize metacognition, or thinking about how one thinks and feels. Research participants may be asked to keep such journals in a particular study. Sometimes referred to as reflexivity, this process is often used as a qualitative research study technique.
  • Regimen : A treatment plan (usually medical) specifying the dosage, schedule, and duration of the treatment or intervention.
  • Regression analysis : A general statistical technique that analyzes the relationships between a dependent (criterion) variable and a set of independent (predictor) variables. It seeks to determine what predictor or set of predictors influence the dependent variables.
  • Regression line : The line of so-called best fit for a set of scores plotted on coordinate axes or on a scatterplot. Using this line shows how far scores vary from each other and also allows one to see a visual pattern of the scores on the x-y axes.
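The "best fit" line minimizes the squared vertical distances of the scores from the line; a least-squares sketch with invented scores, standard library only:

```python
from statistics import mean

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # invented scores, roughly linear in x

mx, my = mean(x), mean(y)
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx
# Reading a predicted score off the regression line for x = 6:
print(round(slope * 6 + intercept, 1))  # 12.0
```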
  • Regression, threat : The possibility that results are due to a tendency for groups, selected on the basis of extreme scores, to regress toward more average scores on subsequent measurements, regardless of the experimental treatment given. This may be a threat to the internal validity of the findings of a study.
  • Regression to the mean, threat : A statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. This may be a threat to the internal validity of the findings of a study.
  • Relationship question : A question that explores a participant's connections to other individuals.
  • Relative benefit increase (RBI) : The proportional increase in rates of good outcomes between experimental (EER) and control participants (CER) in a trial, calculated as [EER – CER]/CER and accompanied by a 95% confidence interval (CI).
  • Relative risk increase (RRI) : The proportional increase in rates of bad outcomes between experimental (EER) and control participants (CER) in a trial, calculated as [EER – CER]/CER and accompanied by a 95% confidence interval (CI).
  • Relative risk reduction (RRR) : The proportional reduction in rates of bad outcomes between experimental (EER) and control participants (CER) in a trial, calculated as [EER – CER]/CER and accompanied by a 95% confidence interval (CI).
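The shared formula in the RBI/RRI/RRR entries, [EER − CER]/CER, worked through with illustrative event rates (a negative result signals a proportional reduction in bad outcomes):

```python
EER = 0.10  # experimental event rate: 10% bad outcomes (illustrative)
CER = 0.25  # control event rate: 25% bad outcomes (illustrative)

rrr = (EER - CER) / CER
print(round(rrr, 2))  # -0.6: the treatment cut bad outcomes by 60%
```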
  • Relevance : The applicability of research to some real-world problem. The answer to the questions “So what?” and “Who can benefit from this study?” highlight the importance of this issue.
  • Reliability : The degree to which scores obtained by an instrument are repeated and consistent measures of whatever the scale, instrument, or inventory purports to measure. One assesses the reliability of an instrument, scale, or inventory to standardize the instrument and to scrutinize its psychometric properties.
  • Reliability coefficient : An index of the consistency of scores repeated using the same instrument. There are several methods of computing reliability coefficients depending on the type of consistency and characteristics of the instrument, for example, interrater reliability, split-half reliability, coefficient alpha, and so on.
  • Reliability measures : Systematic actions, such as taking several or multiple measurements on the same participants, intended to make sure that research measurements are being consistently reproduced.
  • Replacement sampling : A statistical term meaning that participants are chosen and then put back into the sample, so each unit is replaced before the next is sampled. This means that, ideally, one outcome does not affect the other outcomes. In a finite population, this distinction is important, although in an infinite population, it is not.
  • Replication : This refers to conducting a measurement, experiment, or study again; the second may be a repetition of the original study using different participants, or specified aspects of the study may be changed. If the study is repeated and produces the same findings, this enhances the validity and generalizability of the findings.
  • Representative sample : A group being studied who are measured to have demographic characteristics (e.g., sex, age, race, etc.) that match the population at large. For instance, if females make up 51% of a population, a representative sample would also have this percentage of females.
  • Representativeness : The extent to which a sample is identical, in all characteristics, to the intended population. One has to inspect the selected sample and compare it with the population to determine this feature, not just assume it mirrors the population.
  • Research : The systematic investigation of a phenomenon. It is the process of searching, investigating, and discovering facts by using the scientific method. All research studies include, minimally, a specified purpose, a rationale for study, a specified method, analysis of data, and a conclusion.
  • Research committee : A group of selected experts who guide a researcher's work. These can be a planning committee, steering committee, or thesis committee.
  • Research consumer : A person who reads and applies research. Consumers are expected to appraise research methods and findings for their relevance.
  • Research design : The way that a research project is conducted. The main question answered here is, “What happened during this study?” This is one part of its overall method.
  • Research findings : The results of a research study.
  • Research hypothesis : A prediction of study outcomes. It is often a statement of the expected relationship between two or more variables based on testable assumptions, written in a directional or plausible form.
  • Research method : The main approach that will be taken to conduct the study, such as exploratory, experimental, descriptive, case studies, quantitative, qualitative, mixed methods, and so on.
  • Research participant : A person who is a participant in a study.
  • Research problem : A question that the researcher seeks to answer or a situation that a researcher wants to find out more about through a research study. The research problem is usually specified in the introduction of the research report and is often part of the rationale for conducting the study.
  • Research process : The progression of a research project: identification of the research problem, conducting the literature review, designing the study, gathering and analyzing the data, and publishing the results. This term is synonymous with the scientific method, which formally and minimally includes a statement of problem, literature review, method, results and findings, conclusions, limitations, and implications.
  • Research proposal : A detailed, short-form description of a planned study designed to investigate a given topic of study.
  • Research question : A situation that a researcher wants to find out more about through a research study. This is also a form of specifying the statement of purpose.
  • Research report : The final write-up of how a study was conducted, including these main subsections: abstract, introduction, method, and findings.
  • Researcher bias, threat : When the investigator conducting the study affects the results through his or her own preconception of what the study will find. This may be a threat to the internal validity of a study.
  • Researcher-practitioner model : A phrase used in the helping professions to describe professionals who base their practice decisions on empirical data and evaluate the outcomes of their own practices.
  • Respondent : A participant who fills out a survey or otherwise provides information to a researcher to study.
  • Respondent bias, threat : A type of response bias that can affect the results of a statistical survey if respondents answer questions in the way they think the questioner wants them to answer, rather than according to their true beliefs. This is also called subject expectancy. This may be a threat to the internal validity of the findings of a study.
  • Respondent-driven sampling : A technique for obtaining a sample that combines snowball sampling with a mathematical model that weights the sample to compensate for the fact that the sample was collected in a nonrandom way. This is a form of nonprobability-based sampling.
  • Response bias set, threat : A type of response bias, such as acquiescence response error or social desirability response, in which a respondent replies to items in a multiple-choice questionnaire by choosing or avoiding certain response categories for reasons related or unrelated to their content or meaning. This may be a threat to the internal validity of a study's findings.
  • Response category : One of the alternatives from which a study participant must select in responding to a closed question, for example, true or false.
  • Response rate : The number of participants solicited for information who actually provided data. Oftentimes, the response rate is less than the actual number of respondents selected for a variety of reasons.
  • Results-based management : A philosophy of management that emphasizes the importance of intentional program or organizational results in managing the organization, its programs, and its people. It is typically used in program evaluation studies.
  • Results of a study : The findings presented offer the analysis of the data collected and generally include tables and graphs when appropriate.
  • Retrospective : Looking back at events that have already taken place.
  • Retrospective interview : A form of interview technique in which the researcher tries to get a respondent to reconstruct past experiences, for example, “Share what it was like when you were a teenager.” It is often used in qualitative research studies.
  • Risk ratio : The ratio of risk in the treated group (EER) to the risk in the control group (CER). This is used in randomized trials and cohort studies and is calculated as EER/CER. It is also called relative risk.
  • Rival hypothesis : This is an extraneous hypothesis that challenges the main independent–dependent variable relationship. For instance, a score change in a study of student anxiety could be attributed to students having had a good night's sleep rather than to the intervention being tested.
  • Sample : The selected subgroup of the population for the study.
  • Sample distribution : The actual distribution resulting from the collection of data. A major characteristic of a sample is that it contains a finite (countable) number of scores, the number of scores represented by the letter N. These numbers constitute a sample distribution.
  • Sample size : The actual number of people or units in the sample.
  • Sampling : The process of selecting a number of individuals (a sample) from a population scientifically in such a way that the individuals are typically representative of the larger group from which they were selected. These can be nonprobability- or probability-based samples.
  • Sampling bias : This occurs when there is either an intended or unintended disproportion in the sample for the study. It is intended when researchers try to include individuals with certain characteristics to ensure they are in the study. It is unintended when, sometimes by chance, individuals are selected who do not reflect the population as a whole. Researchers must inspect their samples after data collection to assess the extent of such bias.
  • Sampling distribution : The set of values that one would obtain if one drew an infinite number of random samples from a given population and calculated the statistic on each sample. In doing so, all samples must be of the same size ( n ).
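The sampling distribution can be made concrete by simulation: draw many same-size random samples, compute the statistic on each, and examine the spread of the results. A small sketch (example values are ours) for the sampling distribution of the mean:

```python
import random
from statistics import mean, pstdev

random.seed(0)  # reproducible illustration

# Draw many samples of the same size n from Uniform(0, 1) and collect
# each sample's mean; their spread approximates the sampling distribution.
n = 25
sample_means = [mean(random.random() for _ in range(n)) for _ in range(10_000)]

observed_se = pstdev(sample_means)
expected_se = (1 / 12) ** 0.5 / n ** 0.5   # sigma / sqrt(n) for Uniform(0, 1)
print(round(observed_se, 3), round(expected_se, 3))
```

The standard deviation of the simulated means closely matches the theoretical standard error, sigma divided by the square root of n.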
  • Sampling error : Expected chance variation in sample statistics that occurs when successive samples are selected from a population.
  • Sampling frame : The defined population from which a group of participants have been selected. For instance, if a researcher was studying students at a university, a list of those who attended that university would serve as the population sampling frame.
  • Sampling interval : The distance in a list between individuals chosen when selecting participants (sampling) systematically; for example, for every fourth person who enters the library, four is the interval used for sample selection.
  • Sampling ratio : The proportion of individuals in the population selected for the group being studied (sample) in systematic sampling, for example, 25% of all persons who attend the tailgate at a football game.
  • Sampling strategy : This is how a sample is selected for research or evaluation purposes. Two main strategies are probability and nonprobability samples.
  • Saturation : A mode of research immersion in which a researcher completely surrounds himself or herself with the participant and subject matter. This is one of the main qualitative techniques used. It also refers to the stage in data collection when new information merely replicates previously obtained data.
  • Scale : A scheme, inventory, rating form, or device by which some property, attribute, or behavior may be universally measured.
  • Scatterplot : The arrangement of points determined by the cross-tabulation of scores on coordinate axes ( x and y ) on a graph and used to represent and illustrate the relationship between two quantitative variables.
  • Scientific method : A way of exploring questions that includes, minimally, the following principles: a driving curiosity, systematic observations, a systematic method, logical inquiry, and objectivity. These are referred to also as the scientific tenets of research and evaluation.
  • Scientism : The view that scientific methods are capable of providing answers to all possible areas of human affairs. Scientism is usually repudiated by most social and behavioral scientists in recognition that science has little to offer when issues pertain to values and ethical questions.
  • Scope : The range, from high to low, in which a variable can be expressed or measured. For instance, if one is assessing teeth brushing behavior in young adults in the United States, one would choose a sample of such young adults instead of the population at large, limiting the scope of the project.
  • Scores : Actual numbers or results of a test or study.
  • Screening question : An inquiry that helps the researcher to see if a participant is suitable for a study. For instance, if one is seeking only people aged 18 to 21, then asking about their age before the study is a screening question to include only those participants.
  • Search engine : An online program used to find information by sorting through large databases, for example, ProQuest, JSTOR, and Google.
  • Search phrase : A word or group of words used to try to find information on a particular topic, either in text or in electronic media, for example, in a search engine.
  • Search term : A key word from an article that often is listed in the abstract or used when searching for information.
  • Second-level coding : In qualitative data analysis, this involves describing what first-level coding themes, categories, or ideas mean, producing detailed examples from the transcript to back up each interpretation. First-level coding is a combination of identifying meaning units, fitting them into larger themes or categories, and assigning codes to them. In second-level coding, then, these codes are further reorganized and modified or subgrouped for subthemes.
  • Secondary account : Using information that comments on an event rather than records of the event itself collected in formal ways (e.g., clinical files) and less formal ways (unpublished reports). This is one of the main qualitative techniques used.
  • Secondary analysis : Any systematic examination of an existing data set that presents interpretations, conclusions, or knowledge additional to, or different from, those presented in the first report on the data collection and its results.
  • Secondary data : Information that was initially collected and possibly processed by people other than the researcher conducting the study. Common sources of secondary data for social science include census reports, large surveys, or organizational records.
  • Secondary research : Studies conducted using information that was initially collected and possibly processed by people other than the researcher conducting the study, for example, a meta-analysis of a group of studies examining a particular intervention.
  • Secondary source : Information received from someone who was not reporting firsthand, such as a description of historical events by someone not present when the event occurred.
  • Secondhand data : Information that is received, taken from someone who did not have any firsthand experience of an event.
  • Selection bias, threat : This occurs with differential selection of participants for comparison groups. Score differences, consequently, can be attributed to pretreatment differences among groups. This may be a threat to the internal validity of the findings of a study.
  • Selection-maturation interaction, threat : When participant-related variables (selection threat) and time-related variables (maturation) interact. This may be a threat to the internal validity of the findings of a study.
  • Selection-treatment interaction, threat : The possibility that some characteristic of the participants chosen for the study interacts with some aspect of the intervention being studied. This may be a threat to the external validity of a study's findings.
  • Self-administered instrument : A clinical tool that study participants use by themselves, rather than the data collection being overseen by a researcher. For instance, a study participant might be asked to watch a video and report in confidence about it.
  • Self-administered questionnaire : A survey or form with questions that participants are asked to fill out on their own without the oversight of the researcher.
  • Self-censorship : The tendency of a research participant to avoid speaking freely out of fear that there will be some consequence for doing so. This is especially significant in studies that research behaviors, such as drug use, that are illegal.
  • Self-interest : The tendency for people to act in ways that will be of benefit to themselves in a study.
  • Self-observer : A research participant who describes his or her own experiences and perspectives rather than being watched by someone else, like the researcher.
  • Self-report : Information provided to the researcher from a research participant, which the participant obtained by observing and describing his or her own experiences rather than having someone else, such as the researcher, describe the behaviors.
  • Semantic differential scale : A type of rating scale usually having five to seven points, from high to low, designed to measure the connotative meaning of objects, events, and concepts. The connotations are used to derive the attitude toward the given object, event, or concept and are rated by the participant.
  • Semistructured interview : When a research participant is asked a series of mainly open-ended or prompt questions that he or she is free to talk about at some length.
  • Sensitivity analysis : A process of calculating the benefits and costs using a range of possible discount rates to see what the maximum and minimum net present benefit is for a project, program, or policy. It is typically used in program evaluation studies.
  • Sensitivity, cultural : The ability of a researcher to be aware and respectful of possible cultural differences in the study and to adapt the design to them.
  • Sensory question : A specific question asked by a researcher to find out what a person has seen, heard, tasted, touched, smelled, or experienced through his or her own senses.
  • Serendipitous information : Data that a researcher learns by chance during the course of a study that ends up being helpful to the study, although the researcher had not known that it would be when the study was designed.
  • Setting : The actual physical place where a study takes place. A study could take place in a laboratory, or it could be in situ, or in the natural environment where whatever is being studied occurs.
  • Sharing knowledge : Disseminating, presenting, or publishing information or making databases available to other individuals and researchers so that research can build on other findings.
  • Sharing of findings : An ethical responsibility to offer information that has been learned from the scientific and practice community, usually in the form of an article or presentation so that the research can be verified, corroborated, refuted, or challenged by more studies to advance knowledge.
  • Side effect : Any positive or negative unintended result of an intervention. Side effects are most commonly associated with drug testing studies.
  • Sieving procedures : Specific, detailed actions taken to systematically sort through information for research purposes; for example, every case file with a violent episode could be selected from all cases at an agency.
  • Simple frequency distribution : The number of times that a score actually appears in a data set. This is often depicted on a chart, table, or graph.
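Tallying a simple frequency distribution is a one-liner in most languages; a Python sketch with made-up scores:

```python
from collections import Counter

# A simple frequency distribution: how many times each score appears.
scores = [3, 5, 5, 2, 3, 5, 4]
freq = Counter(scores)
for score in sorted(freq):
    print(score, freq[score])   # one row per score, as in a frequency table
```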
  • Simple random sample : A sample drawn in such a way that every item in the population has an equal and independent chance of being selected.
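In practice a simple random sample is drawn by sampling without replacement from a numbered frame, so that every unit has an equal and independent chance of selection. An illustrative sketch (the frame here is invented):

```python
import random

random.seed(42)  # reproducible illustration

# random.sample draws without replacement, giving every member of the
# population an equal chance of selection -- a simple random sample.
population = list(range(1, 101))   # e.g., 100 numbered case files
sample = random.sample(population, k=10)
print(sorted(sample))
```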
  • Simulation : Research in which an artificial situation is created and participants are told what activities they are to engage in.
  • Single system research (SSR) : A study conducted with one person, family, group, or system to explore the results of an intervention targeting specific outcomes. Typically repeated measures of client functioning are taken prior to intervention, during intervention, and perhaps after intervention is discontinued. These designs are a form of idiographic research—studies involving small numbers of participants—as opposed to nomothetic research, involving large numbers of participants. It is also known as single-participant or N = 1 research.
  • Skeptical curiosity : A state of mind in which a researcher thinks critically about claims made about reality but also continues to study research questions and to critique and examine new research as it becomes available.
  • Skewed distribution : A nonsymmetrical dispersion of scores in which there are more extreme scores at one end of the spectrum than the other (not a normal bell-shaped curve).
  • Skip pattern, questionnaire : Questionnaire logic designed to direct respondents to questions based on previous answers.
  • SnNout : When a sign, test, or symptom has a high Sensitivity, a Negative result can help rule out the diagnosis. For example, the sensitivity of a history of ankle swelling for diagnosing ascites (accumulated abdominal fluid) is 93%; therefore, if a person does not have a history of ankle swelling, it is highly unlikely that the person has ascites.
  • Snowball sampling : Asking current research participants to ask other people whom they know to participate in a study so that what is at first a small group of participants becomes a larger one. This is a form of nonprobability-based sampling.
  • Social desirability : The tendency of people to answer questions in ways that are typically acceptable in a particular culture. This will generally take the form of overreporting good behavior and underreporting bad behavior.
  • Social service program : Organized work in health, education, and human service organizations intended to better the conditions of a community.
  • Socioeconomic status : A concept that implies a combination of at least two dimensions—social and economic. The economic dimension is often represented by money or wealth as reflected in employment income, home ownership, and other financial assets (e.g., pension plans, savings, property ownership). The social dimension incorporates education, occupational prestige, authority, and community standing. It is often a background and independent variable in studies that can be subgrouped for analysis.
  • Software program : A specific package that assists in data collection, analysis, reduction, or presentation of research studies.
  • Solomon four-group design : A research design that involves random assignment of participants to one of four groups. Two groups are pretested, two are not, one of the pretested groups and one of the unpretested groups receive the experimental treatment, and all four groups are post-tested. This is considered an experimental design and is useful in controlling for the effects of testing.
  • Source of evidence : A place where information to support a claim comes from. For instance, one could use studies published in journal articles, testimony from experts, or information from newspapers to prove or disprove a point.
  • Specificity : The proportion of people without the target disorder who have a negative test. It is used to assist in assessing and selecting a diagnostic test, sign, or symptom.
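Sensitivity and specificity both come straight from the counts in a 2×2 diagnostic table. A minimal sketch with invented counts:

```python
# Counts from a hypothetical 2x2 diagnostic table (numbers are made up):
#                 disorder present   disorder absent
# test positive        tp = 90            fp = 20
# test negative        fn = 10            tn = 80

def sensitivity(tp, fn):
    """Proportion of people WITH the disorder who test positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of people WITHOUT the disorder who test negative."""
    return tn / (tn + fp)

print(sensitivity(tp=90, fn=10))  # 0.9
print(specificity(tn=80, fp=20))  # 0.8
```

High sensitivity underlies the SnNout rule above, and high specificity underlies SpPin below.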
  • Specificity of variables, threat : This refers to how well the intervention or condition being studied is defined. Researchers operationalize variables in specific ways in studies. Because these variables will be operationalized differently in the population (e.g., different times of day will be used, different materials will be used, different buildings will be used), it is impossible to be certain that the way variables in a single study are used can be generalized to other ecological situations (i.e., to other combinations of time, place, and materials). This may be a threat to the external validity of a study's findings.
  • Split-ballot design : A way of arranging a study in which the sample is randomly split up into two or more groups, and each group is confronted with different forms of the question. This makes it possible to compare the response distributions of the different requests across their forms and to assess their possible relative biases.
  • Split-half reliability : A technique for estimating the internal consistency of an instrument in which two equal sets of scores obtained from the same test are correlated: either one set consisting of the odd items and the other of the even items, or the top half of the test items versus the bottom half. This is a test that standardizes a measuring instrument.
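An odd–even split-half estimate, stepped up to full-test length with the Spearman–Brown correction, can be sketched as follows (function names and data are ours):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from deviation scores."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    dx = [v - mx for v in x]
    dy = [v - my for v in y]
    return sum(a * b for a, b in zip(dx, dy)) / sqrt(
        sum(a * a for a in dx) * sum(b * b for b in dy))

def split_half_reliability(rows):
    """Correlate odd-item and even-item half-test totals, then apply
    the Spearman-Brown correction: 2r / (1 + r)."""
    odd = [sum(r[0::2]) for r in rows]   # totals over items 1, 3, 5, ...
    even = [sum(r[1::2]) for r in rows]  # totals over items 2, 4, 6, ...
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)

# Four-item test, three respondents; the halves agree perfectly, so r = 1.0.
print(split_half_reliability([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]))  # 1.0
```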
  • Spoon-feeding : Asking a question in a deliberate way so that it implies a particular answer, for example, “You are not really angry—correct?”
  • Spot-check record : Information gathered and written down that documents what happened in a particular place at various points in time.
  • SpPin : When a sign, test, or symptom has a high Specificity, a Positive result rules in the diagnosis. For example, the specificity of a fluid wave for diagnosing ascites (abdominal fluid) is 93%; therefore, if a person does have a fluid wave, it rules in the diagnosis of ascites.
  • Spread, data : A descriptive statistic that is a measure of variability; it is the number of points between the highest and lowest scores in a study. It is also called the range (Ra).
  • Spuriousness : Incorrectly claiming a defined relationship between two variables when the relationship is simply accidental.
  • Stability of scores : The extent to which scores are reliable and consistent or repeated over time.
  • Stakeholder : A person or organization who has a vested interest in a study.
  • Standard deviation ( SD ) : The most stable measure of variability, it takes into account each and every score in a distribution. This descriptive statistic assesses how far individual scores typically vary, in standard units, from the mean of the distribution.
  • Standard error ( SE ) : The standard deviation of the sampling distribution of a statistic. It is a measure of the variation in the sample statistic over all possible samples of the same size. It decreases as the sample size increases, because larger samples more closely resemble the population.
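For the sample mean, the standard error is simply the sample standard deviation divided by the square root of the sample size. A short sketch with invented scores:

```python
from math import sqrt
from statistics import stdev

# SE of the mean = SD / sqrt(n): the SD describes the spread of individual
# scores, while the SE describes the expected spread of sample means.
scores = [2, 4, 4, 4, 5, 5, 7, 9]
sd = stdev(scores)            # sample standard deviation
se = sd / sqrt(len(scores))   # standard error of the mean
print(round(sd, 3), round(se, 3))
```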
  • Standard error of a statistic ( SES ) : The standard deviation of the sampling distribution of a statistic.
  • Standard error of estimate ( SEE ) : An estimate of the size of the error to be expected in predicting a criterion score.
  • Standard error of measurement ( SEM ) : An estimate of the size or magnitude of the error that one can expect in an individual's score.
  • Standard error of the difference ( SED ) : The standard deviation of a distribution of differences between different sample means.
  • Standard error of the mean ( SEM ) : The standard deviation of sample means, which indicates by how much these means can be expected to differ if other samples from the same population are used.
  • Standard score : A derived score that expresses how far a given raw score is from the mean in terms of standard deviation units on the normal curve. Standard scores are typically expressed as z -scores or T -scores.
  • Standardization of variables : Transforming the characteristic being studied to a comparable metric with a known unit, known mean, or known standard deviation. This is especially important if what is being studied is not clearly standardized, as temperature, for instance, would be. There are four usual ways of standardizing: P -standardization with percentile scores, Z -standardization with z -scores, T -scores, and D -standardization, which dichotomizes a variable.
  • Standardized measuring instruments : Instruments, such as scales, that have been statistically scrutinized, that provide for uniform administration and scoring, and that generate normative data against which later results can be evaluated. The instrument is scrutinized as to how reliable and valid it is.
  • Standardized research procedure : A process that carefully specifies how a study is to be conducted.
  • Static-group comparison design : A quasi-experimental research design that involves at least two nonequivalent groups; one receives a treatment, and both are posttested.
  • Static-group pretest-posttest design : The same as the static-group comparison design, except that both groups are pretested.
  • Statistic : A numerical index describing a characteristic of a sample or used to analyze data, for example, mean, standard deviation, and so on.
  • Statistical analysis : Analyzing and scrutinizing collected data for the purposes of summarizing information to make it more usable and for making generalizations about a population based on a sample drawn from that population.
  • Statistical equating : See statistical matching.
  • Statistical matching : This is a means of equating groups using numerical prediction. This integrates data on an individual observation from one source with data on a different observation identified as the best matching or most similar record from a second source. The best match is determined by objective statistical criteria that can be checked or verified.
  • Statistical power analysis : A calculation of the probability that the test will reject a false null hypothesis (that it will not make a Type II error). As power increases, the chances of a Type II error decrease. This can be used to calculate the minimum sample size required to accept the outcome of a statistical test with a particular level of confidence.
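The sample-size use of power analysis can be illustrated with the common normal-approximation formula for comparing two group means. This is a planning sketch under stated assumptions (two-sided test, equal group sizes, standardized effect size d), not an exact power calculation:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample comparison of
    means: n = 2 * (z_alpha/2 + z_beta)^2 / d^2 (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A medium standardized effect (d = 0.5) at 80% power and alpha = .05
# requires roughly 63 participants per group under this approximation.
print(n_per_group(0.5))  # 63
```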
  • Statistical regression, threat : See regression, threat.
  • Statistically significant : The conclusion that results are unlikely to have occurred due to sampling error or chance; it means that an observed correlation or difference probably exists in the population. Statistical significance is typically defined as being when the chance or probability ( p ) of the correlation occurring by chance is less than 5%, or p < .05.
  • Stereotyping : Making generalizations or assumptions about the characteristics of all members of a group based on an image or perception (often biased) of what all people in that group are like.
  • Stigmatized group : Marginalized people in society who normally have less power than others, often including racial or cultural minorities, the poor, people with some social stigma (like prisoners), people with disabilities or diseases, or any group of people who are looked down upon by a dominant culture.
  • Stratified random sampling : A process of selecting a sample in such a way that identified subgroups in the population are represented in the sample in the same proportions in which they exist in the population.
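Allocating the sample across strata in population proportions (proportional allocation) can be sketched as follows; the function name, rounding scheme, and strata are ours:

```python
def proportional_allocation(strata, n):
    """Split a total sample size n across strata in proportion to each
    stratum's share of the population (largest-remainder rounding)."""
    total = sum(strata.values())
    quotas = {k: n * v / total for k, v in strata.items()}
    alloc = {k: int(q) for k, q in quotas.items()}
    # Hand out any remaining slots by largest fractional part.
    leftover = n - sum(alloc.values())
    for k in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# A 51%/49% split in the population is preserved in a sample of 100.
print(proportional_allocation({"female": 510, "male": 490}, 100))
```

Within each stratum, members would then be chosen by simple random sampling.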
  • Structural Equation Modeling (SEM) : A general statistical modeling technique used to establish relationships among variables. SEM may be used as a more powerful alternative to multiple regression, path analysis, factor analysis, time series analysis, and analysis of covariance. A key feature of SEM is that observed variables are understood to represent a small number of latent constructs that cannot be directly measured but only inferred from the measured variables; models are specified in terms of observed, latent, dependent, and independent variables.
  • Structured interview : A formal type of interview in which the researcher asks, in order, a set of predetermined questions to collect data, for example, an assessment protocol for persons to receive case management services.
  • Structured observation : Systematic recording of information about an event in forms that create parameters for the observation, most often in a checklist or tally fashion.
  • Study end point : The primary or secondary outcome used to judge the effectiveness of a treatment.
  • Study locale, site : A place where research is actually conducted. This could be in a laboratory or in situ, in a natural setting, or in more than one setting.
  • Sufficient literature, search : The researcher investigates a topic through a minimum of three search engines or databases. Then, the researcher tracks down a number of relevant leads to appraise various published and unpublished sources and materials about a topic he or she is studying. This location and appraisal of information can be done within the researcher's own field (e.g., social work) and from related fields or discipline areas about the topic of interest (e.g., nursing, psychology, sociology, education).
  • Summary : A brief overview of a research project, for example, an abstract, synopsis, executive summary, and so on.
  • Summative evaluation : An assessment of a program or intervention done after it has been implemented. This technique provides information about the program's outcomes and its ability to do what it was designed to do. For example, did the program meet its specified goals, objectives, and outcomes? It is typical of program evaluation research.
  • Summative measuring instrument : A test used at the end of an intervention or program to judge the effectiveness of the intervention or program overall.
  • Support costs : Costs that accrue to a program or project due to the support services and facilities used. Measures of support costs are typically used in program evaluation studies.
  • Survey interview : Research, often conducted by mail, e-mail, or phone, in which large numbers of participants are asked questions, and their answers are compiled to create statistical results.
  • Survey questionnaire : A research tool given to participants, who answer questions and then return their answers to the researchers.
  • Survival analysis : Data analysis that measures time to an event, for example, death or next episode of disease.
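Survival analysis is often summarized with a Kaplan-Meier estimate, which multiplies together, at each event time, the fraction of at-risk subjects who survive. A minimal sketch, with invented times and a hypothetical helper function (event = 1 means the event occurred; event = 0 means the observation was censored):

```python
def kaplan_meier(times, events):
    """Return (time, survival probability) pairs at each event time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])  # process in time order
    at_risk = n
    surv = 1.0
    curve = []
    for i in order:
        if events[i] == 1:            # an event happened: update the estimate
            surv *= 1 - 1 / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                  # either way, one fewer subject at risk
    return curve

# Made-up data: events at t=2, 3, 5; censored observations at t=3 and 8.
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
# After the first event (5 at risk): S = 1 - 1/5 = 0.8
```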
  • Systematic bias, threat : Any generally unintentional prejudice or inclination arising from constraints, such as observer bias and selection bias. This affects the accuracy of research results. Systematic bias and errors are consistent and repeatable, in contrast to random error. This may be a threat to the internal validity of a study.
  • Systematic error : Biases in measurement that lead to a situation where the mean of many separate measurements differs significantly from the actual value of the measured attribute.
  • Systematic random sampling : Regular sampling using a known interval that begins with a randomized start. For instance, an individual sampling frame of households is established, and households to be sampled are selected using a constant sampling step.
  • Systematic review : A review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research and to collect and analyze data from the studies that are included in the review.
  • Systematic sampling : A selection procedure in which all sample elements are determined after the selection of the first element, as each element on a selected list is separated from the first element by a multiple of the selection interval. For example, every tenth element may be selected.
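The "every tenth element" procedure above can be sketched in a few lines: pick a random start within the first interval, then take every k-th element after it. The household frame and helper function are hypothetical, chosen only for illustration:

```python
import random

def systematic_sample(frame, k, rng=random):
    """Systematic random sampling: random start, then every k-th element."""
    start = rng.randrange(k)     # randomized start within the first interval
    return frame[start::k]       # constant sampling step thereafter

rng = random.Random(42)          # seeded for reproducibility
frame = list(range(1, 101))      # made-up sampling frame: household IDs 1..100
sample = systematic_sample(frame, 10, rng)
# 10 households are selected, spaced exactly 10 apart in the frame
```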
  • T-score : T-scores are directly related to z-scores. With T-scores, the mean of the raw score distribution is equated to 50 and the standard deviation to 10. Therefore, a z-score of +1.00 is equal to a T-score of 60, and a z-score of −3.00 is equal to a T-score of 20. T-scores are expressed as whole numbers. An advantage of T-scores is that they eliminate the negative numbers encountered with z-scores.
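The z-to-T conversion described above is a one-line formula; a minimal sketch (the function name is ours, not standard):

```python
def t_score(z):
    """Convert a z-score to a T-score: mean 50, SD 10, whole numbers."""
    return round(50 + 10 * z)

t_score(1.0)    # 60
t_score(-3.0)   # 20 (no negative numbers, unlike z-scores)
t_score(0.0)    # 50, the mean of the raw distribution
```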
  • Table : An orderly arrangement of data, especially one in which the data are formatted in columns and rows in an essentially rectangular form.
  • Table of random numbers : An organized arrangement of numbers generated in an unpredictable, haphazard sequence. Tables of random numbers are used to create a random sample. A random number table is therefore also called a random sample table. Many introductory statistics texts have examples of these.
  • Tacit knowledge : Knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it.
  • Tag word : A nonhierarchical key word or term assigned to a piece of information.
  • Tangible costs : Costs that can be easily expressed in dollar amounts. They are typically used in program evaluation studies.
  • Target behavior : An isolated way of acting, selected as the object for a conduct change intervention or program. Target behaviors are also deemed to be the specified outcomes of the intervention or program.
  • Target population : The population to which the researcher, ideally, would like to generalize results.
  • Target problem : The issue or outcome being addressed by an intervention.
  • Telemarketing : The use of the telephone as an interactive medium for promotion and sales.
  • Telephone survey : A method of conducting a survey that involves calling participants on the telephone and asking questions from a prepared questionnaire.
  • Test of significance : An inferential statistical test used to determine whether or not the obtained results for a sample are likely to represent the population, for example, Chi-square, t-test, F-test, ANOVA, and so on.
  • Test question : An inquiry that gathers information about what a test taker reports.
  • Test-retest reliability : A procedure for determining the extent to which scores from an instrument are stable over time, by correlating the scores from at least two administrations of the same instrument to the same individuals. This procedure is part of standardizing a measuring instrument.
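Test-retest reliability is typically reported as the correlation between the two administrations. A self-contained sketch with invented scores (the helper implements the standard Pearson r formula):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [10, 12, 15, 18, 20]   # first administration (made-up scores)
time2 = [11, 12, 16, 17, 21]   # same people, second administration
r = pearson_r(time1, time2)    # close to 1.0 => scores are stable over time
```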
  • Testing effect, threat : The prior measurement of the dependent variable may affect the results obtained from subsequent measurements. This may be a threat to the internal validity of a study.
  • Testing potential : The ability of a phenomenon to be assessed or researched.
  • Thematic analysis : This categorizes ideas, words, phrases, and sentences to either major or minor themes from the data. This is used frequently in qualitative data analysis.
  • Thematic notes : Information written down from an interview about particular groups of ideas or themes from the interview.
  • Theme : A unifying, recurrent, or dominant idea or motif that can be identified from the data set.
  • Theoretical sampling : This is a data collection process directed by evolving propositions or constructs during the course of a qualitative research study.
  • Theory : A coherent group of general propositions used as principles to develop explanations for social and behavioral phenomena.
  • Threat to external validity : Something that decreases the likelihood that the conclusions in a study would hold for other persons in other places and at other times, that is, a threat to the study's generalizability.
  • Threat to internal validity : An alternative explanation for research results, that is, that an observed relationship is an artifact of another variable.
  • Time considerations : Checklists and other ways of estimating how many minutes or hours will be needed to undertake the proposed research. These are normally mentioned in the method or procedure part of the study report.
  • Time effect, threat : A historical event at the time of a study that happens to all participants and alters the results.
  • Time order : A criterion for determining causality in addition to association. Time order means that the variation in the dependent variable occurred after the variation in the independent variable. Logically, any presumed cause must occur prior to any observed effect.
  • Time sequence : Pattern developed by the times at which the various episodes of a narrative take place. This is a way of tracking what happens chronologically in a study.
  • Time series design : A quasi-experimental design involving one group that is repeatedly measured, then exposed to an intervention, and repeatedly posttested.
  • Tradition : The handing down of statements, beliefs, legends, folklore, mores, customs, information, and so on, from generation to generation, especially by word of mouth or by practice.
  • Transcribing qualitative data : Taking audio recordings and writing them up to make manuscripts of the interviews. These verbatim write-ups are easier to use than the audio recordings when writing a qualitative research article.
  • Transcript : A written record of spoken language.
  • Transferability : A process performed by readers of research in which they note the specifics of the research situation and compare them with the specifics of an environment or situation with which they are familiar. If there are enough similarities between the two situations, readers may be able to infer that the results of the research would be the same or similar in their own situation. In other words, they transfer the results of a study to another context. This is related to generalizability but is a term more often used in qualitative research, which would have low generalizability but could have good transferability.
  • Transparency : Openness and accountability in research communication.
  • ABI (absolute benefit increase). The absolute arithmetic difference in rates of good outcomes between experimental and control participants in a trial, calculated as EER – CER and accompanied by a 95% confidence interval (CI).
  • ARI (absolute risk increase). The absolute arithmetic difference in rates of bad outcomes between experimental and control participants in a trial, calculated as EER – CER and accompanied by a 95% confidence interval (CI).
  • ARR (absolute risk reduction). The absolute arithmetic difference in rates of bad outcomes between experimental and control participants in a trial, calculated as CER – EER and accompanied by a 95% confidence interval (CI).
  • NNH (number needed to harm). The number of participants who, if they received the experimental treatment, would result in one additional participant being harmed compared with participants who received the control treatment, calculated as 1/ARI and accompanied by a 95% confidence interval (CI).
  • NNT (number needed to treat). The number of participants who need to be treated to achieve one additional good outcome, calculated as 1/ARR and accompanied by a 95% confidence interval (CI).
  • RBI (relative benefit increase). The proportional increase in rates of good outcomes between experimental and control participants in a trial, calculated as [EER – CER]/CER and accompanied by a 95% confidence interval (CI).
  • RRI (relative risk increase). The proportional increase in rates of bad outcomes between experimental and control participants in a trial, calculated as [EER – CER]/CER and accompanied by a 95% confidence interval (CI).
  • RRR (relative risk reduction). The proportional reduction in rates of bad outcomes between experimental and control participants in a trial, calculated as [CER – EER]/CER and accompanied by a 95% confidence interval (CI).
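Under the conventional evidence-based-medicine definitions (ARR = CER − EER when a treatment reduces bad outcomes), the acronyms above reduce to a few lines of arithmetic. The event rates below are hypothetical, chosen only to make the numbers round:

```python
# Hypothetical trial: 20% of controls and 10% of the experimental
# group experience the bad outcome.
CER = 0.20               # control event rate
EER = 0.10               # experimental event rate

ARR = CER - EER          # absolute risk reduction: 0.10
RRR = (CER - EER) / CER  # relative risk reduction: 0.50
NNT = 1 / ARR            # number needed to treat: 10 participants
```

In words: treating 10 such participants prevents one additional bad outcome.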
  • Treatment fidelity : Specific checks placed in a study to confirm that the manipulation of the independent variable occurred as planned. It includes things such as treatment definitions specified, implementer training, treatment manuals written, supervision of treatment agents, sampling for consistency, proper utilization of data collection strategies, and so on.
  • Treatment group : The participants in a study who receive the independent variable (the treatment) being assessed, to see if it makes a difference for them.
  • Treatment intervention : Planned care provided to improve a situation. Research explores the effectiveness of these interventions.
  • Treatment trial : This refers to a clinical trial that tests new interventions.
  • Treatment variable : The factor or condition that is manipulated (systematically altered) in an intervention study by the researcher. The main treatment is usually described by the letter X.
  • Trend study : A longitudinal design in survey research in which the same population (conceptually but not literally) is studied over time by taking repeated samples.
  • Triad : A qualitative method of questioning three people simultaneously to understand why behaviors and opinions are as they are.
  • Triangulation : A technique often used to establish credibility in qualitative social research. It involves using two or more perspectives to determine the accuracy of some aspect of the study. It typically refers to concurrently using multiple data sources, data raters, or other research methods.
  • Truncation (statistics) : Shortening a number by dropping one or more digits after the decimal point.
  • Trustworthiness, qualitative research : The reliability and validity of qualitative research is conceptualized as: (1) descriptive validity, which refers to the factual accuracy of the account as reported by the qualitative researcher; (2) interpretive validity, which is obtained to the degree that the participants’ viewpoints, thoughts, intentions, and experiences are accurately understood and reported by the qualitative researcher; and (3) theoretical validity, which is obtained to the degree that a theory or theoretical explanation developed from a research study fits the data and is, therefore, credible and defensible.
  • Tuskegee syphilis study : A landmark clinical study conducted between 1932 and 1972 in Tuskegee, Alabama, by the U.S. Public Health Service. The participants in the study were not treated for their syphilis even though an effective treatment was available. This is arguably the most infamous unethical biomedical research study conducted in U.S. history and led to the 1979 Belmont Report and the establishment of the Office for Human Research Protections (OHRP). It also led to federal regulation requiring institutional review boards (IRBs) for protection of human participants in studies involving them.
  • Two-tailed test of statistical significance : Using both tails of a sampling distribution of a statistic when a nondirectional hypothesis is stated.
  • Type I error : A conclusion that a treatment or intervention works when it actually does not. The risk of a Type I error is often called alpha. In a statistical test, it describes the chance of rejecting the null hypothesis when it is in fact true. It is also called a false positive.
  • Type II error : A conclusion that there is no evidence a treatment works when it actually does work. The risk of a Type II error is often called beta. In a statistical test, it describes the chance of not rejecting the null hypothesis when it is in fact false. The risk of a Type II error decreases as the number of the participants in a study increases. It is also called a false negative.
  • Uncertainty : A situation where the current state of knowledge is such that: (1) the order or nature of things is unknown; (2) the consequences, extent, or magnitude of circumstances, conditions, or events is unpredictable; and (3) credible probabilities to possible outcomes cannot be assigned. Although too much uncertainty is undesirable, manageable uncertainty provides the freedom to make creative decisions. Some degree of uncertainty remains even in very carefully designed research studies.
  • Unit of analysis : The primary unit used in data reduction and analysis, for example, individuals, families, objects, groups, classrooms, organizations, communities, and so on.
  • Units of meaning : The smallest portion of an idea that can be regarded as a structural or functional whole.
  • Unobtrusive data collection : Measures that don't require the researcher to visibly intrude in the research context. For instance, an inconspicuous computer sensor hidden under a floor mat could record the amount of time that people spend at a museum display. Unobtrusive measurement should reduce the biases that result from the intrusion of the researcher or measurement instrument. However, unobtrusive measures reduce the degree the researcher has control over the type of data collected.
  • Unobtrusive measures : Measures obtained without participants being aware that they are being observed or measured or by examining inanimate objects (such as a school suspension list) that can be used in order to obtain desired information.
  • Unrepresentative sample, threat : When the sample does not represent or mirror the population. This usually results from an inability to randomly select the sample from the population to which a researcher wants to generalize. This is a threat to the external validity of a study's findings.
  • Unstructured interview : A series of questions presented by a researcher without any fixed set format but in which the interviewer may have some key questions formulated in advance. They allow for questions based on the interviewee's responses and proceed like a friendly, nonthreatening conversation.
  • Unstructured observation : A research technique in which the characteristics that will be observed are not predetermined. The researcher simply takes notes on the behaviors observed, as they arise.
  • Validity : The degree to which accurate inferences can be made based on results from an instrument. This depends not only on the instrument itself but also on the instrumentation process and how it is administered.
  • Validity coefficient : An index of the validity of two or more scores of instruments; a special application of the correlation coefficient ( r ).
  • Value : A belief of a person or social group in which they have an emotional and personal investment.
  • Value awareness : An understanding of how or what a researcher does, relating to and contributing to the overall purposes of an organization or program.
  • Value-for-money : An important goal for public officials and other stakeholders who are concerned with whether taxpayers and citizens are receiving efficient and effective programs and services for their tax dollars. It is typically used in program evaluation studies.
  • Value label : Putting a word to a numerical data entry point. This allows values of numeric and string variables to be associated with labels.
  • Variability : How spread out or closely clustered a set of data is.
  • Variability measures : How spread out a group of scores is. There are four frequently used descriptive statistics that are measures of variability: the range, interquartile range, variance, and standard deviation.
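The four variability measures listed above can be computed directly with Python's standard library; the data set is invented for illustration:

```python
import statistics

data = [4, 8, 6, 5, 3, 7, 8, 9]              # made-up scores

value_range = max(data) - min(data)          # range: 9 - 3 = 6
q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1                                # interquartile range
var = statistics.pvariance(data)             # population variance (SD squared)
sd = statistics.pstdev(data)                 # population standard deviation
```

The variance is, by definition, the square of the standard deviation, so `sd ** 2` equals `var`.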
  • Variable : Any entity that can take on different values. Some values are numerical, but variables are not always quantitative or numerical. For instance, the variable gender consists of two text values: male and female.
  • Variable relationship study design : Such studies test the relationships between variables for determining how they impact, associate, predict, or influence each other.
  • Variable relationships : To test relationships between independent and dependent variables to determine how they influence each other.
  • Variance (SD²) : The square of the standard deviation and a main measure of variability.
  • Venn diagram : An illustration using circles to represent sets, with the position and overlap of the circles indicating the relationships between the sets.
  • Verification of the independent variable, threat : The extent to which one can reproduce the exact implementation of the independent variable. This may be a threat to the external validity of a study.
  • Videotape recording : A visual recording of events captured on videocassettes and played on a VCR. Videotaping has largely been replaced by recording onto DVDs or memory chips.
  • Visual aid : A presentation tool, such as a poster, PowerPoint, model, or video, that presents information visually.
  • Voluntariness : The ethical requirement that research participants take part in a study willingly, without constraint or expectation of reward. Sometimes, small amounts of money or other compensation are offered for the participants' time, but these cannot be so valuable that a person feels compelled to participate.
  • Voluntary consent : When a research participant agrees to participate through his or her own free will, without any force or undue pressure or influence. This is an ethical imperative when choosing research participants.
  • Weighting : Specification of the relative importance of items when combined.
  • White paper : An authoritative report, directive, or guide that addresses specific issues and how to resolve them. It is often used to educate readers and help inform a decision-making process.
  • Withdrawal design : A way of arranging a study that involves presentation and then subsequent removal of an independent variable.
  • Within-group research design : A type of experimental design where one looks at changes in outcome measures across treatments rather than comparing the changes with another (comparison) group.
  • Wording : The way that a question is phrased. Making questions and requests clear is important in research studies.
  • Working hypothesis : A proposition, or set of propositions, set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture, or to guide investigation.
  • X-axis : The axis of a two-dimensional Cartesian coordinate graph that is usually drawn left to right (horizontally) and usually shows the range of values of an independent variable. It is also known as the abscissa.
  • Y-axis : The vertical axis of a two-dimensional Cartesian coordinate graph that usually shows the frequency of occurrence of the variable being studied. It is also known as the ordinate.
  • Z-score : A transformed score often called a standard score. The z-score for an item indicates how far and in what direction that item deviates from its distribution's mean, expressed in units of its distribution's standard deviation (SD). The mathematics of the z-score transformation are such that, if every item in a distribution is converted to its z-score, the transformed scores will necessarily have a mean of 0 and an SD of 1.
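The claim that z-transformed scores have a mean of 0 and an SD of 1 can be checked directly; the raw scores below are made up:

```python
import statistics

raw = [55, 60, 70, 80, 85]                 # invented raw scores
mean = statistics.mean(raw)                # 70
sd = statistics.pstdev(raw)                # population SD of the raw scores

z = [(x - mean) / sd for x in raw]         # z-score transformation

statistics.mean(z)     # 0.0 (up to floating-point rounding)
statistics.pstdev(z)   # 1.0
```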

The Glossary

Commonly Used Acronyms, Symbols, Abbreviations, and Terms Found in Research and Evaluation Studies

Research Methods Help Guide

Definitions you need to know.


More Information

  • Glossary of Key Terms Glossary of research methods terms provided by the Writing Studio at Colorado State University.
  • Glossary of Statistical Terms Extensive and in-depth list of statistical terms provided by the University of California Berkeley.
  • Statistical Definitions Answers to statistics study questions that provide many concise definitions of statistical concepts.
  • Statistics Glossary Created by Valerie J. Easton and John H. McColl of the University of Glasgow. Contains many other definitions not included in this LibGuide.

Constant: a fixed value. Not a variable.

Variable: a value or characteristic that differs among individuals. It can be described, counted, or measured.

Independent Variable: the variable researchers manipulate in an experiment. Affects the dependent variable(s).

Dependent Variable: the variable affected by the independent variable in an experiment.

  • Types of Variables

Sample: a smaller group selected from a larger group (the population ). Researchers study samples to draw conclusions about the larger group.

Population: the entire collection of people, animals, plants, things, etc. researchers wish to examine. Since populations are often too large to include every member in a study, researchers work with samples to describe or draw conclusions about the entire population group using statistical tests.


Participant: an individual who participates in a study. This is a more recent term. (Used with human research.)

Subject: another way to describe a participant. This is a more traditional term. (Used with human and animal research.)

Attrition: loss of participants/subjects in a study.

Reliability: the extent to which a measure or tool yields consistent results on repeated trials.

Validity: the degree to which a study accurately assesses what it is attempting to assess.

  • Last Updated: Dec 10, 2024 1:27 PM
  • URL: https://library.fiu.edu/researchmethods

Information

Fiu libraries floorplans, green library, modesto a. maidique campus, hubert library, biscayne bay campus.

Federal Depository Library Program logo

Directions: Green Library, MMC

Directions: Hubert Library, BBC


Understanding research methods terminology

This resource has been designed to help students understand research methods terminology. Students will be able to learn definitions of research methods terms before moving on to testing their knowledge.  This resource can be used as a useful testing tool after each section of the research methods content has been taught.  Students will also find this useful as a revision tool.


