Research Methods Key Term Glossary

Last updated 22 Mar 2021

This key term glossary provides brief definitions for the core terms and concepts covered in Research Methods for A Level Psychology.

Don't forget to also make full use of our research methods study notes and revision quizzes to support your studies and exam revision.

Aim

The researcher’s area of interest – what they are looking at (e.g. to investigate helping behaviour).

Bar chart

A graph that shows the data in the form of categories (e.g. behaviours observed) that the researcher wishes to compare.

Behavioural categories

Key behaviours, or collections of behaviours, that the researcher conducting the observation will pay attention to and record.

Case study

In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).

Closed questions

Questions where there are fixed choices of responses e.g. yes/no. They generate quantitative data

Co-variables

The variables investigated in a correlation

Concurrent validity

Comparing a new test with another test of the same thing to see if they produce similar results. If they do then the new test has concurrent validity

Confidentiality

Unless agreed beforehand, participants have the right to expect that all data collected during a research study will remain confidential and anonymous.

Confounding variable

An extraneous variable that varies systematically with the IV so we cannot be sure of the true source of the change to the DV

Content analysis

Technique used to analyse qualitative data which involves coding the written data into categories – converting qualitative data into quantitative data.

Control group

A group that is treated normally and gives us a measure of how people behave when they are not exposed to the experimental treatment (e.g. allowed to sleep normally).

Controlled observation

An observation study where the researchers control some variables – often takes place in a laboratory setting.

Correlational analysis

A mathematical technique where the researcher looks to see whether scores for two co-variables are related.
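
As a rough illustration of how a correlational analysis might be carried out, the short Python sketch below calculates a Pearson correlation coefficient for two co-variables. The variable names and scores are invented purely for the example.

    # Illustrative sketch: Pearson correlation between two co-variables (invented data).
    from statistics import mean
    hours_revised = [2, 4, 6, 8, 10]    # co-variable 1
    exam_scores = [35, 48, 55, 70, 80]  # co-variable 2
    mx, my = mean(hours_revised), mean(exam_scores)
    covariance = sum((x - mx) * (y - my) for x, y in zip(hours_revised, exam_scores))
    spread = (sum((x - mx) ** 2 for x in hours_revised) *
              sum((y - my) ** 2 for y in exam_scores)) ** 0.5
    r = covariance / spread
    print(round(r, 2))  # a value close to +1 indicates a strong positive correlation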

Counterbalancing

A way of trying to control for order effects in a repeated measures design, e.g. half the participants do condition A followed by B and the other half do B followed by A
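
To make the idea concrete, here is a minimal Python sketch of counterbalancing in a repeated measures design; the participant IDs and condition labels are invented for illustration.

    # Illustrative sketch: allocating counterbalanced condition orders (invented participants).
    participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
    half = len(participants) // 2
    orders = {}
    for p in participants[:half]:
        orders[p] = ["Condition A", "Condition B"]  # first half complete A then B
    for p in participants[half:]:
        orders[p] = ["Condition B", "Condition A"]  # second half complete B then A
    print(orders)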

Covert observation

Also known as an undisclosed observation as the participants do not know their behaviour is being observed

Critical value

The value that the calculated (observed) value of a statistical test must be compared with in order to decide whether the result is significant and the null hypothesis can be rejected.

Debriefing

After completing the research, the true aim is revealed to the participant. Aim of debriefing = to return the person to the state s/he was in before they took part.

Deception

Involves misleading participants about the purpose of a study.

Demand characteristics

Occur when participants try to make sense of the research situation they are in and try to guess the purpose of the research or try to present themselves in a good way.

Dependent variable

The variable that is measured to tell you the outcome.

Descriptive statistics

Analysis of data that helps describe, show or summarize data in a meaningful way

Directional hypothesis

A one-tailed hypothesis that states the direction of the difference or relationship (e.g. boys are more helpful than girls).

Dispersion measure

A dispersion measure shows how a set of data is spread out, examples are the range and the standard deviation

Double blind control

Participants are not told the true purpose of the research and the experimenter is also blind to at least some aspects of the research design.

Ecological validity

The extent to which the findings of a research study can be generalised to real-life settings.

Ethical guidelines

These are provided by the BPS - they are the ‘rules’ by which all psychologists should operate, including those carrying out research.

Ethical issues

There are 3 main ethical issues that occur in psychological research – deception, lack of informed consent and lack of protection of participants.

Evaluation apprehension

Participants’ behaviour is distorted as they fear being judged by observers

Event sampling

A target behaviour is identified and the observer records it every time it occurs

Experimental group

The group that receives the experimental treatment (e.g. sleep deprivation).

External validity

Whether it is possible to generalise the results beyond the experimental setting.

Extraneous variable

Variables that, if not controlled, may affect the DV and provide a false impression that an IV has produced changes when it hasn’t.

Face validity

Simple way of assessing whether a test measures what it claims to measure which is concerned with face value – e.g. does an IQ test look like it tests intelligence.

Field experiment

An experiment that takes place in a natural setting where the experimenter manipulates the IV and measures the DV

Histogram

A graph that is used for continuous data (e.g. test scores). There should be no space between the bars, because the data is continuous.

Hypothesis

This is a formal statement or prediction of what the researcher expects to find. It needs to be testable.

Independent groups design

An experimental design where each participant takes part in only one condition of the IV.

Independent variable

The variable that the experimenter manipulates (changes).

Inferential statistics

Inferential statistics are ways of analyzing data using statistical tests that allow the researcher to make conclusions about whether a hypothesis was supported by the results.

Informed consent

Psychologists should ensure that all participants are helped to understand fully all aspects of the research before they agree (give consent) to take part

Inter-observer reliability

The extent to which two or more observers are observing and recording behaviour in the same way

Internal validity

In relation to experiments, whether the results were due to the manipulation of the IV rather than other factors such as extraneous variables or demand characteristics.

Interval level data

Data measured in fixed units with equal distance between points on the scale

Investigator effects

These result from the effects of a researcher’s behaviour and characteristics on an investigation.

Laboratory experiment

An experiment that takes place in a controlled environment where the experimenter manipulates the IV and measures the DV

Matched pairs design

An experimental design where pairs of participants are matched on important characteristics and one member of each pair is allocated to each condition of the IV.

Mean

Measure of central tendency calculated by adding all the scores in a set of data together and dividing by the total number of scores.

Measures of central tendency

A measurement of data that indicates where the middle of the information lies e.g. mean, median or mode

Median

Measure of central tendency calculated by arranging scores in a set of data from lowest to highest and finding the middle score.

Meta-analysis

A technique where rather than conducting new research with participants, the researchers examine the results of several studies that have already been conducted

Mode

Measure of central tendency which is the most frequently occurring score in a set of data.
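
The three measures of central tendency defined above can be illustrated with a short Python sketch; the set of scores is invented for the example.

    # Illustrative sketch: mean, median and mode for an invented set of scores.
    from statistics import mean, median, mode
    scores = [3, 5, 5, 6, 7, 8, 9]
    print(mean(scores))    # add all the scores and divide by the number of scores
    print(median(scores))  # arrange from lowest to highest and take the middle score
    print(mode(scores))    # the most frequently occurring score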

Natural experiment

An experiment where the change in the IV already exists rather than being manipulated by the experimenter

Naturalistic observation

An observation study conducted in the environment where the behaviour would normally occur

Negative correlation

A relationship between two co-variables in which, as one increases, the other decreases.

Nominal level data

Frequency count data that consists of the number of participants falling into categories. (e.g. 7 people passed their driving test first time, 6 didn’t).

Non-directional hypothesis

A two-tailed hypothesis that does not predict the direction of the difference or relationship (e.g. girls and boys are different in terms of helpfulness).

Normal distribution

An arrangement of data that is symmetrical and forms a bell-shaped pattern where the mean, median and mode all fall in the centre at the highest peak.

Observed value

The value that you have obtained from conducting your statistical test

Observer bias

Occurs when the observers know the aims of the study or the hypotheses and allow this knowledge to influence their observations.

Open questions

Questions where there is no fixed response and participants can give any answer they like. They generate qualitative data.

Operationalising variables

This means clearly describing the variables (IV and DV) in terms of how they will be manipulated (IV) or measured (DV).

Opportunity sample

A sampling technique where participants are chosen because they are easily available

Order effects

Order effects can occur in a repeated measures design and refer to how the positioning of tasks influences the outcome, e.g. a practice effect or boredom effect on the second task.

Ordinal level data

Data that is capable of being put into rank order (e.g. places in a beauty contest, or ratings for attractiveness).

Overt observation

Also known as a disclosed observation, as the participants have given their permission for their behaviour to be observed.

Participant observation

Observation study where the researcher actually joins the group or takes part in the situation they are observing.

Peer review

Before going to publication, a research report is sent to other psychologists who are knowledgeable in the research topic for them to review the study and check for any problems.

Pilot study

A small scale study conducted to ensure the method will work according to plan. If it doesn’t then amendments can be made.

Positive correlation

A relationship between two co-variables in which, as one increases, so does the other.

Presumptive consent

Asking a group of people from the same target population as the sample whether they would agree to take part in such a study; if they say yes, it is presumed that the actual sample would also have agreed.

Primary data

Information that the researcher has collected him/herself for a specific purpose e.g. data from an experiment or observation

Prior general consent

Before participants are recruited they are asked whether they are prepared to take part in research where they might be deceived about the true purpose

Probability

How likely something is to happen – can be expressed as a number (0.5) or a percentage (a 50% chance of tossing a coin and getting a head).

Protection of participants

Participants should be protected from physical or psychological harm, including stress – the risk of harm must be no greater than that to which they are exposed in everyday life.

Qualitative data

Descriptive information that is expressed in words

Quantitative data

Information that can be measured and written down with numbers.

Quasi experiment

An experiment often conducted in controlled conditions where the IV simply exists so there can be no random allocation to the conditions

Questionnaire

A set of written questions that participants fill in themselves

Random sampling

A sampling technique where everyone in the target population has an equal chance of being selected

Randomisation

Refers to the practice of using chance methods (e.g. flipping a coin) to allocate participants to the conditions of an investigation.

Range

The distance between the lowest and the highest value in a set of scores; a measure of dispersion calculated by subtracting the lowest score from the highest score in a set of data.

Reliability

Whether something is consistent. In the case of a study, whether it is replicable.

Repeated measures design

An experimental design where each participant takes part in both/all conditions of the IV.

Representative sample

A sample that closely matches the target population as a whole in terms of key variables and characteristics.

Retrospective consent

Once the true nature of the research has been revealed, participants should be given the right to withdraw their data if they are not happy.

Right to withdraw

Participants should be aware that they can leave the study at any time, even if they have been paid to take part.

Sample

A group of people drawn from the target population to take part in a research investigation.

Scattergram

Used to plot correlations where each pair of values is plotted against each other to see if there is a relationship between them.

Secondary data

Information that someone else has collected e.g. the work of other psychologists or government statistics

Semi-structured interview

Interview that has some pre-determined questions, but the interviewer can develop others in response to answers given by the participant

Sign test

A statistical test used to analyse the direction of differences of scores between the same or matched pairs of subjects under two experimental conditions.
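
As a rough illustration of how such a test works in outline, the Python sketch below counts the direction of differences between invented pairs of scores; the resulting value of S would then be compared with a critical value from a statistical table.

    # Illustrative sketch: counting signs for a sign test (invented paired scores).
    condition_a = [12, 15, 9, 14, 11, 16, 13, 10]
    condition_b = [10, 14, 11, 12, 9, 15, 12, 8]
    differences = [a - b for a, b in zip(condition_a, condition_b) if a != b]  # ties dropped
    pluses = sum(1 for d in differences if d > 0)
    minuses = sum(1 for d in differences if d < 0)
    s = min(pluses, minuses)   # the calculated (observed) value
    n = len(differences)       # number of non-tied pairs
    print(s, n)  # compare s with the critical value for n at the chosen level of significance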

Significance

If the result of a statistical test is significant it is highly unlikely to have occurred by chance

Single-blind control

Participants are not told the true purpose of the research

Skewed distribution

An arrangement of data that is not symmetrical as data is clustered to one end of the distribution.

Social desirability bias

Participants’ behaviour is distorted as they modify this in order to be seen in a positive light.

Standard deviation

A measure of the average spread of scores around the mean. The greater the standard deviation, the more spread out the scores are.
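
A minimal Python sketch of the two dispersion measures mentioned in this glossary (the range and the standard deviation), using an invented set of scores:

    # Illustrative sketch: range and standard deviation for an invented set of scores.
    from statistics import pstdev
    scores = [4, 7, 7, 8, 10, 12, 15]
    data_range = max(scores) - min(scores)  # highest score minus lowest score
    spread = pstdev(scores)                 # population standard deviation around the mean
    print(data_range, round(spread, 2))     # the larger the values, the more spread out the scores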

Standardised instructions

The instructions given to each participant are kept identical – to help prevent experimenter bias.

Standardised procedures

In every step of the research all the participants are treated in exactly the same way and so all have the same experience.

Stratified sample

A sampling technique where groups of participants are selected in proportion to their frequency in the target population

Structured interview

Interview where the questions are fixed and the interviewer reads them out and records the responses

Structured observation

An observation study using a predetermined coding scheme to record the participants' behaviour.

Systematic sample

A sampling technique where every nth person in a list of the target population is selected
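
The sampling techniques above lend themselves to a simple illustration. The Python sketch below draws a random sample and a systematic sample from an invented target population list; the names and sample sizes are assumptions made for the example.

    # Illustrative sketch: random and systematic sampling from an invented target population.
    import random
    target_population = ["Person " + str(i) for i in range(1, 101)]  # invented list of 100 people
    random_sample = random.sample(target_population, 10)  # everyone has an equal chance of selection
    n = 10
    systematic_sample = target_population[::n]            # every nth person in the list is selected
    print(random_sample)
    print(systematic_sample)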

Target population

The group that the researcher draws the sample from and wants to be able to generalise the findings to.

Temporal validity

Refers to how likely it is that the time period when a study was conducted has influenced the findings and whether they can be generalised to other periods in time

Test-retest reliability

Involves presenting the same participants with the same test or questionnaire on two separate occasions and seeing whether there is a positive correlation between the two

Thematic analysis

A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data

Time sampling

A way of sampling the behaviour that is being observed by recording what happens in a series of fixed time intervals.

Type 1 error

A false positive – it is where you accept the alternative/experimental hypothesis when it is false.

Type 2 error

A false negative – it is where you accept the null hypothesis when it is false.

Unstructured interview

Also known as a clinical interview; there are no fixed questions, just general aims, and it is more like a conversation.

Unstructured observation

Observation where there is no checklist, so every behaviour seen is written down in as much detail as possible.

Validity

Whether something is true – measures what it sets out to measure.

Volunteer sample

A sampling technique where participants put themselves forward to take part in research, often by answering an advertisement

Organizing Your Social Sciences Research Paper

Glossary of Research Terms

This glossary is intended to assist you in understanding commonly used terms and concepts when reading, interpreting, and evaluating scholarly research. Also included are common words and phrases defined within the context of how they apply to research in the social and behavioral sciences. Definitions have been adapted from the sources cited below.

  • Acculturation -- refers to the process of adapting to another culture, particularly in reference to blending in with the majority population [e.g., an immigrant adopting American customs]. However, acculturation also implies that both cultures add something to one another, but still remain distinct groups unto themselves.
  • Accuracy -- a term used in survey research to refer to the match between the target population and the sample.
  • Affective Measures -- procedures or devices used to obtain quantified descriptions of an individual's feelings, emotional states, or dispositions.
  • Aggregate -- a total created from smaller units. For instance, the population of a county is an aggregate of the populations of the cities, rural areas, etc. that comprise the county. As a verb, it means to total data from smaller units into a larger unit.
  • Anonymity -- a research condition in which no one, including the researcher, knows the identities of research participants.
  • Baseline -- a control measurement carried out before an experimental treatment.
  • Behaviorism -- school of psychological thought concerned with the observable, tangible, objective facts of behavior, rather than with subjective phenomena such as thoughts, emotions, or impulses. Contemporary behaviorism also emphasizes the study of mental states such as feelings and fantasies to the extent that they can be directly observed and measured.
  • Beliefs -- ideas, doctrines, tenets, etc. that are accepted as true on grounds which are not immediately susceptible to rigorous proof.
  • Benchmarking -- systematically measuring and comparing the operations and outcomes of organizations, systems, processes, etc., against agreed upon "best-in-class" frames of reference.
  • Bias -- a loss of balance and accuracy in the use of research methods. It can appear in research via the sampling frame, random sampling, or non-response. It can also occur at other stages in research, such as while interviewing, in the design of questions, or in the way data are analyzed and presented. Bias means that the research findings will not be representative of, or generalizable to, a wider population.
  • Case Study -- the collection and presentation of detailed information about a particular participant or small group, frequently including data derived from the subjects themselves.
  • Causal Hypothesis -- a statement hypothesizing that the independent variable affects the dependent variable in some way.
  • Causal Relationship -- the relationship established that shows that an independent variable, and nothing else, causes a change in a dependent variable. It also establishes how much of a change is shown in the dependent variable.
  • Causality -- the relation between cause and effect.
  • Central Tendency -- any way of describing or characterizing typical, average, or common values in some distribution.
  • Chi-square Analysis -- a common non-parametric statistical test which compares an expected proportion or ratio to an actual proportion or ratio.
  • Claim -- a statement, similar to a hypothesis, which is made in response to the research question and that is affirmed with evidence based on research.
  • Classification -- ordering of related phenomena into categories, groups, or systems according to characteristics or attributes.
  • Cluster Analysis -- a method of statistical analysis where data that share a common trait are grouped together. The data is collected in a way that allows the data collector to group data according to certain characteristics.
  • Cohort Analysis -- group by group analytic treatment of individuals having a statistical factor in common to each group. Group members share a particular characteristic [e.g., born in a given year] or a common experience [e.g., entering a college at a given time].
  • Confidentiality -- a research condition in which no one except the researcher(s) knows the identities of the participants in a study. It refers to the treatment of information that a participant has disclosed to the researcher in a relationship of trust and with the expectation that it will not be revealed to others in ways that violate the original consent agreement, unless permission is granted by the participant.
  • Confirmability Objectivity -- the findings of the study could be confirmed by another person conducting the same study.
  • Construct -- refers to any of the following: something that exists theoretically but is not directly observable; a concept developed [constructed] for describing relations among phenomena or for other research purposes; or, a theoretical definition in which concepts are defined in terms of other concepts. For example, intelligence cannot be directly observed or measured; it is a construct.
  • Construct Validity -- seeks an agreement between a theoretical concept and a specific measuring device, such as observation.
  • Constructivism -- the idea that reality is socially constructed. It is the view that reality cannot be understood outside of the way humans interact and the idea that knowledge is constructed, not discovered. Constructivists believe that learning is more active and self-directed than either behaviorism or cognitive theory would postulate.
  • Content Analysis -- the systematic, objective, and quantitative description of the manifest or latent content of print or nonprint communications.
  • Context Sensitivity -- awareness by a qualitative researcher of factors such as values and beliefs that influence cultural behaviors.
  • Control Group -- the group in an experimental design that receives either no treatment or a different treatment from the experimental group. This group can thus be compared to the experimental group.
  • Controlled Experiment -- an experimental design with two or more randomly selected groups [an experimental group and control group] in which the researcher controls or introduces the independent variable and measures the dependent variable at least two times [pre- and post-test measurements].
  • Correlation -- a common statistical analysis, usually abbreviated as r, that measures the degree of relationship between pairs of interval variables in a sample. The range of correlation is from -1.00 to zero to +1.00. Also, a non-cause and effect relationship between two variables.
  • Covariate -- a product of the correlation of two related variables times their standard deviations. Used in true experiments to measure the difference of treatment between them.
  • Credibility -- a researcher's ability to demonstrate that the object of a study is accurately identified and described based on the way in which the study was conducted.
  • Critical Theory -- an evaluative approach to social science research, associated with Germany's neo-Marxist “Frankfurt School,” that aims to criticize as well as analyze society, opposing the political orthodoxy of modern communism. Its goal is to promote human emancipatory forces and to expose ideas and systems that impede them.
  • Data -- factual information [as measurements or statistics] used as a basis for reasoning, discussion, or calculation.
  • Data Mining -- the process of analyzing data from different perspectives and summarizing it into useful information, often to discover patterns and/or systematic relationships among variables.
  • Data Quality -- this is the degree to which the collected data [results of measurement or observation] meet the standards of quality to be considered valid [trustworthy] and  reliable [dependable].
  • Deductive -- a form of reasoning in which conclusions are formulated about particulars from general or universal premises.
  • Dependability -- being able to account for changes in the design of the study and the changing conditions surrounding what was studied.
  • Dependent Variable -- a variable that varies due, at least in part, to the impact of the independent variable. In other words, its value “depends” on the value of the independent variable. For example, in the variables “gender” and “academic major,” academic major is the dependent variable, meaning that your major cannot determine whether you are male or female, but your gender might indirectly lead you to favor one major over another.
  • Deviation -- the distance between the mean and a particular data point in a given distribution.
  • Discourse Community -- a community of scholars and researchers in a given field who respond to and communicate to each other through published articles in the community's journals and presentations at conventions. All members of the discourse community adhere to certain conventions for the presentation of their theories and research.
  • Discrete Variable -- a variable that is measured solely in whole units, such as, gender and number of siblings.
  • Distribution -- the range of values of a particular variable.
  • Effect Size -- the amount of change in a dependent variable that can be attributed to manipulations of the independent variable. A large effect size exists when the value of the dependent variable is strongly influenced by the independent variable. It is the mean difference on a variable between experimental and control groups divided by the standard deviation on that variable of the pooled groups or of the control group alone. A brief worked sketch of this calculation appears directly after the glossary entries below.
  • Emancipatory Research -- research is conducted on and with people from marginalized groups or communities. It is led by a researcher or research team who is either an indigenous or external insider; is interpreted within intellectual frameworks of that group; and, is conducted largely for the purpose of empowering members of that community and improving services for them. It also engages members of the community as co-constructors or validators of knowledge.
  • Empirical Research -- the process of developing systematized knowledge gained from observations that are formulated to support insights and generalizations about the phenomena being researched.
  • Epistemology -- concerns knowledge construction; asks what constitutes knowledge and how knowledge is validated.
  • Ethnography -- method to study groups and/or cultures over a period of time. The goal of this type of research is to comprehend the particular group/culture through immersion into the culture or group. Research is completed through various methods but, since the researcher is immersed within the group for an extended period of time, more detailed information is usually collected during the research.
  • Expectancy Effect -- any unconscious or conscious cues that convey to the participant in a study how the researcher wants them to respond. Expecting someone to behave in a particular way has been shown to promote the expected behavior. Expectancy effects can be minimized by using standardized interactions with subjects, automated data-gathering methods, and double blind protocols.
  • External Validity -- the extent to which the results of a study are generalizable or transferable.
  • Factor Analysis -- a statistical test that explores relationships among data. The test explores which variables in a data set are most related to each other. In a carefully constructed survey, for example, factor analysis can yield information on patterns of responses, not simply data on a single response. Larger tendencies may then be interpreted, indicating behavior trends rather than simply responses to specific questions.
  • Field Studies -- academic or other investigative studies undertaken in a natural setting, rather than in laboratories, classrooms, or other structured environments.
  • Focus Groups -- small, roundtable discussion groups charged with examining specific topics or problems, including possible options or solutions. Focus groups usually consist of 4-12 participants, guided by moderators to keep the discussion flowing and to collect and report the results.
  • Framework -- the structure and support that may be used as both the launching point and the on-going guidelines for investigating a research problem.
  • Generalizability -- the extent to which research findings and conclusions from a study conducted on a sample population can be applied to the population at large.
  • Grey Literature -- research produced by organizations outside of commercial and academic publishing that publish materials, such as, working papers, research reports, and briefing papers.
  • Grounded Theory -- practice of developing other theories that emerge from observing a group. Theories are grounded in the group's observable experiences, but researchers add their own insight into why those experiences exist.
  • Group Behavior -- behaviors of a group as a whole, as well as the behavior of an individual as influenced by his or her membership in a group.
  • Hypothesis -- a tentative explanation based on theory to predict a causal relationship between variables.
  • Independent Variable -- the conditions of an experiment that are systematically manipulated by the researcher. A variable that is not impacted by the dependent variable, and that itself impacts the dependent variable. In the earlier example of "gender" and "academic major," (see Dependent Variable) gender is the independent variable.
  • Individualism -- a theory or policy having primary regard for the liberty, rights, or independent actions of individuals.
  • Inductive -- a form of reasoning in which a generalized conclusion is formulated from particular instances.
  • Inductive Analysis -- a form of analysis based on inductive reasoning; a researcher using inductive analysis starts with answers, but formulates questions throughout the research process.
  • Insiderness -- a concept in qualitative research that refers to the degree to which a researcher has access to and an understanding of persons, places, or things within a group or community based on being a member of that group or community.
  • Internal Consistency -- the extent to which all questions or items assess the same characteristic, skill, or quality.
  • Internal Validity -- the rigor with which the study was conducted [e.g., the study's design, the care taken to conduct measurements, and decisions concerning what was and was not measured]. It is also the extent to which the designers of a study have taken into account alternative explanations for any causal relationships they explore. In studies that do not explore causal relationships, only the first of these definitions should be considered when assessing internal validity.
  • Life History -- a record of an event/events in a respondent's life told [written down, but increasingly audio or video recorded] by the respondent from his/her own perspective in his/her own words. A life history is different from a "research story" in that it covers a longer time span, perhaps a complete life, or a significant period in a life.
  • Margin of Error -- the permissible or acceptable deviation from the target or a specific value. The allowance for slight error or miscalculation or changing circumstances in a study.
  • Measurement -- process of obtaining a numerical description of the extent to which persons, organizations, or things possess specified characteristics.
  • Meta-Analysis -- an analysis combining the results of several studies that address a set of related hypotheses.
  • Methodology -- a theory or analysis of how research does and should proceed.
  • Methods -- systematic approaches to the conduct of an operation or process. It includes steps of procedure, application of techniques, systems of reasoning or analysis, and the modes of inquiry employed by a discipline.
  • Mixed-Methods -- a research approach that uses two or more methods from both the quantitative and qualitative research categories. It is also referred to as blended methods, combined methods, or methodological triangulation.
  • Modeling -- the creation of a physical or computer analogy to understand a particular phenomenon. Modeling helps in estimating the relative magnitude of various factors involved in a phenomenon. A successful model can be shown to account for unexpected behavior that has been observed, to predict certain behaviors, which can then be tested experimentally, and to demonstrate that a given theory cannot account for certain phenomenon.
  • Models -- representations of objects, principles, processes, or ideas often used for imitation or emulation.
  • Naturalistic Observation -- observation of behaviors and events in natural settings without experimental manipulation or other forms of interference.
  • Norm -- the norm in statistics is the average or usual performance. For example, students usually complete their high school graduation requirements when they are 18 years old. Even though some students graduate when they are younger or older, the norm is that any given student will graduate when he or she is 18 years old.
  • Null Hypothesis -- the proposition, to be tested statistically, that the experimental intervention has "no effect," meaning that the treatment and control groups will not differ as a result of the intervention. Investigators usually hope that the data will demonstrate some effect from the intervention, thus allowing the investigator to reject the null hypothesis.
  • Ontology -- a discipline of philosophy that explores the science of what is, the kinds and structures of objects, properties, events, processes, and relations in every area of reality.
  • Panel Study -- a longitudinal study in which a group of individuals is interviewed at intervals over a period of time.
  • Participant -- individuals whose physiological and/or behavioral characteristics and responses are the object of study in a research project.
  • Peer-Review -- the process in which the author of a book, article, or other type of publication submits his or her work to experts in the field for critical evaluation, usually prior to publication. This is standard procedure in publishing scholarly research.
  • Phenomenology -- a qualitative research approach concerned with understanding certain group behaviors from that group's point of view.
  • Philosophy -- critical examination of the grounds for fundamental beliefs and analysis of the basic concepts, doctrines, or practices that express such beliefs.
  • Phonology -- the study of the ways in which speech sounds form systems and patterns in language.
  • Policy -- governing principles that serve as guidelines or rules for decision making and action in a given area.
  • Policy Analysis -- systematic study of the nature, rationale, cost, impact, effectiveness, implications, etc., of existing or alternative policies, using the theories and methodologies of relevant social science disciplines.
  • Population -- the target group under investigation. The population is the entire set under consideration. Samples are drawn from populations.
  • Position Papers -- statements of official or organizational viewpoints, often recommending a particular course of action or response to a situation.
  • Positivism -- a doctrine in the philosophy of science, positivism argues that science can only deal with observable entities known directly to experience. The positivist aims to construct general laws, or theories, which express relationships between phenomena. Observation and experiment are used to show whether the phenomena fit the theory.
  • Predictive Measurement -- use of tests, inventories, or other measures to determine or estimate future events, conditions, outcomes, or trends.
  • Principal Investigator -- the scientist or scholar with primary responsibility for the design and conduct of a research project.
  • Probability -- the chance that a phenomenon will occur randomly. As a statistical measure, it is shown as p [the "p" factor].
  • Questionnaire -- structured sets of questions on specified subjects that are used to gather information, attitudes, or opinions.
  • Random Sampling -- a process used in research to draw a sample of a population strictly by chance, yielding no discernible pattern beyond chance. Random sampling can be accomplished by first numbering the population, then selecting the sample according to a table of random numbers or using a random-number computer generator. The sample is said to be random because there is no regular or discernible pattern or order. Random sample selection is used under the assumption that sufficiently large samples assigned randomly will exhibit a distribution comparable to that of the population from which the sample is drawn. The random assignment of participants increases the probability that differences observed between participant groups are the result of the experimental intervention.
  • Reliability -- the degree to which a measure yields consistent results. If the measuring instrument [e.g., survey] is reliable, then administering it to similar groups would yield similar results. Reliability is a prerequisite for validity. An unreliable indicator cannot produce trustworthy results.
  • Representative Sample -- sample in which the participants closely match the characteristics of the population, and thus, all segments of the population are represented in the sample. A representative sample allows results to be generalized from the sample to the population.
  • Rigor -- degree to which research methods are scrupulously and meticulously carried out in order to recognize important influences occurring in an experimental study.
  • Sample -- the population researched in a particular study. Usually, attempts are made to select a "sample population" that is considered representative of groups of people to whom results will be generalized or transferred. In studies that use inferential statistics to analyze results or which are designed to be generalizable, sample size is critical, generally the larger the number in the sample, the higher the likelihood of a representative distribution of the population.
  • Sampling Error -- the degree to which the results from the sample deviate from those that would be obtained from the entire population, because of random error in the selection of respondent and the corresponding reduction in reliability.
  • Saturation -- a situation in which data analysis begins to reveal repetition and redundancy and when new data tend to confirm existing findings rather than expand upon them.
  • Semantics -- the relationship between symbols and meaning in a linguistic system. Also, the cuing system that connects what is written in the text to what is stored in the reader's prior knowledge.
  • Social Theories -- theories about the structure, organization, and functioning of human societies.
  • Sociolinguistics -- the study of language in society and, more specifically, the study of language varieties, their functions, and their speakers.
  • Standard Deviation -- a measure of variation that indicates the typical distance between the scores of a distribution and the mean; it is determined by taking the square root of the average of the squared deviations in a given distribution. It can be used to indicate the proportion of data within certain ranges of scale values when the distribution conforms closely to the normal curve.
  • Statistical Analysis -- application of statistical processes and theory to the compilation, presentation, discussion, and interpretation of numerical data.
  • Statistical Bias -- characteristics of an experimental or sampling design, or the mathematical treatment of data, that systematically affects the results of a study so as to produce incorrect, unjustified, or inappropriate inferences or conclusions.
  • Statistical Significance -- the probability that the difference between the outcomes of the control and experimental group are great enough that it is unlikely due solely to chance. The probability that the null hypothesis can be rejected at a predetermined significance level [0.05 or 0.01].
  • Statistical Tests -- researchers use statistical tests to make quantitative decisions about whether a study's data indicate a significant effect from the intervention and allow the researcher to reject the null hypothesis. That is, statistical tests show whether the differences between the outcomes of the control and experimental groups are great enough to be statistically significant. If differences are found to be statistically significant, it means that the probability [likelihood] that these differences occurred solely due to chance is relatively low. Most researchers agree that a significance value of .05 or less [i.e., there is a 95% probability that the differences are real] sufficiently determines significance.
  • Subcultures -- ethnic, regional, economic, or social groups exhibiting characteristic patterns of behavior sufficient to distinguish them from the larger society to which they belong.
  • Testing -- the act of gathering and processing information about individuals' ability, skill, understanding, or knowledge under controlled conditions.
  • Theory -- a general explanation about a specific behavior or set of events that is based on known principles and serves to organize related events in a meaningful way. A theory is not as specific as a hypothesis.
  • Treatment -- the stimulus given to a dependent variable.
  • Trend Samples -- method of sampling different groups of people at different points in time from the same population.
  • Triangulation -- a multi-method or pluralistic approach, using different methods in order to focus on the research topic from different viewpoints and to produce a multi-faceted set of data. Also used to check the validity of findings from any one method.
  • Unit of Analysis -- the basic observable entity or phenomenon being analyzed by a study and for which data are collected in the form of variables.
  • Validity -- the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. A method can be reliable, consistently measuring the same thing, but not valid.
  • Variable -- any characteristic or trait that can vary from one person to another [race, gender, academic major] or for one person over time [age, political beliefs].
  • Weighted Scores -- scores in which the components are modified by different multipliers to reflect their relative importance.
  • White Paper -- an authoritative report that often states the position or philosophy about a social, political, or other subject, or a general explanation of an architecture, framework, or product technology written by a group of researchers. A white paper seeks to contain unbiased information and analysis regarding a business or policy problem that the researchers may be facing.
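
As noted in the Effect Size entry above, here is a minimal Python sketch of that calculation – the mean difference between an experimental and a control group divided by the standard deviation of the control group, which the definition above allows – using invented scores.

    # Illustrative sketch: effect size as the mean difference divided by the
    # control group's standard deviation (all scores are invented).
    from statistics import mean, pstdev
    experimental = [14, 16, 15, 18, 17, 19]
    control = [11, 13, 12, 14, 10, 12]
    effect_size = (mean(experimental) - mean(control)) / pstdev(control)
    print(round(effect_size, 2))  # larger values = stronger influence of the independent variable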

Elliot, Mark, Fairweather, Ian, Olsen, Wendy Kay, and Pampaka, Maria. A Dictionary of Social Research Methods. Oxford, UK: Oxford University Press, 2016; Free Social Science Dictionary. Socialsciencedictionary.com [2008]. Glossary. Institutional Review Board. Colorado College; Glossary of Key Terms. Writing@CSU. Colorado State University; Glossary A-Z. Education.com; Glossary of Research Terms. Research Mindedness Virtual Learning Resource. Centre for Human Service Technology. University of Southampton; Miller, Robert L. and Brewer, John D. The A-Z of Social Research: A Dictionary of Key Social Science Research Concepts. London: SAGE, 2003; Jupp, Victor. The SAGE Dictionary of Social and Cultural Research Methods. London: SAGE, 2006.

ReviseSociology


Research Methods – Key Terms for A Level Sociology

Last Updated on January 9, 2019 by Karl Thompson

Definitions of core concepts covered as part of the research methods component of AS and A Level Sociology. Organised in alphabetical order – so effectively this is a research methods A-Z. If this is too much for you, then have a look at my ‘top ten research methods concepts’ first!

For more information about research methods in general please see my main page of links to posts on research methods in sociology!

Anthropology – the study of humans, past and present. Historically, anthropologists mostly studied traditional (e.g. tribal) cultures using participant observation as the main method; however, more recently anthropologists have increasingly focused on a much greater array of aspects of culture within modern and post-modern societies, using a more diverse range of methods. One of the key aims of anthropology is to explore and explain the enormous diversity as well as the commonalities within and between human cultures.

Attrition rate – the percentage of respondents who drop out of a research study during the course of that study. This can often be a problem with longitudinal research.

Bias – where someone’s personal, subjective feelings or thoughts affect their judgement.

Case study – researching a single case or example of something using multiple methods, for example researching one school or factory.

Closed Questions – Questions which have a limited range of answers attached to them – such as Yes/No or Likert scale answers.

Confidentiality – the idea that the information respondents give to the researcher in the research process is kept private. This is usually achieved through anonymity.

Covert research – where the researcher is undercover and respondents do not know they are part of a research study. The opposite of covert research is overt research – where respondents know they are part of a research study.

Dependent and independent variables – a dependent variable is the object under study in an experiment; the independent variables are what the researcher varies to see how they affect the dependent variable.

For example, if you grow tomato plants as a hobby and wanted to find out the effect which the amount of water, the temperature, and the amount of light has on the amount of tomatoes each plant produces you could design a series of experiments in which you varied the amount of light etc. and then measure the effects on the amount of fruit produced. In this example, the amount of tomatoes produced is the dependent variable and the water, the temperature and the amount of light are the independent variables.

Ethnography – an in-depth study of the way of life of a group of people in their natural setting. Ethnographies are typically long-term studies (over several months or even years) and aim for a full (or ‘thick’), multi-layered account of the culture of a group of people. Participant observation is typically the main method used, but researchers will use all other methods available to get even richer data – such as interviews and analysis of any documents associated with that culture.

Ethics/ ethical factors – ethics means taking into consideration how the research impacts on those involved with the research process. Ethical research should gain informed consent, ensure confidentiality, be legal and ensure that respondents and those related to them are not subjected to harm. Ultimately research should aim to do more good than harm to society.

Experiments – experiments aim to measure the effect which one or more independent variables have on a dependent variable. Experiments typically start off with a hypothesis, and a good experiment will be designed in such a way that objective cause and effect relationships can be established between variables, so that the original hypothesis can be verified, or rejected and modified.

Extraneous variables – undesirable variables which are not of interest to the researcher but might interfere with the results of the experiment.

Field diary – A notebook in which a researcher records observations during the research process. One of the key tools of Participant Observation.

Field experiments – experiments which take place in a real-life setting such as a classroom, the work place or even the high street. See experiments and related terms for a fuller definition.

Focus groups – a type of group interview in which respondents are asked to discuss certain topics.

Formal content analysis – a quantitative approach to analysing mass media content which involves developing a system of classification to analyse the key features of media sources and then simply counting how many times these features occur in a given text.
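
A rough Python sketch of this kind of counting exercise is shown below; the headlines and the category list are invented for the example.

    # Illustrative sketch: formal content analysis - counting pre-chosen categories
    # in an invented sample of newspaper headlines.
    headlines = [
        "Crime rates rise in inner cities",
        "School exam results improve again",
        "Police report fall in violent crime",
        "New education funding announced",
    ]
    categories = ["crime", "education", "school", "police"]
    counts = {c: sum(c in h.lower() for h in headlines) for c in categories}
    print(counts)  # how many headlines mention each pre-chosen category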

Going native – where a researcher becomes biased or sympathetic towards the group they are studying, such that they lose their objectivity.

Group interviews – where an interviewer interviews two or more respondents at a time.

Hawthorne effect – where respondents alter their behaviour because they know they are being observed. This is one of the biggest disadvantages of overt laboratory and field experiments.

Hypothesis – a theory or explanation made on the basis of limited evidence as a starting point for further investigation. A hypothesis will typically take the form of a testable statement about the effect which one or more independent variables will have on the dependent variable.

Imposition problem – the imposition problem limits the validity of social surveys. It is where respondents may not be able to express their true feelings about the topic under investigation because the questions (and the range of possible responses) which have been pre-chosen by the researcher limits what they are able to say, and may not reflect the issues that respondents themselves feel are important.

Independent variable – see dependent variable.

Informed consent – where the respondent agrees to take part in a research study with full awareness that research is taking place, what the purpose of the research is and what the researcher intends to do with the results.

Interpretivism – an approach to social research which tries to understand human action through the eyes of those acting. Interpretivists want to know the meanings actors give to their own actions, what their own interpretation of their action is. They thus emphasise respondent-led qualitative methods to achieve insight, in-depth explanations and empathy, in order to realise a humanistic, empathetic understanding from the respondents’ point of view.

Interviews – a method of gathering information by asking questions orally, either face to face or by telephone. Interviews can be individual or group and there are three main types of interview – structured, unstructured and semi-structured.

Interviewer bias – where the values and beliefs of the researcher influence the responses of the interviewee. If an interviewer feels strongly about a subject, then he or she might ask leading questions, or even omit certain questions in order to encourage particular responses from a respondent.

Interview schedule – A list of questions or topic areas the interviewer wishes to ask or cover in the course of an interview.

The more structured the interview, the more rigid the interview schedule will be. Before conducting an interview it is usual for the researcher to know something about the topic area and the respondents themselves, and so they will have at least some idea of the questions they are likely to ask: even if they are doing ‘unstructured interviews’ an interviewer will have some kind of interview schedule, even if it is just a list of broad topic areas to discuss, or an opening question.

Laboratory experiments – experiments which take place in an artificial, controlled environment, such as a laboratory. See experiments and related terms for a fuller definition.

Leading questions – questions which subtly prompt a respondent to provide a particular answer when interviewed. Leading questions are one way in which interviewer bias can influence the research process, reducing the validity of data collected.

Life documents – written or audio-visual sources created by individuals which record details of that person’s experiences and social actions. They are predominantly qualitative and may offer insights into people’s subjective states. They can be historical or contemporary and can take a wide variety of forms.

Longitudinal studies – a study of a sample of people in which information is collected from the same people at intervals over a long period of time. For example, a researcher might start off in 2015 by getting a sample of 1000 people to fill in a questionnaire, and then go back to the same people in 2020, and again in 2025 to collect further information.

Likert scale – used to measure strength of opinion or feeling about a statement in social surveys. For example, respondents might be asked whether they strongly agree, agree, disagree or strongly disagree with a particular statement.
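
As an illustration only, Likert responses are usually coded numerically so that strength of feeling can be summarised; a minimal Python sketch, assuming a four-point agree/disagree scale:

```python
# Hypothetical coding scheme for a four-point Likert item.
scale = {"strongly agree": 4, "agree": 3, "disagree": 2, "strongly disagree": 1}

responses = ["agree", "strongly agree", "disagree", "agree"]
scores = [scale[r] for r in responses]

mean_score = sum(scores) / len(scores)
print(f"Mean agreement score: {mean_score:.2f}")  # Mean agreement score: 3.00
```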

Multistage sampling – with multistage sampling, a researcher selects a sample by using combinations of different sampling methods. For example, in stage one a researcher might use systematic sampling, and in stage two they might use random sampling to select a subset for the final sample.
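
A minimal sketch of the two-stage example above (systematic sampling, then random sampling), using Python's standard library; the frame and stage sizes are illustrative assumptions:

```python
import random

population = list(range(1, 1001))  # e.g. a numbered sampling frame of 1,000 people

# Stage one: systematic sampling - take every 10th member of the frame.
stage_one = population[::10]       # 100 members

# Stage two: simple random sampling from the stage-one subset.
final_sample = random.sample(stage_one, 20)
print(sorted(final_sample))
```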


Non-participant observation – where the researcher observes a group without taking part in that group’s activities. This method can either be overt or covert, and data may be recorded quantitatively or qualitatively. Probably the most commonly experienced example of non-participant observation is the OFSTED inspection.

Objective knowledge – knowledge which is free of the biases, opinions and values of the researcher; it reflects what is really ‘out there’ in the social world.

While most sociologists believe that we should strive to make our data collection as objective as possible, there are some sociologists (known as phenomenologists) who argue that it is not actually possible to collect data which is purely objective – the researcher’s opinions always get in the way of what data is collected and filtered for publication.

Official statistics – numerical information collected and used by the government and its agencies to make decisions about society and the economy. Examples include the UK National Census, police recorded crime and data on educational achievement.

Open-ended questions – questions for which there are no set answers. Open questions allow individuals to write their own answers or dictate them to interviewers. For example, ‘have you enjoyed studying Sociology this year?’

Operationalising concepts – the process of defining a concept precisely so that it can be easily understood by respondents and measured by the researcher. The term may also be applied to the process of determining variables in experiments.

For example, rather than ask a respondent ‘are you religious?’, which is a vague question with many interpretations, a researcher might operationalise the concept of religion by using a range of more precise questions such as ‘do you believe in God?’, ‘do you believe in the idea of heaven and hell?’, ‘how often do you pray?’, and so on.
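
Purely as an illustrative sketch (the indicator questions and the equal weighting are assumptions, not part of the glossary), operationalising ‘religiosity’ might mean turning several precise indicators into a single measurable score:

```python
# Hypothetical indicators used to operationalise the concept of "religiosity".
# Each answer is coded 1 (yes/often) or 0 (no/never).
indicators = ["believes_in_god", "believes_in_afterlife", "prays_weekly", "attends_services"]

def religiosity_score(answers):
    """Return the proportion of indicators a respondent affirms (0.0 to 1.0)."""
    return sum(answers.get(ind, 0) for ind in indicators) / len(indicators)

respondent = {"believes_in_god": 1, "believes_in_afterlife": 1, "prays_weekly": 0, "attends_services": 0}
print(religiosity_score(respondent))  # 0.5
```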

Overt research – see covert research.

Participant observation – involves the researcher joining a group of people, taking an active part in their day-to-day lives as a member of that group and making in-depth recordings of what they see.

Participant observation may be overt, in which case the respondents know that the researcher is conducting sociological research, or covert (undercover), where the respondents are deceived into thinking the researcher is ‘one of them’ and do not know that research is taking place.

Personal documents – first-hand accounts of social events and personal experiences, which generally include the writer’s feelings and attitudes about the events they think are personally significant. Examples of personal documents are letters, diaries, photo albums and autobiographies.

Pilot study – a test study carried out before the main research study and on a smaller scale, to uncover and iron out potential problems which may occur in the main programme of research.

Positivism – an approach to social research which aims to be as close to the natural sciences as possible. Positivists emphasise the use of quantitative data in order to remain detached from the research process and to uncover social trends and correlations which are generalisable to society as a whole. Their ultimate aim is to uncover the objective social laws which govern human action.

Practical factors – include such things as the amount of time the research will take, how much it will cost, whether funding can be secured, opportunities for research (including ease of access to respondents), and the personal skills and characteristics of the researcher.

Pre-coded, or closed questions – questions where the respondent has to choose from a limited range of responses. Two of the most common types of closed question are the simple yes/no question and the Likert scale (a strength-of-feeling scale).

Primary data – data collected first hand by the researcher herself. If a sociologist is conducting her own unique sociological research, she will normally have specific research questions she wants answered and thus tailor her research methods to get the data she wants. The main methods sociologists use to generate primary data include social surveys (normally using questionnaires), interviews, experiments and observations.

Public documents – are produced by organisations such as government departments and their agencies, as well as businesses and charities, and include OFSTED reports and other official government enquiries. These reports are a matter of public record and should be available to anyone who wishes to see them.

Qualitative data – refers to information that appears in written, visual or audio form, such as transcripts of interviews, newspapers and websites. (It is possible to analyse qualitative data and display features of it numerically.)

Quantitative data – refers to information that appears in numerical form, or in the form of statistics.

Quota sampling – in this method researchers are told to ensure the sample fits certain quotas; for example, they might be told to find 90 participants, with 30 of them being unemployed. The researcher might then find those 30 by going to a job centre. Representativeness is again a problem with the quota sampling method.
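
A minimal sketch of filling quotas from a pool of potential respondents; the pool, the quota sizes and the employment attribute are illustrative assumptions:

```python
import random

# Hypothetical pool of potential respondents, roughly 70% of whom are employed.
pool = [{"id": i, "employed": random.random() < 0.7} for i in range(500)]

quotas = {True: 60, False: 30}  # quota: 60 employed, 30 unemployed
sample = []
for person in pool:
    if quotas.get(person["employed"], 0) > 0:
        sample.append(person)
        quotas[person["employed"]] -= 1

print(len(sample))  # 90, provided the pool contains enough of each group
```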

Random sampling – in random sampling everyone in the population has the same chance of being chosen. A simple example of random sampling would be picking names out of a hat.
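
The digital equivalent of picking names out of a hat, sketched with Python's standard library (the names are made up for illustration):

```python
import random

names = ["Asha", "Ben", "Carys", "Dev", "Ella", "Femi", "Grace", "Hassan"]

# Every name has an equal chance of being selected.
sample = random.sample(names, 3)
print(sample)
```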

Rapport – a close and harmonious relationship between researcher and respondents, such that both parties understand each other’s feelings and communicate well.

Reliability – if research is reliable, it means that if someone else repeats the same research with the same population, they should achieve the same results.

In order to be reliable, research needs to be easily repeatable. Self-completion questionnaires have high reliability because it is easy for another researcher to administer the questionnaire again. More in-depth methods such as participant observation, where the researcher may spend several months or even years with a small group of respondents, are not very reliable, as it is impossible to replicate the exact procedures of the original research. More qualitative methods also open up the possibility for the researcher to get more involved with the research process, which further detracts from the reliability.

Representativeness – research is representative if the research sample reflects the characteristics of the wider target population that is being studied.

Representativeness thus depends on who is being studied. If one’s research aim is to look at the experiences of white male AS Sociology students, then one’s sample should consist only of white, male AS Sociology students. If one wishes to study sociology students in general, one will need a proportionate number of AS and A2 students, as well as a range of genders and ethnicities, in order to reflect the wider student body.

Research sample – the actual population selected for the research – also known as the respondents.

Sampling – the process of selecting a section of the population to take part in social research.

Sampling frame – a list from which a sample will be drawn.

Secondary data – data that has been collected by previous researchers or by organisations such as the government. Quantitative sources of secondary data include official government statistics; qualitative sources are very numerous, including government reports, newspapers and personal documents such as diaries, as well as the staggering amount of audio-visual content available online.

Self-selecting sample bias – where individuals choose whether to take part in the research and the results end up being unrepresentative because certain types of people are more willing or able to participate in the research.

Semi-structured interviews – those in which researchers have a pre-determined list of questions to ask respondents, but are free to ask further, differentiated questions based on the responses given.

Snowball sampling – with this method, researchers find a few participants and then ask them to recruit further participants, and so on.
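
A minimal sketch of how a snowball sample grows through referrals; the referral network is a made-up example:

```python
from collections import deque

# Hypothetical referral network: each participant names people they know.
referrals = {
    "Asha": ["Ben", "Carys"],
    "Ben": ["Dev"],
    "Carys": ["Ella", "Dev"],
    "Dev": [],
    "Ella": ["Femi"],
    "Femi": [],
}

def snowball(seed, max_size):
    """Start from one participant and follow referrals until the sample is big enough."""
    sample, queue, seen = [], deque([seed]), {seed}
    while queue and len(sample) < max_size:
        person = queue.popleft()
        sample.append(person)
        for contact in referrals.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return sample

print(snowball("Asha", 5))  # ['Asha', 'Ben', 'Carys', 'Dev', 'Ella']
```

Because each wave of the sample depends on who earlier participants happen to know, the resulting sample is unlikely to be representative of the wider population.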

Social surveys – typically questionnaires designed to collect information from large numbers of people in standardised form.

Social surveys are written in advance by the researcher, tend to be pre-coded with a limited number of closed questions, and tend to focus on relatively simple topics. A good example is the UK National Census. Social surveys can be administered (carried out) in a number of different ways – they might be self-completion (completed by the respondents themselves) or they might take the form of a structured interview on the high street, as is the case with some market research.

Socially constructed – Interpretivists argue that official statistics are socially constructed: that is, they are the result of the subjective decisions made by the people who collect them, rather than reflecting the objective underlying reality of social life. For example, crime statistics do not reflect the actual crime rate, only those activities which are defined as crimes by the people who notice them and go on to report them to the police.

Stratified sampling – this method attempts to make the sample as representative as possible, avoiding the problems that could be caused by using a completely random sample. To do this, the sampling frame is divided into a number of smaller groups, for example by social class, age, gender or ethnicity. Individuals are then drawn at random from these groups in proportion to their share of the target population. For example, if you were studying doctors and had split the sampling frame into ethnic groups, knowing that 8% of doctors in Britain are Asian, you would draw 8% of your sample from the Asian group.
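
A minimal sketch of proportionate stratified sampling; the strata, their sizes and the resulting 8% share are illustrative assumptions rather than real figures:

```python
import random

# Hypothetical sampling frame of 1,000 doctors grouped into strata.
frame = {
    "Asian": [f"asian_{i}" for i in range(80)],
    "White": [f"white_{i}" for i in range(800)],
    "Black": [f"black_{i}" for i in range(120)],
}

total = sum(len(members) for members in frame.values())
sample_size = 100

sample = []
for stratum, members in frame.items():
    # Draw from each stratum in proportion to its share of the frame.
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))

print(len(sample))  # 100 (8 Asian, 80 White, 12 Black)
```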

Structured or formal interviews – those in which the interviewer asks every respondent the same questions in the same way. This will typically involve reading out questions from a pre-written and pre-coded structured questionnaire.

Subjective knowledge – knowledge based purely on the opinions of the individual, reflecting their values and biases, their point of view. See also ‘objective knowledge’.

Systematic sampling – an example of a systematic sample would be picking every 10th person on a list or register. This carries a similar risk of being unrepresentative to random sampling: for example, every 10th person could be a girl.
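
Picking every 10th person from a register can be sketched in a line of Python; the register itself is an assumption for illustration:

```python
register = [f"student_{i}" for i in range(1, 201)]  # a class register of 200 names

# Systematic sample: every 10th entry, starting from the first.
sample = register[::10]
print(len(sample))  # 20
```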

Target population – all people who could potentially be studied as part of the research.

Textual analysis – involves examining how different words are linked together in order to encourage readers to adopt a particular view of what is being reported.

Textual analysis also involves the use of semiology – which is the analysis of signs and symbols.

Thematic analysis – involves trying to understand the intentions which lie behind the production of mass media documents by subjecting a particular area of reportage to detailed investigation.

Theoretical factors – validity, reliability, representativeness and whether research is being carried out from a Positivist or Interpretivist point of view.

Positivists prefer quantitative research methods and are generally more concerned with reliability and representativeness. Interpretivists prefer qualitative research methods and are prepared to sacrifice reliability and representativeness to gain deeper insight which should provide higher validity.

Transcription – the process of writing down (or typing up) what respondents say in an interview. In order to transcribe effectively, interviews need to be recorded.

Triangulation – the use of more than one method in social research. For example a researcher might combine structured questionnaires with more in-depth interviews. Triangulation is often used to verify the validity of other data sources and is a good way of improving the reliability of research.

Unstructured interviews – also known as informal interviews, are more like a guided conversation and typically involve the researcher asking open questions which generate qualitative data. The researcher will start with a general research topic and ask questions in response to the various and differentiated responses the respondents give. Unstructured interviews are thus a flexible, respondent-led research method.

Validity – research is valid if it provides a true picture of what is really ‘out there’ in the world.

Generally speaking, the more in-depth the research, the fuller the picture we get of the thoughts and feelings of the individuals acting, and so the more valid the data; likewise, the more the researcher stands back and allows the respondents to ‘speak for themselves’, the more valid the data. In more quantitative research, such as social surveys, validity may be lacking because the researcher has decided what questions respondents should answer, rather than letting the respondents decide what they want to say for themselves, as is typically the case with more qualitative methods.

Value freedom – where a researcher’s personal opinions, beliefs and feelings are kept out of the research process so that the data collected is not influenced by the personal biases of the researcher.

Verstehen – a German word meaning to ‘understand in a deep way’ – in order to achieve ‘Verstehen’ a researcher aims to understand another person’s experience by putting themselves in the other person’s shoes.

Interpretivists argue that to achieve Verstehen (or empathetic understanding) we should use in-depth qualitative research such as participant observation.


Chapter 1: Introduction to Research Methods

1.4 Understanding Key Research Concepts and Terms

In this textbook you will be exposed to many terms and concepts associated with research methods, particularly as they relate to the research planning decisions you must make along the way. Figure 1.3 will help you contextualize many of these terms and understand the research process. This general chart begins with two key concepts: ontology and epistemology, advances through other concepts, and concludes with three research methodological approaches: qualitative, quantitative and mixed methods.

Research does not end with making decisions about the type of methods you will use; we could argue that the work is just beginning at this point. Figure 1.3 does not represent an all-encompassing list of concepts and terms related to research methods. Keep in mind that each strategy has its own data collection and analysis approaches associated with the various methodological approaches you choose. Figure 1.3 is intended to provide a general overview of research concepts. You may want to keep this figure handy as you read through the various chapters.

[Figure 1.3: an overview of key research concepts and terms]

Ontology & Epistemology

Thinking about what you know and how you know what you know involves questions of ontology and epistemology. Perhaps you have heard these concepts before in a philosophy class? These concepts are relevant to the work of sociologists as well. As sociologists (those who undertake socially-focused research), we want to understand some aspect of our social world. Usually, we are not starting with zero knowledge. In fact, we usually start with some understanding of three concepts: 1) what is; 2) what can be known about what is; and, 3) what the best mechanism happens to be for learning about what is (Saylor Academy, 2012). In the following sections, we will define these concepts and provide an example of the terms, ontology and epistemology.

Ontology is a Greek word that means the study, theory, or science of being. Ontology is concerned with the what is or the nature of reality (Saunders, Lewis, & Thornhill, 2009). It can involve some very large and difficult to answer questions, such as:

  • What is the purpose of life?
  • What, if anything, exists beyond our universe?
  • What categories does it belong to?
  • Is there such a thing as objective reality?
  • What does the verb “to be” mean?

Ontology comprises two aspects: objectivism and subjectivism. Objectivism means that social entities exist externally to the social actors who are concerned with their existence. Subjectivism means that social phenomena are created from the perceptions and actions of the social actors who are concerned with their existence (Saunders, et al., 2009). The table below provides an example of a similar research project to be undertaken by two different students. While the projects being proposed by the students are similar, they each have different research questions. Read the scenario and then answer the questions that follow.

Subjectivist and objectivist approaches (adapted from Saunders et al., 2009)

Ana is an Emergency & Security Management Studies (ESMS) student at a local college. She is just beginning her capstone research project and she plans to do research at the City of Vancouver. Her research question is: What is the role of City of Vancouver managers in the Emergency Management Department (EMD) in enabling positive community relationships? She will be collecting data related to the roles and duties of managers in enabling positive community relationships.

Robert is also an ESMS student at the same college. He, too, will be undertaking his research at the City of Vancouver. His research question is: What is the effect of the City of Vancouver’s corporate culture in enabling EMD managers to develop a positive relationship with the local community? He will be collecting data related to perceptions of corporate culture and its effect on enabling positive community-emergency management department relationships.

Before the students begin collecting data, they learn that six months ago, the long-time emergency department manager and assistant manager both retired. They have been replaced by two senior staff managers who have Bachelor’s degrees in Emergency Services Management. These new managers are considered more up-to-date and knowledgeable on emergency services management, given their specialized academic training and practical on-the-job work experience in this department. The new managers have essentially the same job duties and operate under the same procedures as the managers they replaced. When Ana and Robert approach the managers to ask them to participate in their separate studies, the new managers state that they are just new on the job and probably cannot answer the research questions; they decline to participate. Ana and Robert are worried that they will need to start all over again with a new research project. They return to their supervisors to get their opinions on what they should do.

Before reading about their supervisors’ responses, answer the following questions:

  • Is Ana’s research question indicative of an objectivist or a subjectivist approach?
  • Is Robert’s research question indicative of an objectivist or a subjectivist approach?
  • Given your answer in question 1, which managers could Ana interview (new, old, or both) for her research study? Why?
  • Given your answer in question 2, which managers could Robert interview (new, old, or both) for his research study? Why?

Ana’s supervisor tells her that her research question is set up for an objectivist approach. Her supervisor tells her that in her study the social entity (the City) exists in reality external to the social actors (the managers), i.e., there is a formal management structure at the City that has largely remained unchanged since the old managers left and the new ones started. The procedures remain the same regardless of whoever occupies those positions. As such, Ana, using an objectivist approach, could state that the new managers have job descriptions which describe their duties and that they are a part of a formal structure with a hierarchy of people reporting to them and to whom they report. She could further state that this hierarchy, which is unique to this organization, also resembles hierarchies found in other similar organizations. As such, she can argue that the new managers will be able to speak about the role they play in enabling positive community relationships. Their answers would likely be no different than those of the old managers, because the management structure and the procedures remain the same. Therefore, she could go back to the new managers and ask them to participate in her research study.

Robert’s supervisor tells him that his research is set up for a subjectivist approach. In his study, the social phenomena (the effect of corporate culture on the relationship with the community) is created from the perceptions and consequent actions of the social actors (the managers); i.e., the corporate culture at the City continually influences the process of social interaction, and these interactions influence perceptions of the relationship with the community. The relationship is in a constant state of revision. As such, Robert, using a subjectivist approach, could state that the new managers may have had few interactions with the community members to date and therefore may not be fully cognizant of how the corporate culture affects the department’s relationship with the community. While it would be important to get the new managers’ perceptions, he would also need to speak with the previous managers to get their perceptions from the time they were employed in their positions. This is because the community-department relationship is in a state of constant revision, which is influenced by the various managers’ perceptions of the corporate culture and its effect on their ability to form positive community relationships. Therefore, he could go back to the current managers and ask them to participate in his study, and also ask that the department please contact the previous managers to see if they would be willing to participate in his study.

As you can see the research question of each study guides the decision as to whether the researcher should take a subjective or an objective ontological approach. This decision, in turn, guides their approach to the research study, including whom they should interview.

Epistemology

Epistemology has to do with knowledge. Rather than dealing with questions about what is, epistemology deals with questions of how we know what is.  In sociology, there are many ways to uncover knowledge. We might interview people to understand public opinion about a topic, or perhaps observe them in their natural environment. We could avoid face-to-face interaction altogether by mailing people surveys to complete on their own or by reading people’s opinions in newspaper editorials. Each method of data collection comes with its own set of epistemological assumptions about how to find things out (Saylor Academy, 2012). There are two main subsections of epistemology: positivist and interpretivist philosophies. We will examine these philosophies or paradigms in the following sections.

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research Methods Help Guide

Definitions you need to know.


More Information

  • Glossary of Key Terms Glossary of research methods terms provided by the Writing Studio at Colorado State University.
  • Glossary of Statistical Terms Extensive and in-depth list of statistical terms provided by the University of California Berkeley.
  • Statistical Definitions Answers to statistics study questions that provide many concise definitions of statistical concepts.
  • Statistics Glossary Created by Valerie J. Easton and John H. McColl of the University of Glasgow. Contains many other definitions not included in this LibGuide.

Constant: a fixed value. Not a variable.

Variable: a value or characteristic that differs among individuals. It can be described, counted, or measured.

Independent Variable: the variable researchers manipulate in an experiment. Affects the dependent variable(s).

Dependent Variable: the variable affected by the independent variable in an experiment.


Sample: a smaller group selected from a larger group (the population). Researchers study samples to draw conclusions about the larger group.

Population: the entire collection of people, animals, plants, things, etc. researchers wish to examine. Since populations are often too large to include every member in a study, researchers work with samples to describe or draw conclusions about the entire population group using statistical tests.


Participant: an individual who participates in a study. This is a more recent term. (Used with human research.)

Subject: another way to describe a participant. This is a more traditional term. (Used with human and animal research.)

Attrition: loss of participants/subjects in a study.

Reliability: the extent to which a measure or tool yields consistent results on repeated trials.

Validity: the degree to which a study accurately assesses what it is attempting to assess.



Miss Smith Has Got Your Back!

Research Methods: Key Terms Guide


Objective: an objective statement is based on facts and observations. Objective information is provable, measurable and observable.

Subjective: relies on assumptions, beliefs and opinions, and is influenced by emotions and personal feelings.

Cross-sectional study: a cross-sectional study takes one moment in time and compares one group of participants with another group of participants at that time. Participants are only tested once, and the findings provide a snapshot of the differences between the behaviour of the two groups tested. It is therefore similar to an independent measures design. Cross-sectional studies are often used to look at the effect of age as an independent variable on certain key behaviours or abilities. For example, you might compare 3-year-olds and 6-year-olds with regard to the amount of aggressive behaviour shown towards their same-sex parent.

Ethics: ethics are a moral code that psychologists follow in order to protect both the participants and the researchers.


Research Conversations


Glossary of key terms

A brief introduction to the information included in academic research publications and to the different kinds of academic research

The format of the research publication

Research publications usually include the following information: 

Abstract

A summary of the whole publication. In an academic journal article, abstracts are usually between 100 and 300 words long.

Background / Introduction / Literature

The first section of an academic article is typically a summary of the existing research and context. Usually explains a “problem” or identifies a “gap” that the publication responds to. 

Methods

How the research was done (e.g. observations, audio recordings, interviews).

Methodology

How the methods were used: for instance, the number of participants (which might range from thousands to just one or two); if the research was an interview or survey, what the questions were; what the researcher was looking for if it was an observation; and the kinds of ethical approval the researcher had to get.

Data / Findings / Results

A representation of part of what happened during the research. This might be numerical data, produced through tests completed before and after an intervention. Alternatively, it might be field notes, interview transcripts, or written observations. It might also be reflective diary entries, or something produced with or by participants (e.g., scrapbooks, podcasts, maps). Sometimes combined with analysis.

Discussion

Sometimes presented alongside Data / Findings / Results. An attempt to “explain” how the data relates to findings of other research and/or the theory or conceptual framework used.

Conclusion or implications

Publications usually end with a summary of the whole publication, as well as some statements of the implications for future research or practice. 

Research paradigm / theory / conceptual framework

A conceptual framework informs the methodology and/or analysis. Using one conceptual framework will give different insights than using another would. For instance, if a researcher uses critical race theory as a conceptual framework, they assume that racism exists and that it will be evident in their data: such research also typically sets out to make emancipatory changes in the world. On the other hand, if a researcher was to use posthumanism, they would assume that whatever is being researched is being constantly influenced by the process of doing research: whatever happens during the research wouldn’t have happened otherwise and so the methods and discussion have to account for this. Both theories also assume that what the research has found is tentative and likely only ‘true’ for that particular study.

There are lots of different conceptual frameworks, and these will often be explained in the publication. If the conceptual framework is not explicitly stated, it’s usually because the researcher assumes that the work is both politically neutral and objective. For instance, randomised control trials typically assume that the findings are objective, that what has been learnt can be generalised to all times and places without interpretation, and that the researcher is separate from the thing they’re researching.

Positionality

Positionality statements are related to the conceptual framework. They express how the researchers’ lived experience influences their understanding of and access to the research topic. For instance, a black woman researching anti-black racism directed towards girls would produce different questions, data, and insights than a white man researching the same topic. Sometimes, positionality statements might be included in the methodology. Other times, positionality statements might be included in their own section. As with conceptual frameworks, positionality statements are not usually included in research such as randomised control trials, which tend to assume that their findings are objective, politically neutral, and generalisable.

Kinds of research

Quantitative/qualitative/mixed-methods

A description of the kind of data collected. This might consist of numerical data (quantitative), rich descriptions that cannot be represented as numerical data (qualitative), or a combination of the two (mixed methods). A small but increasing number of studies might describe themselves as post-qualitative: these studies include unusually extensive use of critical or philosophical theory and typically use less traditional methods.

Reviews, or literature reviews, aim to collect and summarise existing research publications on a specific topic. Reviews serve different purposes and can be conducted in several different ways.

Systematic reviews aim to create an exhaustive review of all the research on a specific topic. They usually rely on very specific search terms and ‘exclusion criteria’ (i.e., justifications as to why some articles are included in the review while other, seemingly appropriate articles, are not). Systematic reviews also take pains to be ‘replicable,’ whereby another researcher should be able to locate and include the same studies given the same parameters.

Cremin, Teresa and Oliver, Lucy (2017). Teachers as writers: a systematic review. Research Papers in Education, 32(3) pp. 269-295. https://doi.org/10.1080/02671522.2016.1187664

Meta-analysis or synthesis is often combined with systematic reviews. Meta-analyses attempt to draw together the findings of multiple quantitative studies identified through a systematic review, often drawing on randomised control trials. Like systematic reviews, meta-analyses tend to emphasise replicability: they go one step further than systematic reviews in assuming that another researcher, given the same parameters, would not only locate and include the same studies but also read across and synthesise them in the same ways.

Slavin, E.R., Lake, C., Inns, A., Baye, A., Dachet, D., Haslam, J. (2019). A Quantitative Synthesis of Research on Writing Approaches in Years 3 to 13. London: Education Endowment Foundation. The report is available from: https://educationendowmentfoundation.org.uk/public/files/Writing_Approaches_in_Years_3_to_13_Evidence_Review.pdf

Scoping reviews  aim to explore all the key topics or themes in a given field, rather than give a comprehensive review of every individual article.

Burnett, C. (2022). Scoping the field of literacy research: how might a range of research be valuable to primary teachers? (Working paper.) http://doi.org/10.7190/shu-working-papers/2201

Narrative reviews combine a review of the literature with discussion and critique: the aim here is not to create an exhaustive summary of the field, but instead to make an argument (for instance, to identify a direction for future research or explore an under-researched topic).

Types of empirical research

While reviews are based on existing publications, empirical research is a term for research based on research data. This data might have been collected by the researcher for the purposes of a particular study, or based on data collected by somebody else (i.e., secondary data). There are lots of different ways to collect research data: too many to summarise here. Below, we trace some of the possibilities.

Randomised control trials (RCTs) aim to measure the effectiveness of an intervention, in such a way that controls and limits the influences of any factors not being measured. RCTs evolved out of the medical sciences, where researchers need to isolate the effects and side effects of a particular treatment from other external factors (for instance, to ensure that a participant’s age or a sporting injury doesn’t affect the success rate of a new flu vaccine being trialled). Similarly, randomised control trials in educational research try to test the effectiveness of an intervention through limiting the influences of external factors. They usually adopt an experimental approach, wherein at least one group of participants does not access the intervention. The results of randomised control trials are often assumed to be replicable because they have screened out external factors that might have otherwise complicated the data: in other words, they aim to test the impact of the intervention and only the intervention. Consequently, they are highly valued by some research agencies.

Rather than prioritise replicability, other researchers argue that educational research is never truly replicable because classrooms are too different from one another. Instead, they aim to account for the specific research context in detail so that practitioners and other researchers can make informed decisions about the applicability of the findings to their own context or project. For instance, action research sets out to assess the impact of an intervention but is often conducted by practitioners in their own classrooms as part of a reflective cycle. The intervention is constantly tweaked and updated based on earlier results. Consequently, the findings are quite specific to an individual classroom, albeit with implications for other practitioners and researchers.

Reflective, practitioner research is sometimes conducted with more than one teacher, guided by an external ‘expert’ who supports the reflective process: this is called lesson study. Sometimes, the intervention is tweaked over a number of discrete phases in response to earlier findings rather than continuously tweaked: this is called design-based research. Like action research, researchers who use design-based research or lesson study do not typically strive to make their work generalisable: instead, they aim to contextualise the work in detail so that other researchers or practitioners can make decisions about the applicability of the findings to their own work or context.

Not all educational research tests an intervention. For instance, ethnography requires that the researcher immerse themselves in a research context or community for a sustained period of time. Unlike RCTs or action research, which test an intervention or pedagogical practice, ethnographic researchers aim to find out about ‘real life’ as it is lived. Consequently, it might not always be possible to determine in advance what ethnographic research will be about. Sometimes, ethnography might involve reflecting on one’s own thoughts or experiences, as in auto-ethnography. Ethnographers might also attend specifically to sensory experiences, such as smells or tastes, as in sensory ethnography. There are also ethnographic practices that focus specifically on digital spaces (e.g., social media): network ethnography and digital ethnography.

In co-produced or participatory research, the research deliberately sets out with the intention that the research questions and methods will be, at least in part, decided with the participants being researched. This also has the effect of sharing power more evenly with the participants. In “Anti-Racist Scholar-Activism,” Joseph-Salisbury and Connelly (2021) argue for the importance of what they term “being there”: of not determining research questions before beginning the research project, but instead of identifying research priorities while immersed in communities.

In other kinds of research, the researcher might carry out a series of activities with the research participants. This might include arts-based research or research-creation, in which the researcher and participants (sometimes supported by an artist) create something together. The activities might be decided in advance by the researcher, or co-produced or participatory, with choices about the research made together with participants. Sometimes arts-based research is paired with more traditional methods, such as interviews or observational fieldnotes. Other times, the art itself is the research data, and it is analysed and discussed in much the same way as any other data would be.
