
74 Cards in this Set

What type of research is conducted in most disciplines?

experimental research.

It is virtually the only type of research performed in the sciences.

why is experimental research conducted?

to increase the body of knowledge in a discipline and to suggest what procedures should be followed in the future.

What does experimental research always involve?

manipulation of the experimental unit.

what is the purpose of experimental research?

investigate the cause-and-effect relationship by subjecting experimental groups to treatment conditions and comparing the results to control groups not receiving the treatment.

List the 14 systematic stages used in experimental research.

1. state research problem

2. determine if the experimental approach is appropriate

3. specify the independent variable(s) and the levels of the independent variable(s).

4. specify all the potential dependent variables

5. state the tentative hypothesis

6. determine the availability of measures for the potential dependent variables

7. pause to consider the success potential of the research

8. identify all the potential intervening variables

9. make a formal statement of the research hypothesis

10. design the experiment

11. make a final estimate of the success potential of the study

12. conduct the study as planned in steps 1 through 11

13. analyze the data according to the data analysis plan

14. prepare a research report.

define internal validity

validity of the findings within or internal to the research study.

define external validity

validity of generalizing the findings in a research study to other groups and situations.

list two classifications of validity

1. internal validity

2. external validity

what type of validity is concerned with whether the findings for the sample of participants in the study can be inferred to the population they represent and to other populations?

external validity

list the types of threats to internal validity

1. History

2. Maturation

3. Testing

4. Instrumentation

5. Statistical regression

6. Selection

7. Experimental Mortality

8. Interaction of Selection and Maturation or History

list the types of threats to external validity

1. Interaction Effect of testing

2. Interaction Effects of Selection Bias and Experimental Treatment

3. Reactive Effects of Experimental Setting

4. Multiple-Treatment Interference

define the meaning of history in internal validity

refers to specific things that happen while conducting the research study that affect the final scores of the participants in addition to the effect of the experimental treatment.

ex. a participant working out outside of the experimental treatment could affect the final results.

define maturation in internal validity

because participants grow older during the experimental period, performance levels change.

define testing in internal validity

the act of taking a test can affect the scores of the participants on a second or later testing. Participants may do better on a posttest because they learn from the first test.

define instrumentation in internal validity

changes in adjustment or calibration of the measuring equipment or use of different standards among scorers may cause differences among groups in final score.

check the accuracy of measuring equipment routinely, and make sure a standard scoring procedure is used. Testing and scoring procedures must be held constant.

define statistical regression in internal validity.

the tendency for groups with extremely high or low scores on one measure to score closer to the mean score of the population on a second measure.

what can eliminate the threat of statistical regression?

random sampling
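
The regression effect above can be illustrated with a short simulation (the numbers are hypothetical, not from the text): a group selected for extreme scores on one noisy measure scores closer to the population mean on a second measure.

```python
import random

random.seed(1)

# Each observed score = true ability + random measurement noise.
def observe(true_score):
    return true_score + random.gauss(0, 10)

population = [random.gauss(100, 15) for _ in range(10_000)]
first = [(t, observe(t)) for t in population]

# Select the "extreme" group: top 5% on the first measurement.
cutoff = sorted(s for _, s in first)[int(0.95 * len(first))]
extreme = [(t, s) for t, s in first if s >= cutoff]

mean_first = sum(s for _, s in extreme) / len(extreme)
mean_second = sum(observe(t) for t, _ in extreme) / len(extreme)

# The second measurement regresses toward the population mean of 100.
print(f"extreme group, first measure:  {mean_first:.1f}")
print(f"extreme group, second measure: {mean_second:.1f}")
```

The group's first-measure mean is inflated because selection favors lucky measurement noise; on remeasurement the noise averages out, which is why random sampling (rather than selecting extremes) removes the threat.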

define selection in terms of internal validity.

the way that participants were selected or assigned to groups can be biased.

what controls for the selection threat in internal validity?

random selection of participants and random assignment of participants to groups.

define experimental mortality in internal validity

caused by excessive loss of participants, so that experimental groups are no longer representative of a population or similar to each other.

when groups become different in size, it is a concern.

define interaction of selection and maturation or history in internal validity.

the maturation effect or history effect is not the same for all groups selected for the research study, and this influences final scores.

define interaction effect of testing in external validity

occurs when the pretest changes the group's response to the experimental treatment, thus making the group unrepresentative of any particular population.

define interaction effects of selection bias and experimental treatment in external validity.

participants or groups selected in a biased manner react to the experimental treatment in a unique way, so they are not representative of any particular population.

can easily occur when convenience sampling is used.

define reactive effects of experimental setting in external validity.

the experimental setting is such that the experimental treatment has a unique effect on the participants or groups that would not be observed in some other setting.


- participants react to the researcher in a unique manner

- conducting research in a lab, rather than a natural setting

define multiple-treatment interference in external validity

the effect of prior treatments on the response of the participants or groups to a present treatment.

how should the multiple-treatment interference threat be controlled?

researchers should check on the background and experiences of potential participants to control this threat to external validity.

define designs

the ways a research study may be conducted.

what types of experimental designs are listed in the book?

1. Preexperimental design

2. true experimental design

3. Quasi-experimental design

define pre-experimental design

designs that have poor control often due to no random sampling.

define true experimental design

the best type of design because there is good control with sufficient random sampling.

define control group

in a research study, the group that receives no treatment expected to change its ability.

define quasi-experimental design

an acceptable design but with some loss of control due to lack of random sampling.

what is the better design: preexperimental or quasi-experimental design?

quasi-experimental design

what is an example of a quasi-experimental design?

the nonequivalent control group design. It is like the pretest/posttest control group design, except that participants are not assigned to groups by random sampling.

What does a quasi-experimental design control for?

threats to validity of history, maturation, testing, instrumentation, selection, and experimental mortality.

what threats does a quasi-experimental design not control for?

threats to validity of the interaction effects of maturation and history, or the interaction effect of testing.

designs can be discussed in terms of what two things?

1. complexity

2. ability to answer research questions. Simple designs answer one question and more complex designs answer several research questions.

why are preexperimental designs weaker than true experimental designs in terms of control?

1. no random sampling of participants

2. are usually one group or two unequated groups

3. control few threats to validity

give two examples of a preexperimental design.

1. the pretest/posttest design

2. the use of intact classes

if the treatment groups are unequal at the end of the study in a preexperimental design, it could be due to...

1. the treatments were not equally effective

2. the groups were unequal at the start of the study

3. some combination of the two

A design must do what?

- control for major threats to validity.

- allow the researcher to answer the research question

- Follow KISS principle

- design must be adequate, but don't make the design any more complicated than it needs to be.

what is the basic procedure, from the design standpoint, if there are two or more groups?

1. define the target population

2. ID the accessible population

3. randomly select participants

4. randomly assign participants to groups
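
Steps 3 and 4 above can be sketched in a few lines of code (the population size, sample size, and group labels are made up for illustration):

```python
import random

random.seed(42)

# Hypothetical accessible population of participant IDs.
accessible_population = [f"P{i:03d}" for i in range(200)]

# Step 3: randomly select participants from the accessible population.
sample = random.sample(accessible_population, 40)

# Step 4: randomly assign the selected participants to two groups.
random.shuffle(sample)
experimental_group = sample[:20]
control_group = sample[20:]

print(len(experimental_group), len(control_group))  # 20 20
```

Random selection supports external validity (the sample represents the accessible population), while random assignment supports internal validity (the groups start out equivalent).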

is maximal internal and external validity obtained in most studies?

no, due to constraints on finances, time, participants, the research setting or other resources.

what is a must in order to have good external validity?

good internal validity

True or false: Sometimes minor threats and threats that are hard to eliminate may have to be left uncontrolled.

True

In an experimental design, what does a researcher want to control?

Control the effect of all variables except the experimental variable.

List the ways in which control of variables can be obtained.

1. Physical manipulation

2. Selective manipulation

3. Matched pair design

4. Block design

5. Counterbalanced design

6. statistical techniques

define selective manipulation

method of gaining control by selectively manipulating certain participants or situations

define matched pairs design

a form of selective manipulation by which participants are matched to gain control

define block design

an extension of matched pairs design for three or more groups
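
A minimal sketch of a matched pairs assignment, assuming a hypothetical pretest score as the matching variable: rank participants on the matching variable, pair adjacent participants, then randomly assign one member of each pair to each group.

```python
import random

random.seed(7)

# Hypothetical participants with a pretest score used for matching.
participants = [(f"P{i}", random.randint(40, 100)) for i in range(20)]

# Rank by the matching variable and pair adjacent participants.
ranked = sorted(participants, key=lambda p: p[1])
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

# Within each matched pair, randomly assign one member to each group.
group_a, group_b = [], []
for a, b in pairs:
    if random.random() < 0.5:
        a, b = b, a
    group_a.append(a)
    group_b.append(b)
```

A block design extends the same idea: each ranked "block" contains as many participants as there are groups, and members of a block are randomly distributed across the groups.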

define statistical techniques

method of gaining control if other control techniques are not possible

define analysis of covariance (ANCOVA)

a statistical technique to gain control by adjusting for initial differences among groups

define covariate

score used to adjust for initial differences among groups in ANCOVA

define variate

the score adjusted in ANCOVA

what are forms of selective manipulation?

1. Matched Pairs

2. block design

3. counterbalance design

what is an example of a counterbalance design?

comparison of the effectiveness of two drugs.
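
The drug-comparison example can be sketched as a simple counterbalanced order assignment (participant IDs and treatment labels are hypothetical): every participant receives both treatments, but the order alternates so order effects are balanced across treatments.

```python
# Counterbalanced order assignment for two treatments, A and B:
# half the participants receive A then B, the other half B then A.
participants = [f"P{i}" for i in range(8)]

orders = {}
for i, p in enumerate(participants):
    orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

# Each treatment appears equally often in each testing period.
first_period = [o[0] for o in orders.values()]
print(first_period.count("A"), first_period.count("B"))  # 4 4
```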

when are statistical techniques employed in order to gain control?

when physical manipulation or selective manipulation of variables is not possible.

also used when the researcher knows at the beginning of a study that the experimental groups differ in terms of one or more variables.

what is the basic purpose of the ANCOVA?

adjusts for the differences among the groups in scores at the end of the study based on differences in initial ability.

When using the ANCOVA technique, what is obtained at the beginning of the study?

- a covariate

- used to adjust differences among groups in terms of a score, called the variate, collected at the end of the study
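
The covariate/variate adjustment can be sketched with made-up pretest (covariate) and posttest (variate) scores. This shows only the core ANCOVA adjustment of group means, not a full significance test:

```python
# Hypothetical scores: pretest = covariate, posttest = variate.
pretest = {"exp": [50, 55, 60, 65], "ctl": [60, 65, 70, 75]}
posttest = {"exp": [62, 66, 71, 77], "ctl": [66, 70, 76, 80]}

def mean(xs):
    return sum(xs) / len(xs)

grand_x = mean(pretest["exp"] + pretest["ctl"])

# Pooled within-groups regression slope of the variate on the covariate.
num = den = 0.0
for g in pretest:
    mx, my = mean(pretest[g]), mean(posttest[g])
    for x, y in zip(pretest[g], posttest[g]):
        num += (x - mx) * (y - my)
        den += (x - mx) ** 2
slope = num / den

# Adjusted mean: raw final mean shifted for the group's initial advantage.
for g in pretest:
    adjusted = mean(posttest[g]) - slope * (mean(pretest[g]) - grand_x)
    print(g, round(adjusted, 2))
```

With these numbers the control group has the higher raw posttest mean, but after adjusting for its higher pretest scores the experimental group's adjusted mean comes out ahead, which is exactly the kind of correction ANCOVA provides.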

list the common sources of error in research.

1. Hawthorne Effect

2. Placebo Effect

3. "John Henry" Effect

4. Rating Effect

5. Experimenter Bias Effect

6. Participant-Researcher Interaction Effect

7. Post Hoc Error

define hawthorne effect

participants in an experiment may perform in an atypical manner due to the newness or novelty of the treatment and because they realize that they are participating in an experiment.

define placebo effect

participants in an experimental treatment may believe the treatment is supposed to change them so they respond to the treatment with a change in performance.

define placebo

a treatment that has no real effect on any dependent variable; it is given to participants in a control group.

define "john henry" effect

in studies with an experimental group and a control group, the control group knows it is not supposed to outperform the experimental group, so it tries harder and outperforms it

define rating effect

1. halo effect

2. central tendency error

define halo effect

the tendency to let initial impressions influence future ratings or scores of a participant

define central tendency error

a tendency to rate most participants in the middle of the rating scale.

what can the halo effect cause?

1. overrater error

2. underrater error

- researcher tends to overrate or underrate participants

define experimenter bias effect

the bias of a researcher can affect the outcome of study. The bias often favors the experimental treatment

define single-blind study

a study in which participants are unaware of the purpose of the study and their role in the study.

define double-blind study

a study in which both the participants and those conducting the study are unaware of the purpose of the study and the group membership of participants.

define participant-researcher interaction effect

- whether participants respond better to a researcher of the same gender

- whether a certain setting may influence a participant

define post hoc error

caused by assuming a cause-and-effect relationship between two variables when such a relationship does not exist.

ex. more people die in bed than in any other place; therefore, beds are dangerous.