LSAT · Logical Reasoning

Survey and sampling flaws

54 flashcards covering Survey and sampling flaws for the LSAT Logical Reasoning section.

Survey and sampling flaws refer to common errors in how data is collected and analyzed in research. When conducting a survey, researchers select a group, or sample, to represent a larger population. Flaws arise if the sample is biased—for example, if it's not randomly chosen or if certain groups are overrepresented—leading to misleading conclusions. These issues matter because they undermine the reliability of evidence, which is crucial in fields like law where decisions often hinge on interpreting data accurately.

On the LSAT, Logical Reasoning questions frequently test survey and sampling flaws in arguments, appearing in flaw identification or weakening questions. You'll need to spot traps like assuming a non-representative sample reflects the whole population or ignoring sources of bias, such as self-selection in surveys. Focus on recognizing key concepts like selection bias, response rates, and improper generalizations to strengthen your ability to critique flawed reasoning effectively.

A concrete tip: Always ask if the sample truly mirrors the population in question.

Terms (54)

  1. 01

    Selection bias

    A flaw in sampling where the method of selecting participants systematically excludes certain groups, making the sample unrepresentative of the population.

  2. 02

    Voluntary response bias

    A sampling error that occurs when only people with strong opinions participate, often leading to results that overrepresent extreme views.

  3. 03

    Convenience sampling

    A non-random sampling method where participants are chosen based on ease of access, which can introduce bias by not reflecting the broader population.

  4. 04

    Undercoverage bias

    A flaw in surveys where some subgroups of the population are inadequately represented, skewing results away from the true population characteristics.

  5. 05

    Nonresponse bias

    Bias that arises when individuals who do not respond to a survey differ significantly from those who do, potentially distorting the findings.

  6. 06

    Response bias

    Error in survey results caused by respondents providing inaccurate answers, often due to leading questions, social desirability, or misunderstanding.

  7. 07

    Sampling error

    The difference between a sample statistic and the true population parameter, arising from the random variation in selecting a sample.

  8. 08

    Non-sampling error

Errors in survey data unrelated to the sampling process itself, such as mistakes in data collection, processing, or respondent dishonesty.

  9. 09

    Overgeneralization from sample

A logical flaw where conclusions about an entire population are drawn from a sample that is too small or unrepresentative.

  10. 10

    Small sample size

    A flaw where the sample is not large enough to reliably estimate population characteristics, increasing the margin of error.

  11. 11

    Biased wording in questions

    A survey design flaw where the phrasing of questions influences responses, leading to inaccurate or skewed data.

  12. 12

    Hawthorne effect

    A bias in surveys or experiments where participants alter their behavior because they know they are being observed, affecting the results.

  13. 13

    Self-selection bias

    A sampling issue where individuals choose to participate based on their characteristics, making the sample unrepresentative.

  14. 14

    Survivorship bias

    A flaw that occurs when only successful or surviving elements are considered in a sample, ignoring failures and leading to misleading conclusions.

  15. 15

    Anchoring bias in surveys

    A response bias where initial information influences subsequent answers, causing respondents to rely too heavily on that starting point.

  16. 16

    Confirmation bias in survey design

    A flaw where survey creators unconsciously design questions or interpret data to support their preexisting beliefs.

  17. 17

    Random sampling

    A method of selecting a sample where every member of the population has an equal chance, reducing bias but not eliminating sampling error.

  18. 18

    Stratified sampling

A sampling technique that divides the population into subgroups (strata) and samples from each; it aims to ensure representation but can fail if the strata are poorly defined.

  19. 19

    Margin of error

    A measure of the uncertainty in survey results, indicating how much the sample estimate might differ from the true population value.

  20. 20

    Ecological fallacy

    The error of assuming that relationships observed in group data apply to individuals within those groups, often seen in survey interpretations.

  21. 21

    Confounding variables in surveys

    Factors that correlate with both the independent and dependent variables, potentially distorting the perceived relationship in survey results.

  22. 22

    Leading question flaw

    A survey design error where questions are phrased to suggest a desired answer, biasing responses and invalidating conclusions.

  23. 23

    Double-barreled question

    A survey flaw where a single question addresses multiple issues, making it unclear which part respondents are answering.

  24. 24

    Order effect in questions

    Bias introduced when the sequence of questions influences how respondents answer, altering the overall results.

  25. 25

    Recall bias

    A response error where participants inaccurately remember or report past events, skewing survey data on historical behaviors.

  26. 26

    Social desirability bias

    A tendency for respondents to answer in ways that make them appear more favorable, leading to dishonest survey responses.

  27. 27

    Interviewer bias

    Error caused by the interviewer's influence, such as tone or appearance, which affects how respondents answer questions.

  28. 28

    Extrapolation beyond sample

    A flaw in reasoning where survey findings are applied to situations or times not covered by the original sample.

  29. 29

    Failure to use control groups

    A survey design omission where no baseline group is compared, making it impossible to isolate the effect of the variable in question.

  30. 30

    P-hacking

    The practice of manipulating data analysis to achieve statistically significant results, often by testing multiple hypotheses without adjustment.

  31. 31

    Causation vs. correlation flaw

    A common error in survey interpretation where a correlation between variables is mistakenly taken as evidence of causation.

  32. 32

    Placebo effect in surveys

A bias where respondents report changes produced by their own expectations rather than by the subject being studied, distorting outcomes.

  33. 33

    Sample representativeness

    The degree to which a sample mirrors the population's key characteristics; lack of it is a core sampling flaw.

  34. 34

    Non-random assignment flaw

    An error in survey or experimental design where participants are not randomly assigned, allowing bias to enter the results.

  35. 35

    Outlier influence

    A flaw where extreme values in a sample disproportionately affect the results, skewing interpretations.

  36. 36

    Cluster sampling

    A method that samples groups rather than individuals, which can introduce bias if clusters are not representative.

  37. 37

    Systematic error in measurement

    Consistent inaccuracies in how data is collected, such as faulty instruments, leading to biased survey results.

  38. 38

    Random error in surveys

    Unpredictable variations in data collection that can increase sampling error but are not consistently biased.

  39. 39

    Generalization from convenience sample

    The trap of applying findings from an easily accessed group to a larger population, often leading to flawed conclusions.

  40. 40

    Biased sample frame

    A flaw where the list used to select the sample does not accurately cover the target population, causing underrepresentation.

  41. 41

    Longitudinal survey flaw

    Issues in repeated surveys over time, such as attrition or changing participant characteristics, that distort trends.

  42. 42

    Cross-sectional survey limitation

    The inability of a single-point survey to capture changes over time, leading to incomplete or misleading insights.

  43. 43

    Strategy for spotting bias

    Examine the sampling method and question design to identify potential sources of distortion in survey results.

  44. 44

    Common trap: Ignoring margin of error

    Overlooking the range of possible error in survey estimates, which can lead to unwarranted confidence in the findings.

  45. 45

    Worked example: Voluntary response

    In a poll where only angry customers call in, the results might show widespread dissatisfaction, even if most customers are content.

    A radio station asks listeners to call with complaints, and 90% report issues, but this overrepresents dissatisfied listeners.

  46. 46

    Worked example: Selection bias

    Surveying only urban residents to gauge national opinions ignores rural views, leading to unrepresentative conclusions.

    A study on public transport satisfaction polls only city dwellers, concluding it's popular nationwide.

  47. 47

    Worked example: Nonresponse bias

    If healthier people respond to a health survey, results might underreport illness prevalence.

    In a mailed health questionnaire, only fit respondents reply, suggesting lower disease rates than actual.

  48. 48

    Worked example: Biased wording

    Asking, 'Don't you think taxes are too high?' may elicit more yes responses than a neutral question.

    A survey question phrased to favor one answer skews results toward that view.

  49. 49

    Advanced: Interaction effects

A flaw where two variables interact to influence responses in ways the analysis does not account for, leading to misinterpretation of the data.

  50. 50

    Advanced: Simpson's paradox

    A statistical phenomenon where trends appear in different groups but reverse when combined, revealing flaws in aggregated survey data.
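A small numeric sketch makes the reversal concrete. The counts below are illustrative (patterned after a classic medical example; the group labels are hypothetical): treatment A has the higher success rate in each severity group, yet B comes out ahead when the groups are pooled.

```python
# Illustrative counts: (successes, total) per severity group and treatment arm.
data = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes: int, total: int) -> float:
    return successes / total

# Within each group, A beats B.
for group, arms in data.items():
    for arm, (s, t) in arms.items():
        print(f"{group:6} {arm}: {rate(s, t):.0%}")

# Pooling the groups reverses the comparison: A drops to 78%, B rises to 83%.
for arm in ("A", "B"):
    s = sum(data[g][arm][0] for g in data)
    t = sum(data[g][arm][1] for g in data)
    print(f"overall {arm}: {rate(s, t):.0%}")
```

The reversal happens because A was given mostly to the harder (severe) cases, so the aggregated figures conflate treatment choice with case severity.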

  51. 51

    Advanced: Berkson's paradox

    A bias in conditional sampling where two independent factors appear correlated due to the selection process in surveys.
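A quick simulation can illustrate how selection alone manufactures a correlation. This is only a sketch: the two "traits", the uniform distribution, and the 1.2 cutoff are all arbitrary assumptions.

```python
import random

random.seed(0)

# Two independent traits per person; the setup is purely illustrative.
population = [(random.random(), random.random()) for _ in range(100_000)]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# Condition on the sum clearing a cutoff -- e.g. a survey that only
# reaches people who were admitted, hired, or otherwise selected.
selected = [(x, y) for x, y in population if x + y > 1.2]

print(round(corr(population), 2))  # near zero: the traits are independent
print(round(corr(selected), 2))    # clearly negative: induced by selection
```

The negative correlation in the selected group exists only because passing the cutoff with a low value of one trait requires a high value of the other.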

  52. 52

    Formula: Margin of error basic

Approximately 1 divided by the square root of the sample size (1/√n), indicating the precision of a survey estimate at roughly the 95% confidence level.
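The rule of thumb can be sketched in a few lines (an approximation that assumes roughly a 95% confidence level and a proportion near 50%; the function name is mine):

```python
import math

def margin_of_error(n: int) -> float:
    """Rule-of-thumb 95% margin of error for a sample of size n."""
    return 1 / math.sqrt(n)

# Precision improves only with the square root of the sample size:
# quadrupling n merely halves the margin of error.
print(margin_of_error(100))  # 0.1  -> about +/-10 percentage points
print(margin_of_error(400))  # 0.05 -> about +/-5 percentage points
```

The square-root relationship is why "only 400 people were surveyed" is usually a weak objection on its own, while "the 400 were self-selected" is a strong one.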

  53. 53

    Trap: Confusing sample and population

    Assuming the sample's characteristics define the entire population without evidence, a frequent error in logical reasoning.

  54. 54

    Trap: Overreliance on self-reports

    Depending on respondents' subjective accounts without verification, which can introduce inaccuracies due to memory or honesty issues.