GMAT · Quantitative · 50 flashcards

Conditional probability

50 flashcards covering Conditional probability for the GMAT Quantitative section.

Conditional probability is the likelihood of one event happening given that another event has already occurred. For example, if you flip two coins and want to know the probability that the second coin lands heads given that the first one did, this concept helps you calculate it by focusing on the relevant conditions. It's essentially about narrowing down possibilities based on prior information, making it a key tool for dealing with dependent events in probability.

On the GMAT Quantitative section, conditional probability often appears in problem-solving and data sufficiency questions, where you might analyze scenarios like medical diagnoses or card draws. Common traps include confusing it with independent events or overlooking conditional dependencies, which can lead to incorrect setups. Focus on mastering the formula—P(A|B) = P(A and B) / P(B)—and practicing word problems to identify when conditions matter most.
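The formula above can be sketched in a few lines of Python; the numbers are illustrative, not from any specific GMAT problem:

```python
# A minimal sketch of P(A|B) = P(A and B) / P(B), with illustrative numbers.
def conditional_probability(p_a_and_b, p_b):
    """Return P(A|B); undefined when P(B) = 0."""
    if p_b == 0:
        raise ValueError("P(A|B) is undefined when P(B) = 0")
    return p_a_and_b / p_b

# Example: P(A and B) = 0.25, P(B) = 0.5
print(conditional_probability(0.25, 0.5))  # 0.5
```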

A concrete tip: Draw a probability tree to visualize event dependencies.

Terms (50)

  1. Conditional Probability

    This is the probability of an event occurring given that another event has already occurred, denoted as P(A|B), which measures how the first event's likelihood changes based on the second event.

  2. Formula for Conditional Probability

    The formula is P(A|B) equals the probability of both A and B occurring divided by the probability of B, provided P(B) is greater than zero, allowing calculation of dependent events.

  3. Independent Events

    Events are independent if the occurrence of one does not affect the probability of the other, so P(A|B) equals P(A), simplifying many probability problems on the test.

  4. Dependent Events

    Events are dependent if the outcome of one influences the probability of the other, meaning P(A|B) differs from P(A), which is common in sequential draws or selections.

  5. Multiplication Rule for Dependent Events

    To find the probability of both events happening, multiply the probability of the first event by the conditional probability of the second given the first, like P(A and B) equals P(A) times P(B|A).
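    As a quick check of the multiplication rule, here is an assumed card-draw example worked with exact fractions:

```python
from fractions import Fraction

# P(A and B) = P(A) * P(B|A): probability that both cards drawn
# without replacement from a standard deck are hearts.
p_first_heart = Fraction(13, 52)
p_second_heart_given_first = Fraction(12, 51)  # one heart already removed
p_both_hearts = p_first_heart * p_second_heart_given_first
print(p_both_hearts)  # 1/17
```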

  6. Bayes' Theorem

    This theorem updates the probability of an event based on new information, stating P(A|B) equals [P(B|A) times P(A)] divided by P(B), and is used for reverse conditional probabilities in complex scenarios.
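    The theorem translates directly into code; the inputs below are illustrative numbers, not from any particular problem:

```python
from fractions import Fraction

def bayes(p_b_given_a, p_a, p_b):
    """Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Illustrative inputs: P(B|A) = 9/10, P(A) = 1/5, P(B) = 3/10.
print(bayes(Fraction(9, 10), Fraction(1, 5), Fraction(3, 10)))  # 3/5
```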

  7. Prior Probability

    This is the initial probability of an event before considering new evidence, which serves as the starting point in Bayes' Theorem for updating to posterior probability.

  8. Posterior Probability

    This is the revised probability of an event after incorporating new evidence, calculated using Bayes' Theorem to reflect updated beliefs based on observed data.

  9. Law of Total Probability

    This principle calculates the overall probability of an event by considering all possible mutually exclusive scenarios that could lead to it, summing the conditional probabilities weighted by their probabilities.
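    A sketch of the weighted sum, using an assumed three-way partition whose priors sum to 1:

```python
from fractions import Fraction

# Assumed partition of the sample space into scenarios A1, A2, A3,
# each paired with P(B | A_i).
scenarios = [
    (Fraction(1, 2), Fraction(3, 10)),  # (P(A_i), P(B | A_i))
    (Fraction(1, 3), Fraction(1, 2)),
    (Fraction(1, 6), Fraction(3, 5)),
]
# Law of total probability: P(B) = sum of P(B|A_i) * P(A_i)
p_b = sum(p_a * p_b_given_a for p_a, p_b_given_a in scenarios)
print(p_b)  # 5/12
```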

  10. Sample Space Partition

    In conditional probability, this involves dividing the sample space into mutually exclusive events, allowing the use of the law of total probability to compute overall probabilities.

  11. Conditional Probability with Venn Diagrams

    Venn diagrams illustrate overlapping events to visualize conditional probability, showing how the intersection of sets affects the probability of one event given another.

  12. Tree Diagrams for Probability

    These diagrams map out sequential events and their probabilities, helping to calculate conditional probabilities by branching from one event to the next in dependent scenarios.

  13. Joint Probability

    This is the probability of two events occurring together, which is related to conditional probability since P(A and B) equals P(A|B) times P(B), but it is not the same as the conditional probability itself.

  14. Marginal Probability

    This is the probability of an event regardless of other events, calculated by summing joint probabilities, and serves as a base for deriving conditional probabilities.

  15. Common Trap: Assuming Independence

    A frequent error is treating events as independent when they are not, leading to incorrect calculations of conditional probability in problems involving draws or selections.

  16. Strategy for Word Problems

    Break down word problems by identifying given events and conditions, then apply the conditional probability formula while carefully defining what is given and what is to be found.

  17. Reverse Conditional Probability

    This involves finding P(B|A) when P(A|B) is known, often requiring Bayes' Theorem to swap the conditions in probability calculations.

  18. Updating Probabilities

    In sequential events, probabilities are updated based on prior outcomes, using conditional probability to adjust for new information in ongoing scenarios.

  19. Independence Verification

    To verify if events are independent, check if P(A and B) equals P(A) times P(B); if not, conditional probability must be used for accurate results.
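    The check is a one-line comparison; here it is applied to one draw from a standard 52-card deck:

```python
from fractions import Fraction

# One draw from a standard deck:
# A = "card is a heart", B = "card is a king".
p_a = Fraction(13, 52)
p_b = Fraction(4, 52)
p_a_and_b = Fraction(1, 52)  # the king of hearts
print(p_a_and_b == p_a * p_b)  # True, so A and B are independent
```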

  20. Conditional Probability in Tables

    Probability tables organize data to compute conditional probabilities by dividing row or column probabilities, making it easier to handle multiple variables.
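    A sketch with an assumed two-way count table (100 students by gender and exam result); conditioning means dividing a cell count by its column total:

```python
from fractions import Fraction

# Assumed two-way count table.
counts = {
    ("male", "pass"): 30, ("male", "fail"): 20,
    ("female", "pass"): 40, ("female", "fail"): 10,
}
pass_total = counts[("male", "pass")] + counts[("female", "pass")]
# P(female | pass) = count(female and pass) / count(pass)
p_female_given_pass = Fraction(counts[("female", "pass")], pass_total)
print(p_female_given_pass)  # 4/7
```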

  21. Example: Drawing Cards

    In a standard deck, the probability of drawing a heart given that the card is red is 1/2, since 13 of the 26 red cards (hearts and diamonds) are hearts, illustrating a basic conditional scenario.

    From a standard deck, P(heart|red) = number of hearts divided by number of red cards.
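    The computation above, done with exact fractions:

```python
from fractions import Fraction

# Conditioning on "red" shrinks the sample space from 52 cards to 26.
hearts = 13
red_cards = 26  # 13 hearts + 13 diamonds
p_heart_given_red = Fraction(hearts, red_cards)
print(p_heart_given_red)  # 1/2
```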

  22. Example: Medical Testing

    The probability that a person has a disease given a positive test result is calculated using Bayes' Theorem, accounting for false positives in real-world applications.

    If a test is 90% accurate and the disease prevalence is 1%, P(disease|positive) is much less than 90%.
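    Working this out, reading "90% accurate" as 90% sensitivity and 90% specificity (an assumption; the card does not pin down both rates):

```python
from fractions import Fraction

prevalence = Fraction(1, 100)
sensitivity = Fraction(9, 10)          # P(positive | disease)
false_positive_rate = Fraction(1, 10)  # 1 - specificity = P(positive | no disease)

# Law of total probability: P(positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' Theorem: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(p_disease_given_positive)  # 1/12, roughly 8.3%
```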

  23. False Positive Rate

    In conditional probability contexts like testing, this is the probability of a positive result given that the condition is absent, which affects the interpretation of test outcomes.

  24. False Negative Rate

    This is the probability of a negative result given that the condition is present, influencing how conditional probabilities are used in diagnostic scenarios.

  25. Sensitivity in Probability

    This measures the probability of a positive test result given that the condition is present, a key conditional probability in evaluating test effectiveness.

  26. Specificity in Probability

    This is the probability of a negative test result given that the condition is absent, another important conditional measure in medical or quality control problems.

  27. Chain Rule of Probability

    This extends conditional probability for multiple events, stating that P(A, B, C) equals P(A) times P(B|A) times P(C|A and B), for sequential dependencies.
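    An assumed card-draw example of the chain rule on three dependent events:

```python
from fractions import Fraction

# Chain rule on three draws without replacement:
# P(all three cards are hearts) = P(H1) * P(H2|H1) * P(H3|H1 and H2)
p_three_hearts = Fraction(13, 52) * Fraction(12, 51) * Fraction(11, 50)
print(p_three_hearts)  # 11/850
```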

  28. Mutually Exclusive Events and Conditioning

    Mutually exclusive events cannot occur together, so the conditional probability P(A|B) is zero if A and B are mutually exclusive and B has occurred.

  29. Exhaustive Events

    These are events that cover the entire sample space, used in conditional probability to ensure all possibilities are accounted for in calculations like the law of total probability.

  30. Probability of A Given Not B

    This is P(A|not B), calculated as the probability of A occurring when B has not, which is useful in scenarios excluding certain outcomes.

  31. Normalized Conditional Probability

    In some problems, conditional probabilities are normalized to sum to one within a subset, ensuring they represent a complete probability distribution.

  32. Common Trap: Zero Probability

    If P(B) is zero, conditional probability P(A|B) is undefined, so always check for impossible events before applying formulas.

  33. Strategy for Bayes' Problems

    Identify the prior probabilities and likelihoods first, then plug into Bayes' Theorem, and double-check for any conditional dependencies in the problem.

  34. Conditional Probability in Combinations

    When dealing with selections without replacement, conditional probability adjusts for items already chosen, since each pick shrinks the remaining pool and makes successive events dependent.

  35. Expected Value with Conditions

    This is the expected value of a random variable given a certain event, calculated by weighting outcomes with their conditional probabilities.

  36. Variance with Conditions

    Conditional variance measures the spread of a random variable given an event, using conditional probabilities to assess risk in dependent scenarios.

  37. Example: Coin Flips

    The probability of getting two heads in a row given that the first flip is heads is 1/2, showing how conditioning on the first outcome affects the second.

    In two flips, P(second heads | first heads) = 1/2.
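    The same result by brute-force enumeration, which makes the "narrowed sample space" idea concrete:

```python
from fractions import Fraction
from itertools import product

# Enumerate the four equally likely outcomes of two fair flips,
# then restrict the sample space to those where the first flip is heads.
outcomes = list(product("HT", repeat=2))
given_first_heads = [o for o in outcomes if o[0] == "H"]
favorable = [o for o in given_first_heads if o[1] == "H"]
p = Fraction(len(favorable), len(given_first_heads))
print(p)  # 1/2
```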

  38. Example: Hiring Decisions

    The probability that a candidate is qualified given they passed the interview is found using conditional probability, factoring in interview accuracy.

    If 70% of qualified candidates pass and 20% of unqualified do, P(qualified|pass) depends on the applicant pool.
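    To make the pool dependence concrete, here is the Bayes computation under an assumed pool where half the candidates are qualified:

```python
from fractions import Fraction

# Assumed applicant pool: 50% of candidates are qualified.
p_qualified = Fraction(1, 2)
p_pass_given_qualified = Fraction(7, 10)    # 70% of qualified pass
p_pass_given_unqualified = Fraction(1, 5)   # 20% of unqualified pass

p_pass = (p_pass_given_qualified * p_qualified
          + p_pass_given_unqualified * (1 - p_qualified))
p_qualified_given_pass = p_pass_given_qualified * p_qualified / p_pass
print(p_qualified_given_pass)  # 7/9
```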

  39. Intersection of Events

    The probability of the intersection, P(A and B), is foundational for conditional probability, as it equals P(A|B) times P(B).

  40. Union of Events with Conditions

    For conditional unions, P(A or B | C) equals P(A|C) plus P(B|C) minus P(A and B|C), extending basic probability rules.

  41. De Morgan's Laws in Conditioning

    These laws relate complements of unions and intersections, such as P(not A and not B | C) equals P(not (A or B) | C); combined with the complement rule P(not A | B) equals 1 minus P(A|B), they help handle negated events.

  42. Probability Density with Conditions

    In continuous distributions, conditional probability is defined through density functions over a range of values, though the GMAT focuses almost entirely on discrete cases.

  43. Rare Event Assumption

    In some conditional problems, assuming events are rare simplifies calculations, like in Bayes' Theorem for low-probability occurrences.

  44. Sensitivity Analysis in Probability

    This involves testing how changes in conditional probabilities affect outcomes, a strategic approach for problem-solving on the test.

  45. Example: Urn Problems

    The probability of drawing a red ball given that the first was blue is calculated based on the urn's contents, illustrating dependent draws.

    From an urn with 3 red and 2 blue, P(red|first blue) adjusts for the removed ball.
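    The adjustment for the removed ball, computed directly:

```python
from fractions import Fraction

# Urn with 3 red and 2 blue balls; one blue ball has been drawn and removed,
# leaving 4 balls of which 3 are red.
red, blue = 3, 2
p_red_given_first_blue = Fraction(red, red + blue - 1)
print(p_red_given_first_blue)  # 3/4
```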

  46. Bayesian Updating Example

    Starting with a prior, new evidence updates it to a posterior via conditional probability, as in tracking success rates over trials.

  47. Error in Overcounting

    A trap in conditional probability is overcounting outcomes, so always verify the sample space after conditioning on an event.

  48. Conditional Independence

    Events A and B are conditionally independent given C if P(A and B|C) equals P(A|C) times P(B|C), a nuanced concept in complex problems.

  49. Markov Chains Basics

    These involve sequences where future states depend only on the current state, using conditional probability for transitions in GMAT-level problems.

  50. Strategy for Multiple Conditions

    When dealing with more than two events, apply the chain rule iteratively to break down complex conditional probabilities.