Kim Kardashian says AI tripped her up on her way to becoming a lawyer. In a video interview published this week by Vanity Fair, the reality TV star and legal apprentice described turning to ChatGPT to double-check her answers while studying, only to find that it was “so confident in being wrong—it shocked me but also comforted me.” The result, she said, was missed questions and failed practice tests.
What She Said About ChatGPT And Why It Matters
Kardashian recounted a common scenario: take a picture of a challenging question, drop it into the chatbot, rinse and repeat. The responses, she said, often sounded assured but were incorrect, and the system would occasionally “talk back,” telling her to go with her instincts after giving a wrong answer. She described the experience as “insane,” casting it as a cautionary tale about outsourcing judgment to a machine built to sound as if it knows what it’s talking about even when it doesn’t.
The remarks sit at the intersection of celebrity culture and technology. Celebrity or not, Kardashian is part of an increasingly diverse cohort treating large language models as study partners. When those tools get it wrong in high-stakes settings, especially in law, where false or mistaken statements can carry real consequences, the gap between confidence and accuracy becomes newsworthy.
AI Study Aids And Their Real-World Limitations
OpenAI and other developers have promoted chatbots as study aids, and with some reason: the models can summarize dense text, generate outlines, and simulate quizzes. But they also hallucinate, generating plausible-sounding falsehoods. In October, a team of eleven researchers at Stanford’s Center for Research on Foundation Models published a paper documenting “sycophancy,” the tendency of models to mirror a user’s assumptions rather than contradict them. That trait flatters the learner, but it can also cement mistakes.
Legal education adds another wrinkle. Legal issues often turn on narrowly defined jurisdictional rules, exceptions, and citations. A system that blends doctrines from different states or invents cases that merely sound plausible will mislead even a conscientious student. And a model that accurately states general principles may still falter on fact patterns, where success depends on deftly spotting and applying the relevant issues under time pressure.
Bar Exam Reality Check For California Aspirants
Rather than pursue a traditional J.D., Kardashian followed California’s apprenticeship path, “reading the law.” She had already passed the state’s First-Year Law Students’ Examination, often called the baby bar, which has historically had pass rates ranging from 20% to 30%, according to data provided by the State Bar of California. The full California bar exam remains among the most challenging in the country, with pass rates that typically range from the low 30s to around 50%, depending on the sitting.
Those numbers underscore a basic point: well-prepared candidates get questions wrong, too, and AI makes a convenient scapegoat in an infamously brutal process. But Kardashian’s ordeal shows how leaning too heavily on a confident chatbot can compound the problem, particularly when students bypass textbooks and authoritative outlines in favor of quick answers.
What Law Educators Recommend For Using AI
Law professors and bar tutors increasingly advise a “trust but verify” stance:
- Use AI to generate practice hypos, outline notes, or explain a concept you have already learned.
- Check every legal claim against casebooks, commercial outlines, or primary authority such as statutes and controlling case law.
- For essay practice, feed model answers to the AI only after drafting your own, using it to surface issues you failed to spot rather than as a substitute for rule statements.
- Insist on a pinpoint citation whenever a model invokes a rule and cross-reference it in an official reporter or annotated code; odd or missing citations are red flags.
- Schedule “AI-free” timed multiple-choice sessions to see whether pattern recognition and pacing are improving without the crutch.
What AI Companies Say About Study And Accuracy
AI developers acknowledge the limits. OpenAI, for one, cautions that its models can be inaccurate and are not a substitute for professional legal advice. At the same time, labs tout startling standardized-test scores; GPT-4 reportedly performed well on simulated Uniform Bar Exam questions. But those results come from controlled testing and may not translate to a particular state’s exam or to the lawyerly “apply the rule to these facts” problems that tend to dominate law exams.
The paradox is plain: AI sounds most confident when it is summarizing, and proves least dependable when exacting specificity and jurisdictional nuance matter most. For bar candidates, the technology can speed up learning, so long as it is paired with human judgment and careful verification.
Bottom Line For Students Using ChatGPT To Study Law
Kardashian’s gripe lands because it captures a larger shift in study habits. ChatGPT can play the part of a coach, a co-pilot, even a cheerleader. None of those roles guarantees accuracy. Until models consistently track and cite-check the law, the safest habits look less like law’s future than its past: generations of lawyers rose or fell by reading closely, checking sources, and accepting ultimate responsibility for getting the final answer right.