Kim Kardashian has a fraught relationship with ChatGPT, describing the AI chatbot as her "frenemy" after turning to it for help studying law, only to receive advice that turned out to be wrong.
In a wide-ranging interview with Vanity Fair, the entrepreneur and law apprentice described using the tool to check answers and unpack questions, before learning that confident-sounding responses can be flatly incorrect.
A Candid Confession of a Celebrity Law Student
Kardashian, who is pursuing a California law apprenticeship and has passed the state's First-Year Law Students' Examination, said she sometimes feeds ChatGPT prompts and photos of legal questions in bulk so she can study its explanations.
The results, she said, have not always been helpful. Her account captures a paradox many learners run into: the tool is fast and convincing, yet its answers can be wrong enough to derail preparation for high-stakes exams.
Her "frenemy" line also lands in a larger cultural moment. Celebrities have a way of making new technology feel normal, but her confession carries an important caveat: even polished-looking AI output needs to be checked, especially in a specialized field like law.
Hallucinations and the Law: Risks of Relying on AI
AI hallucinations, fluent-sounding answers that are factually wrong or simply made up, remain a known weakness of large language models. The effect is not merely theoretical. In a closely watched 2023 case in New York, at least two lawyers were sanctioned by Judge P. Kevin Castel after filing a brief that cited nonexistent decisions generated by ChatGPT. The American Bar Association has repeatedly urged lawyers to learn the limitations of AI tools and to keep human oversight in place, pointing to the duties of competence and diligence.
The law is unusually unforgiving of invention. A stray statute or phantom precedent can sink an argument, and exam graders penalize made-up citations. That makes unsupervised AI a risky study partner, no matter how convenient it is.
What the Data Tells Us About AI Reliability
Academic tests have demonstrated that advanced models can do impressively well on some multiple-choice and reasoning tasks, including bar-style questions.
But replications frequently find uneven results once prompts become more open-ended, fact-intensive, or require citation to sources. Independent audits by Stanford researchers and others have documented persistent hallucinations across a range of domains, particularly when models are pushed beyond the well-trodden territory of their training data.
Even OpenAI warns ChatGPT users that the tool can serve up unreliable information. Retrieval-augmented systems, which draw their answers from a set of known documents, generally deliver lower error rates, though not zero. And for learners, the gap between "sounds right" and "is right" is precisely where grades, or professional credibility, slip away.
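To make that distinction concrete, here is a minimal, purely illustrative sketch of the grounding idea behind retrieval-augmented systems. The passages, the crude keyword-overlap scoring, and the decision to refuse rather than guess are all invented for this example; they are not drawn from any real product or API.

```python
import re

# Toy sketch of the retrieval-augmented idea: answer only from a fixed set of
# known passages, and decline when nothing in that set supports the question.
# Passages, stopwords, and scoring here are placeholders for illustration.

KNOWN_PASSAGES = [
    "Negligence requires duty, breach, causation, and damages.",
    "Hearsay is an out-of-court statement offered to prove the truth of the matter asserted.",
]

STOPWORDS = {"the", "of", "a", "an", "is", "are", "to", "and", "what", "who", "in"}


def tokens(text: str) -> set[str]:
    """Lowercased word set with stopwords removed, for crude overlap scoring."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}


def retrieve(question: str) -> str | None:
    """Return the best-overlapping passage, or None if nothing matches at all."""
    q = tokens(question)
    best, best_score = None, 0
    for passage in KNOWN_PASSAGES:
        score = len(q & tokens(passage))
        if score > best_score:
            best, best_score = passage, score
    return best


def grounded_answer(question: str) -> str:
    passage = retrieve(question)
    if passage is None:
        # Declining is the point: no supporting source, no confident guess.
        return "No supporting passage found; consult a primary source."
    return f"Based on the source material: {passage}"


print(grounded_answer("What are the elements of negligence?"))
print(grounded_answer("Who won Palsgraf v. Long Island Railroad?"))
```

The refusal branch is the design point: a system willing to say it has no source is exactly what an unsupervised chatbot, tuned to sound confident, is not.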
Why Kardashian’s Comment About ChatGPT Resonates
Because it comes from Kardashian, the story will get plenty of attention, but it reflects a broader tension as AI settles into the role of study buddy in classrooms, bootcamps, and professional training. Surveys from Pew Research Center indicate that use of generative AI tools has risen, especially among young adults. With that growth comes a habit of turning to a chatbot for a quick answer and, too often, accepting it without a second opinion.
Her anecdote also illustrates the psychology of human-AI interaction. People naturally read intention and authority into conversational systems. When those systems respond with well-groomed conviction or comforting bromides, it becomes easy to blur the line between style and substance. The trap is amplified in fields like law and medicine, where errors carry high costs and even legal liability.
Study and Work Safety Nets You Can Use Today
Kardashian's "frenemy" label is a useful heuristic: treat ChatGPT as a smart colleague whose work still has to be checked.
- Anchor answers in the base text.
- Ask for citations (and check them; a sketch of that check follows this list).
- Cross-reference with a trusted database or textbook.
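As a purely hypothetical illustration of the citation-checking step, the sketch below flags any citation a chatbot supplies that does not appear in a trusted reference list. The case names and the "verified" set are placeholders, not a real database.

```python
# Hypothetical illustration of "ask for citations, then check them": compare
# the citations a chatbot supplies against a trusted list before relying on
# them. The verified set and the chatbot output below are made up.

VERIFIED_CITATIONS = {
    "Palsgraf v. Long Island Railroad Co.",
    "International Shoe Co. v. Washington",
}


def flag_unverified(cited: list[str]) -> list[str]:
    """Return citations that do not appear in the trusted reference list."""
    return [c for c in cited if c not in VERIFIED_CITATIONS]


chatbot_citations = [
    "Palsgraf v. Long Island Railroad Co.",
    "Varghese v. China Southern Airlines",  # the kind of case that may not exist
]

for case in flag_unverified(chatbot_citations):
    print(f"Needs manual verification: {case}")
```

In practice the trusted list would be a real citator, and a human would still read every flagged case before it went anywhere near a brief or an exam answer.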
In legal workflows, many firms now require written disclosures when AI is employed and mandate human review before anything leaves a draft folder.
AI can speed up comprehension by summarizing cases, generating issue spotters, and suggesting outlines, but it should not be the final arbiter. As Kardashian's experience shows, speed without accuracy is a liability. The smarter game is to leverage the tool's strengths while keeping the final word resolutely human.
The Bottom Line on ChatGPT as a Study Companion
Kardashian's confession is a timely reminder: generative AI tools can be delightful, capable, and catastrophically off-base, sometimes within the same conversation. "Frenemy" is a glib term for ChatGPT, but it hints at a growing consensus among educators, lawyers, and technologists: use it, but verify it. And when there is a great deal to lose, trust expertise, not eloquence, to make the call.