FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.

Kim Kardashian on ChatGPT: Her Self-Described ‘Frenemy’

By Gregory Zuckerman
Last updated: November 7, 2025 6:18 pm
Technology
6 Min Read

Kim Kardashian has a fraught relationship with ChatGPT, describing the AI chatbot as her “frenemy” after turning to it for help studying law, only to get errant advice.

In a wide-ranging interview with Vanity Fair, the entrepreneur and law apprentice said she has used the tool to verify answers and understand questions, only to learn that confident-sounding responses can be completely wrong.

Table of Contents
  • A Candid Confession of a Celebrity Law Student
  • Hallucinations and the Law: Risks of Relying on AI
  • What the Data Tells Us About AI Reliability
  • Why Kardashian’s Comment About ChatGPT Resonates
  • Study and Work Safety Nets You Can Use Today
  • The Bottom Line on ChatGPT as a Study Companion

A Candid Confession of a Celebrity Law Student

Kardashian, who is working through a California law apprenticeship and previously passed the state's First-Year Law Students' Examination, said she sometimes feeds ChatGPT batches of prompts and photos of legal questions in order to study its explanations.

The results, she said, have not always been helpful. Her account captures a paradox many students have noticed: the tool is fast and convincing, yet its answers can be misleading enough to derail preparation for high-stakes exams.

Her "frenemy" line also lands in a larger cultural moment. Celebrities have a way of normalizing new technology, but her confession carries an important caveat: even polished-looking AI output needs to be checked, especially in specialized fields like law.

Hallucinations and the Law: Risks of Relying on AI

AI hallucinations, fluent-sounding but factually incorrect or fabricated answers, remain a known vulnerability of large language models. The effect is not merely theoretical. In a closely watched New York case, at least two lawyers were sanctioned after filing a brief, reviewed by Judge P. Kevin Castel, that cited nonexistent decisions produced by ChatGPT. The American Bar Association has repeatedly admonished lawyers to learn the limitations of AI tools and not relinquish human oversight, highlighting duties of competence and diligence.

The law is unusually unforgiving of creativity. A stray statute or phantom precedent can undermine an argument, and exam graders penalize invented citations. That makes unsupervised AI a risky study partner, no matter how convenient it is.

What the Data Tells Us About AI Reliability

Academic benchmarks have shown that advanced models can perform impressively on some multiple-choice and reasoning tasks, including bar-style questions.

But replications frequently uncover uneven results once prompts become more open-ended, fact-intensive, or demand citation of sources. Independent audits by Stanford researchers and others have found persistent hallucinations across a variety of domains, particularly when models are pushed beyond the well-trodden territory of their training data.


Even OpenAI warns ChatGPT users that it can serve up unreliable information. Retrieval-augmented systems, which draw their answers from a known set of documents, generally deliver lower error rates, though not zero. And for learners, the gap between "sounds right" and "is right" is precisely where grades, or professional credibility, can slip away.

Why Kardashian’s Comment About ChatGPT Resonates

Because of Kardashian's platform, her experience will get plenty of attention, but it reflects a broader tension as AI takes on the role of study buddy in classrooms, bootcamps, and professional training. Surveys from the Pew Research Center indicate that use of generative AI tools has risen, especially among young adults. With that growth comes a habit of turning to a chatbot for a quick answer and, too often, accepting it without a second opinion.

Her anecdote also illustrates the psychology of human-AI interaction. People naturally project intention and authority onto conversational systems. When those systems respond with polished conviction or comforting bromides, it is easy to blur the line between style and substance. That trap is amplified in fields like law and medicine, where errors carry high costs or even legal liability.

Study and Work Safety Nets You Can Use Today

Kardashian's "frenemy" framing is a useful heuristic: treat ChatGPT as a smart colleague whose work needs checking.

  • Anchor answers in the base text.
  • Ask for citations (and check them).
  • Cross-reference with a trusted database or textbook.

In legal workflows, many firms now require written disclosures when AI is employed and mandate human review before anything leaves a draft folder.

AI can speed up comprehension, summarizing cases, generating issue spotters, and suggesting outlines, but it should not be the final arbiter. As Kardashian's experience shows, speed without accuracy is a liability. The smarter play is to leverage the tool's strengths while keeping the final word resolutely human.

The Bottom Line on ChatGPT as a Study Companion

Kardashian's confession is a timely reminder: generative AI can be delightful, capable, and catastrophically off-base, often in the same session. "Frenemy" is a glib term for ChatGPT, but it hints at a growing consensus among educators, lawyers, and technologists. Use it, but verify it. And when there is a great deal to lose, trust expertise, not eloquence, to make the call.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.