Gone is the 500-character limit on customized chat instructions in NotebookLM, which Google has raised to 10,000 characters. Confirmed by the NotebookLM team on X, the update unlocks far better control over not just tone and persona but also rules, finally letting detailed guidance grow beyond a cramped prompt box.
The change is big for researchers, students, and publishers who use NotebookLM to consume long PDFs or create Audio Overviews. “We allow a 20x jump, and that means you can now provide guiding information, such as a style guide or checklist or domain-specific definitions and exceptions, without shaving off too much context,” the company wrote in an update.
Why This Limit Matters for NotebookLM Users
Large language models are very sensitive to their "system" or steering directives. At 500 characters, users had to trade off tone, constraints, examples, or role, which often led to generic or inconsistent responses. At 10,000 characters (roughly 1,500 to 2,000 words), there is space for a durable brief: the aims, do-and-don't rules, definitions of voice and audience, formatting specifications, and fallback behaviors for when sources are ambiguous.
In practice, longer instructions minimize prompt drift and rework. Instead of repeatedly reminding the model to reference particular sources or follow a specific editorial style, users can encode that expectation once, along with example outputs. Since LLMs operate on tokens rather than characters (about four characters per token on average), the expanded field works out to a few thousand tokens of stable guidance, enough to significantly improve steerability without consuming the full context window.
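As a rough sketch of that arithmetic, the widely used four-characters-per-token heuristic (an approximation; NotebookLM's actual tokenizer is not public) maps the old and new limits to token budgets like this:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the common ~4 chars/token heuristic."""
    return round(len(text) / chars_per_token)

old_limit = 500      # previous character cap
new_limit = 10_000   # new character cap

# A maxed-out instruction field under each limit:
print(estimate_tokens("x" * old_limit))  # ~125 tokens
print(estimate_tokens("x" * new_limit))  # ~2,500 tokens
```

The exact counts depend on the real tokenizer and the text itself, but the 20x jump in characters translates to a similar jump in instruction tokens.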
What’s Different in the App After the Limit Increase
The change appears in the Configure notebook panel within the chat interface, where you assign the AI its role, goals, and style. That instruction layer is applied on top of your inputs, the PDFs, docs, and notes you upload, so the model can reason over a conversation more cohesively. Google has also made recent strides in handling context and maintaining coherence over the course of a session, and the much larger instruction space removes a major bottleneck between those upgrades and real-world use.
Early Impact and Competitive Context for NotebookLM
Practically speaking, the change allows for richer personas and playbooks. A newsroom could build a persistent "section editor" that enforces house style, ties claims to specific source passages, and flags weak sourcing. A research lab might encode literature-review procedures, citation formats, and a glossary to prevent misreadings. Teachers can encode rubrics so the assistant compares student drafts against specific criteria.
For the wider market, custom instruction length has become a point of differentiation. Comparable tools, such as GPT-based builders and enterprise assistants from major AI vendors, already allow a few thousand characters for system prompts. At 10,000 characters, NotebookLM becomes more competitive as a grounded research companion, especially because it ties those instructions directly to your own curated sources rather than the open web by default.
How to Make Good Use of the Extra Instruction Space
Think of the guidance as a living style guide. Begin with a pithy objective, follow that up with a list of rules, and close with 2–3 short examples of exemplary responses. Keep directions explicit:
- Always include page numbers
- Summarize and then analyze
- Use a neutral tone
- Flag if evidence is lacking
Use structured elements the model can latch onto: bullet checklists, headed sections such as Goal/Audience/Tone/Sources/Format, and acceptance criteria. Define important terminology and acronyms in a glossary to minimize confusion. If you use Audio Overviews, state the depth and pacing you want ("30-second overview," "methodology, then results"). Finally, spell out boundaries: what the assistant should not do is as important as what it should.
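Putting those pieces together, a structured brief might look something like this (the headings and rules below are illustrative, not an official NotebookLM format):

```
Goal: Summarize uploaded papers for a policy audience.
Audience: Non-specialist staffers; assume no statistics background.
Tone: Neutral, plain English; define jargon on first use.
Sources: Cite the source document and page number for every claim.
Format: Three-sentence overview, then bulleted findings, then open questions.
Glossary: "RCT" = randomized controlled trial; "CI" = confidence interval.
Boundaries: If the sources do not support a claim, say so; do not speculate.
```

At roughly 500 characters, a template like this would have consumed the entire old limit; under the new one, it leaves ample room for rules and example outputs.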
Anticipate real gains: fewer follow-up prompts, less drift over long threads, and responses that reflect your preferences and your domain's conventions. A well-delineated brief doesn't replace common sense, but it generally produces tighter work.
What Has Not Changed Despite the Higher Limit
Upload limits and daily usage caps still apply: the update raises the instruction limit, not the others. It doesn't unlock higher rate limits or increase the number or size of files you can add to a notebook. Nor does it remove the need to check claims against your sources. As with all AI-assisted research, maintain human oversight when citing and validating data or offering nuanced interpretation.
The Takeaway: Why This Update Matters for Users
By raising the custom instruction limit from 500 to 10,000 characters, Google has fixed the most obvious friction point in NotebookLM. The result is a more steerable assistant that can match your voice, adhere to your rules, and stay grounded in your own materials. It is a simple change with outsized impact, and it makes NotebookLM far more useful for long-form projects and collaborative research.