
AI Bootcamp (Lesson 1): How LLMs Work, and How to Defend Against Hallucinations

A four-week joint experiment for non-technical investors. Today: Two summaries of a real filing, plus your personal “always verify” checklist.

If you are viewing this online and would like to participate in the AI Bootcamp, opt in to the bootcamp mailing list (you can opt out at any time). Participation is open to MOI members and paid Latticework subscribers.

A note before we begin: This is the first lesson in a four-week experiment. I am doing every lesson alongside you, on the same tools, with the same constraints. Some lessons will land cleanly. Others will hit dead ends and need rework. We’ll figure out what works, together.

Let’s launch into our first lesson.

We start with one of the most important insights in the entire bootcamp: learning to tell when an AI model is merely being fluent and when it is actually being right.

Every limit or hallucination in your future idea engine traces back to how large language models (LLMs) read and write text. If we understand tokens and context windows, we avoid the most expensive mistakes. If we do not, confident-sounding fabrications about earnings, ratios, quoted language, or filing dates can quietly corrupt an investment thesis.
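To make “tokens” concrete: a model never sees words or dollar figures directly; it sees a sequence of integer token IDs. Below is a minimal sketch, assuming Python and the open-source tiktoken tokenizer (one common tokenizer; the tool you use may rely on a different one), showing how a filing-style sentence gets split. Notice that the dollar figure breaks into several pieces, which is often cited as one reason models garble numbers.

```python
# Minimal sketch: how a tokenizer splits text into the integer token IDs
# an LLM actually reads. Assumes the open-source tiktoken library
# (pip install tiktoken); "cl100k_base" is one common encoding, chosen
# here purely for illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# A hypothetical filing-style sentence, not a real figure.
sentence = "Net income was $1,234.5 million in fiscal 2023."
token_ids = enc.encode(sentence)

print(f"{len(sentence)} characters -> {len(token_ids)} tokens")
for tid in token_ids:
    # Decode each token ID back to its text piece to see the split.
    piece = enc.decode_single_token_bytes(tid).decode("utf-8", errors="replace")
    print(f"{tid:>6}  {piece!r}")
```

The practical takeaway: the model reasons over these pieces inside a fixed-size context window. Anything that falls outside the window simply does not exist for the model, and any number it reassembles from pieces deserves a manual check against the filing itself.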
