Claude vs ChatGPT vs Gemini for SIE: Which AI Explains Best?

Quick Answer

All three top AI assistants (Claude, ChatGPT, Gemini) can explain SIE concepts at a useful level, but they differ. Claude tends to give the cleanest plain-English explanations and is most cautious about unverified rule citations. ChatGPT is the most confident and produces the most polished study material, but hallucinates rule numbers more often. Gemini is the fastest and pulls live information well, but its finance explanations are inconsistent. For daily concept tutoring, Claude was the most reliable in our testing.

How we tested

We gave each of the three AI assistants the same five SIE-relevant concept questions and graded the responses on four criteria.

The five concepts:

  1. Explain the difference between Rule 144 and Rule 144A in plain English.
  2. What does the QDIA designation mean for a 401(k) plan, and why was it created?
  3. Explain how municipal bond taxation differs from corporate bond taxation, including the de minimis rule.
  4. What is “selling away” and why is it prohibited?
  5. Walk through the trade settlement process from execution to clearing under T+1.

The four grading criteria:

  • Accuracy: Are the facts right?
  • Citation reliability: When the model cites a rule or regulation, is it correct?
  • Clarity: Would an SIE candidate with no finance background understand it?
  • Practical utility: Does it help you remember the concept on test day?

Each criterion scored 1 to 5. Maximum possible score: 20 per question, 100 across all five.
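If you want to sanity-check the arithmetic, here is a minimal scoring sketch. The rubric (four criteria, 1 to 5 each, five questions) comes from above; the criterion variable names and the example scores are hypothetical, not our actual grading data.

```python
# Minimal scoring sketch: four criteria per question, each scored 1-5,
# five questions per model. Example scores are hypothetical.
CRITERIA = ["accuracy", "citation_reliability", "clarity", "practical_utility"]

def question_score(scores: dict[str, int]) -> int:
    """Sum the four criterion scores for one question (max 20)."""
    return sum(scores[c] for c in CRITERIA)

def total_score(per_question: list[dict[str, int]]) -> int:
    """Sum question scores across all five questions (max 100)."""
    return sum(question_score(q) for q in per_question)

# A hypothetical question graded 4/5 on every criterion scores 16/20;
# five such questions would total 80/100.
example = {"accuracy": 4, "citation_reliability": 4, "clarity": 4, "practical_utility": 4}
print(question_score(example))      # 16
print(total_score([example] * 5))   # 80
```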

We tested the current free-tier default models as of spring 2026: Claude Sonnet 4.6 (claude.ai free tier), ChatGPT GPT-5.4 (chatgpt.com free tier), and Gemini 3 Flash (gemini.google.com free tier).

How did Claude do?

Total score: 86 / 100.

Strengths:

  • Plainest language. Claude consistently used analogies and avoided jargon-stacking. The Rule 144 / 144A explanation in particular distinguished the two using a straightforward “who’s selling, to whom” framing without burying the reader.
  • Citation caution. When asked for rule numbers, Claude often hedged (“This is governed by SEC Rule 144, though check the current SEC release for any updates”). When it did cite, the citations were accurate in 4 of 5 cases.
  • Balanced depth. Answers were long enough to be useful but stopped before bloating into textbook chapters.

Weaknesses:

  • Slightly slower than ChatGPT for the same prompt.
  • Occasionally added “you should consult a financial professional” disclaimers that aren’t useful when you’re studying.

Best response: the QDIA explanation. Claude framed it around “what happens when a 401(k) participant doesn’t pick investments” before getting into the technical definition, which is exactly how an SIE candidate would encounter it on the exam.

How did ChatGPT do?

Total score: 81 / 100.

Strengths:

  • Most polished prose. The output reads like a textbook chapter. If you wanted to copy-paste study notes, ChatGPT’s were the most ready-to-use.
  • Strong on big-concept questions. The municipal bond taxation walkthrough was excellent: it correctly covered federal exemption, in-state exemption rules, and the de minimis discount mechanic with a worked numerical example.
  • Confident structure. Headings, bullet points, and bolded terms came naturally without prompting.

Weaknesses:

  • Hallucinated citations. In the selling-away question, ChatGPT cited “FINRA Rule 3270” (real rule, but governs outside business activities, not selling away). The actual rule is FINRA Rule 3280. A candidate who copied that into their notes would have one fact wrong forever.
  • Overconfidence. ChatGPT did not hedge. When wrong, it was wrong with the same authoritative tone as when right.
  • T+1 walkthrough was slightly outdated. It referenced “the recent move to T+1” as if it were still novel, with one comment about T+2 transitions that read awkwardly post-2024.

Best response: the municipal bond taxation question. The de minimis explanation with a worked example was the clearest of the three.
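If the de minimis mechanic is new to you, the underlying arithmetic is short: the threshold is 0.25% of face value per full year remaining to maturity, a market discount below that threshold keeps capital-gain treatment at maturity, and a larger discount is taxed as ordinary income. Here is a minimal sketch with hypothetical numbers (the $10,000 bond and purchase prices are invented for illustration, not taken from any model's answer):

```python
# De minimis threshold sketch for a discount municipal bond.
# All dollar figures are hypothetical, for illustration only.
def de_minimis_threshold(face_value: float, full_years_to_maturity: int) -> float:
    """0.25% of face value per full year remaining to maturity."""
    return face_value * 0.0025 * full_years_to_maturity

face = 10_000                                # $10,000 face value
threshold = de_minimis_threshold(face, 8)    # 8 full years -> $200

for purchase_price in (9_850, 9_700):
    discount = face - purchase_price
    if discount < threshold:
        treatment = "de minimis: gain at maturity treated as capital gain"
    else:
        treatment = "not de minimis: discount taxed as ordinary income at maturity"
    print(f"Bought at ${purchase_price:,}: discount ${discount} vs threshold ${threshold:.0f} -> {treatment}")
```

The exam tests the treatment (capital gain versus ordinary income), not the decimal math, but running the numbers once tends to make the rule stick.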


How did Gemini do?

Total score: 73 / 100.

Strengths:

  • Fastest response time. Noticeably quicker than the other two.
  • Live information access. When asked about T+1 settlement, Gemini correctly referenced the May 2024 SEC rollout and surrounding context that older training cutoffs would miss.
  • Decent on factual questions. Direct definitions (“what is Rule 144?”) came out roughly right.

Weaknesses:

  • Inconsistent depth. Some answers were full and well-organized; others were surprisingly thin. The selling-away question got a 2-paragraph answer that mentioned “FINRA rules” but never specified which.
  • Weaker analogies. Where Claude said “imagine you’re an employee with stock options you can’t sell yet,” Gemini stayed in textbook register.
  • Format drift. Sometimes used markdown headers, sometimes didn’t, sometimes used emoji bullets. Made copy-pasting into study notes annoying.

Best response: the T+1 walkthrough. The live-information advantage showed here.

Side-by-side comparison

| Criterion | Claude | ChatGPT | Gemini |
| --- | --- | --- | --- |
| Accuracy | 4.4 / 5 | 4.0 / 5 | 3.6 / 5 |
| Citation reliability | 4.6 / 5 | 3.8 / 5 | 3.8 / 5 |
| Clarity for beginners | 4.6 / 5 | 4.4 / 5 | 3.8 / 5 |
| Practical utility | 4.2 / 5 | 4.0 / 5 | 3.6 / 5 |
| Total (out of 20) | 17.8 | 16.2 | 14.8 |

Which AI should I use as a tutor?

For most SIE candidates, Claude is the best default. Cleaner explanations, fewer hallucinated citations, more cautious tone when it doesn’t know something.

But there’s a real argument for using two models:

  • Claude for “explain this concept to me like I have no finance background.”
  • ChatGPT for “summarize this into study notes I can paste into my outline.”

The first task rewards Claude’s plainer language; the second rewards ChatGPT’s more polished structure.

Gemini is the weakest of the three for SIE tutoring at this point, despite being free and fast. The inconsistency hurts more than the speed helps.

Are paid tiers worth it?

Marginally, for a 4-to-6-week SIE exam prep window. The paid tiers (Claude Pro, ChatGPT Plus, Gemini Advanced) offer:

  • Larger context windows (helpful if you want to paste in long study materials)
  • Faster responses
  • Access to more capable models in some cases (though the free tiers usually get the flagship within 6 to 12 months)

For a one-shot SIE prep, a $20 monthly subscription for one of these is reasonable if you’ll use it daily for serious tutoring. It’s overkill if you’re just spot-checking concepts.

What you should not use AI for in SIE prep

We’ve covered this elsewhere, but to consolidate:

Do not:

  • Generate practice questions
  • Trust rule citations without verification
  • Use AI as your primary study tool
  • Ask AI to grade your practice answers

Do:

  • Ask for plain-English explanations of concepts you find confusing
  • Get analogies and worked examples for math-heavy topics (taxation, options pricing)
  • Use it to summarize long readings into study notes
  • Quiz yourself by asking AI to explain why an answer is right after you’ve checked it against a verified source

How does AI tutoring fit with everything else?

For a 5-week SIE study plan, AI tutoring should be maybe 10 to 15% of your time. The rest goes to practice questions, flashcards, and full-length practice exams.

If you find yourself spending more than that on AI conversations, you’re probably substituting “feeling productive” for actual exam-prep work. The exam tests fast recall under pressure, and AI conversations don’t build that skill.

The bottom line

Claude wins for SIE concept tutoring on the strength of clearer explanations and better citation discipline. ChatGPT is a close second and a better fit if you want polished study notes. Gemini is fast but inconsistent. Use whichever you prefer, but use them as a tutor, not a study system. Real preparation comes from real practice questions, real flashcards, and real full-length exams. AI is supplementary, not foundational.