ChatGPT (GPT-5.4, with no special prompting) scored roughly 70% to 75% on a 75-question SIE practice exam in our testing, which is right at the FINRA passing threshold. It performed best on conceptual questions about products and capital markets, and worst on specific regulatory thresholds and prohibited-activities scenarios. It would probably pass on a good day, fail on a bad one, and you should not trust its answers when you study.
Did ChatGPT actually pass our practice SIE?
Marginally. We ran a 75-question SIE practice exam (mirroring FINRA's official content outline weights) through ChatGPT three times across different sessions to control for variance.
The average across the three runs was 71.3%. The official SIE passing score is 70%, so ChatGPT passed twice and failed once.
That's not a comfortable margin. A human candidate scoring 71% on practice exams would not be ready to schedule the real test. And the failure pattern reveals more than the average score does.
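The pass/fail aggregation itself is simple arithmetic. The per-run correct counts below are hypothetical placeholders chosen only to illustrate the calculation (pass twice, fail once, average just above 70%); they are not our actual raw scores.

```python
# Aggregate multiple practice-exam runs against the SIE threshold.
# Correct counts here are hypothetical, for illustration only.
PASSING_PCT = 70.0   # official SIE passing score
QUESTIONS = 75

runs_correct = [52, 55, 54]  # hypothetical correct answers per run

pcts = [100 * c / QUESTIONS for c in runs_correct]
average = sum(pcts) / len(pcts)
passes = sum(p >= PASSING_PCT for p in pcts)

print(f"average {average:.1f}%, passed {passes} of {len(pcts)} runs")
```

The point of averaging percentages across sessions is to smooth out sampling noise in any single run, which matters when the score sits this close to the threshold.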
Where did ChatGPT fail?
The misses concentrated in three areas.
1. Specific regulatory thresholds. Questions like "What is the minimum maintenance margin requirement under Reg T?" or "Within how many business days must a customer complaint be reported on Form U4?" ChatGPT often gave plausible-sounding but wrong numbers. In one run it confidently said maintenance margin was 50% (it's 25% under FINRA; Reg T's 50% is the initial requirement, a different number set by a different rule).
2. Prohibited-activities scenarios. Questions where you read a fact pattern and identify whether it's churning, free-riding, selling away, or a permitted activity. The model can recognize the textbook definitions but struggles with the borderline cases that the SIE actually tests.
3. Newer or less common rules. Anything added or revised in the last 3 to 4 years showed up as either outdated or hedged ("this may have been updated, please verify"). Reg BI specifics, recent FINRA guidance on crypto-asset securities, and the May 2024 T+1 settlement transition were all weak spots.
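The initial-vs-maintenance confusion in point 1 drives a standard exam-style calculation: at what price does a long margin position trigger a maintenance call? A minimal sketch using the real percentages (Reg T initial 50%, FINRA maintenance 25%); the function name and structure are ours:

```python
# The two numbers the model conflated: initial margin (Reg T, set by
# the Federal Reserve) vs. minimum maintenance margin (set by FINRA).
MARGIN = {
    "initial_reg_t": 0.50,      # deposit required to open the position
    "maintenance_finra": 0.25,  # minimum equity to keep it open
}

def maintenance_call_price(purchase_price: float) -> float:
    """Price below which a long margin position draws a maintenance
    call, assuming the standard 50% initial deposit (textbook formula:
    debit balance divided by 1 minus the maintenance percentage)."""
    debit = purchase_price * (1 - MARGIN["initial_reg_t"])
    return debit / (1 - MARGIN["maintenance_finra"])

print(round(maintenance_call_price(100.0), 2))  # → 66.67
```

A $100 stock bought on 50% margin carries a $50 debit; equity falls to 25% of market value when the price hits $50 / 0.75 ≈ $66.67, which is exactly the kind of two-step calculation where confusing the two percentages produces a confidently wrong answer.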
Where it did well: capital markets fundamentals (issuers, dealers, market structure), product mechanics (how options work, how municipal bonds are taxed), and broad regulatory roles (SEC vs FINRA vs MSRB jurisdiction).
Why does it confidently get things wrong?
LLMs like ChatGPT generate text that looks like the right answer based on patterns in training data. They don't have access to a structured database of FINRA rules. When the model has seen the right answer many times in its training corpus (e.g., "what is a stock?"), it gets it right. When the right answer is buried in obscure regulatory text and the wrong answers also appear plausibly in financial writing, it can fail.
The dangerous part isn't that ChatGPT gets things wrong. Every study tool has an error rate. The dangerous part is that it gets things wrong with the same confident tone it uses when right. If you're a beginner, you can't tell the difference.
Did ChatGPT show its work?
Yes, and that's where things got interesting. When asked to explain its reasoning, the model often produced clean, well-structured walkthroughs that referenced "FINRA Rule X" or "SEC Reg Y." About 1 in 8 of those citations were either:
- The wrong rule number (real rule, but governs something different)
- A made-up rule number that sounds real
- A real rule cited for the wrong reason
These are textbook hallucinated citations. They're harder to spot than wrong final answers because they look more authoritative.
If ChatGPT tells you "this is governed by FINRA Rule 2090," do not write that down without verifying it on FINRA's official rule lookup. The model invents plausible-looking rule numbers with measurable frequency. Your study notes should never include a rule citation that came only from an LLM.
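One way to make that habit mechanical is a first-pass filter over the model's output: extract every cited FINRA rule number and flag anything you haven't personally verified. A minimal sketch; the verified set here is a tiny illustrative sample of real rules, not a substitute for FINRA's actual rule lookup:

```python
import re

# Tiny illustrative sample of real FINRA rule numbers and topics.
# A real workflow would check against FINRA's official rulebook.
VERIFIED_RULES = {
    "2090": "Know Your Customer",
    "2111": "Suitability",
    "3110": "Supervision",
}

def unverified_citations(text: str) -> list[str]:
    """Return cited FINRA rule numbers not in the verified sample."""
    cited = re.findall(r"FINRA Rule (\d{4})", text)
    return [n for n in cited if n not in VERIFIED_RULES]

answer = ("Churning is addressed under FINRA Rule 2111; "
          "see also FINRA Rule 9999.")
print(unverified_citations(answer))  # → ['9999']
```

This catches nothing about *wrongly applied* real rules (the third failure mode above), but it surfaces unfamiliar rule numbers before they land in your notes.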
Did better prompting help?
A bit. We tried three prompt variants:
Variant A (control): Just paste the question, ask for the answer.
Variant B (chain-of-thought): "Think step by step. Identify the topic area, the relevant rule or concept, then evaluate each answer choice."
Variant C (role + verify): "You are a FINRA registered representative with 15 years of experience. Answer this SIE practice question. Before stating your final answer, verify the rule citation."
Variant B improved the average to about 75%. Variant C improved to 76% but added a lot of latency. Neither got close to a comfortable pass margin (>80%).
The wins from prompting plateau quickly. The model is good at reasoning with the knowledge it has, but it can't conjure facts it doesn't reliably know.
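For reference, the three variants can be expressed as plain prompt templates. The wording of B and C comes from the test above; the dictionary structure and helper are our own scaffolding, and the actual model call is omitted:

```python
# Prompt templates for the three variants tested above.
VARIANTS = {
    "A_control": "{question}",
    "B_chain_of_thought": (
        "Think step by step. Identify the topic area, the relevant "
        "rule or concept, then evaluate each answer choice.\n\n"
        "{question}"
    ),
    "C_role_verify": (
        "You are a FINRA registered representative with 15 years of "
        "experience. Answer this SIE practice question. Before stating "
        "your final answer, verify the rule citation.\n\n{question}"
    ),
}

def build_prompt(variant: str, question: str) -> str:
    """Fill the chosen template with an exam question."""
    return VARIANTS[variant].format(question=question)

q = "Which regulator has jurisdiction over municipal securities dealers?"
print(build_prompt("B_chain_of_thought", q))
```

Keeping the control variant as a bare `{question}` template makes the comparison clean: every run differs only in the instructions prepended to the identical question text.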
Could ChatGPT pass with the FINRA outline in context?
We tried this too. We pasted the relevant section of the official FINRA SIE content outline directly into the conversation and asked the same questions.
The score jumped to 84%. That's the kind of margin a real candidate should aim for.
But: this only works if the question is on a topic covered in the outline excerpt you pasted. The full FINRA outline plus the relevant rules and definitions runs to hundreds of pages, far more than fits comfortably in a single ChatGPT context window. And feeding the full corpus in for every question is exactly what a purpose-built study tool does, except a study tool also has the questions, the explanations, the spaced repetition, and the human review that ChatGPT doesn't.
In other words: when ChatGPT has access to the right reference material, it does well. So does any tool. The real question is whether it's the right tool for daily SIE study, and the answer there is no.
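The outline-in-context approach amounts to assembling a grounded prompt and guarding against blowing the context window. A rough sketch; the helper, the delimiters, and the crude 4-characters-per-token budget (in place of a real tokenizer) are all our own assumptions:

```python
# Build a prompt that grounds the model in a pasted outline excerpt.
# The character budget is a crude stand-in for real token counting.
MAX_CONTEXT_CHARS = 400_000  # roughly 100k tokens at ~4 chars/token

def prompt_with_outline(outline_excerpt: str, question: str) -> str:
    """Prepend reference material so the model answers from it,
    not from memory; refuse excerpts too large for one prompt."""
    body = (
        "Use ONLY the reference material below to answer.\n\n"
        "--- FINRA SIE content outline (excerpt) ---\n"
        f"{outline_excerpt}\n"
        "--- Question ---\n"
        f"{question}"
    )
    if len(body) > MAX_CONTEXT_CHARS:
        raise ValueError("outline excerpt too large for a single prompt")
    return body
```

The hard part, as noted above, is that this only helps when you already know which excerpt covers the question, which is precisely the knowledge a candidate is trying to build.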
What does this mean for your SIE prep?
A few practical takeaways.
Use ChatGPT as a tutor, not as a question writer or grader. Ask it to explain concepts you're struggling with. "Explain the difference between Rule 144 and Rule 144A in plain English" is a good prompt. "Write me 20 SIE practice questions" is a bad one (the questions will look real but contain errors you can't catch).
Verify everything. Treat ChatGPT explanations the way you'd treat a Reddit comment: useful pointer, not authority. Cross-check rule numbers and specific thresholds against FINRA, SEC, or a curated study tool.
Don't grade your own practice with it. If you want to know whether your answer to a tricky question is right, the wrong place to ask is an LLM. The right place is an explanation written by someone who actually passed the exam and reviewed by someone who teaches it.
Use it for vocabulary and intuition. ChatGPT is great at "Wait, what is a Direct Participation Program in 30 seconds?" or "Give me an analogy for how municipal bond taxation works." That's the lane where it adds value without adding risk.
Will ChatGPT pass the SIE in two years?
Probably yes, and with widening margins. Newer model generations are scoring better on standardized tests across the board. By the time GPT-6 or Claude Sonnet 5 is out, an LLM with a long context window and access to FINRA rule text will likely score in the 90s.
But three things will still be true:
- You are taking the exam, not ChatGPT.
- The skill the SIE tests is fast recall of specific facts, not reasoning ability. Even if a future LLM can score 95%, that doesnât help you sit in a Pearson VUE testing center.
- Hallucinated citations are an architectural problem with autoregressive language models, not a model-quality problem. Even much smarter models will still confabulate.
The right way to think about LLMs in SIE prep is not "Can it pass?" but "Where does it help me prepare?" The answer to the second question is more useful than the answer to the first.
The bottom line
ChatGPT can squeeze past the SIE passing threshold on a good day. That's a fascinating data point and a terrible study strategy. The 30% it gets wrong includes exactly the kinds of regulatory specifics the exam loves to test, presented with the same confidence as the 70% it gets right. Use it as a concept tutor, not as your primary study tool. The hours you'd spend cross-checking its work are hours better spent on a curated question bank with verified explanations.