In my last article, you learned the basics of how to do your First Security Architecture Review: understanding the environment, defining scope, and identifying key security outcomes. Now it’s time to go one step deeper.
In this article, we’ll focus on how to perform a full Security Review: a detailed examination of design, data flows, integrations, and security controls to ensure that the architecture’s security assumptions actually hold up in implementation. Rather than keeping it theoretical, we’ll walk through the process using a realistic (hypothetical) example, showing how a Security Engineer would analyze a new feature, identify risks, and guide the team toward secure design decisions.
I’ve modeled this example to closely resemble a real-world scenario. While it’s not an exact copy of any specific system, the architecture, risks, and workflows are highly representative of what you would encounter in actual product environments.

Let’s say your development team already has a fully functioning, customer-facing website. Production is live. Traffic is flowing. Customers are logging in every day and using the product without issues.
Then the product team proposes:
“We want to add a new Chatbot feature to enhance customer support. It should collect basic details and even open support cases automatically.”
Sounds harmless, right?
A chatbot seems like a simple widget — something you embed with a snippet of JavaScript.
It doesn’t appear to affect core functionality.
It looks small enough to skip formal review.
But here’s the golden rule:
Just because something works doesn’t mean it’s safe.
A chatbot introduces:
- New data flows
- New external integrations
- New backend services
- New logic paths
- New attack surfaces
Unless it goes through a security review, you have no idea what risks you just introduced into your environment.
This article walks you through how a Security Engineer should perform a real-world security review — using the Chatbot integration as a live example.
1. Establish the Scope — Your Map Before the Mission
Before you search for threats, you must define what exactly you are reviewing.
Scope is the foundation. Without it, every review becomes guesswork.
In-Scope for the Chatbot Feature
- The new chatbot widget embedded in the website
- Chatbot backend API (serverless function, microservice, or vendor platform)
- Integration with customer ticketing systems (e.g., Jira, Zendesk, Freshdesk)
- Data flow: Website → Chatbot → Ticketing System
- Any data collected by the bot (PII, account IDs, issue details)
- Authentication flow between user ↔ chatbot ↔ backend
- Secrets shared among these systems
Out-of-Scope
- Existing website features unrelated to the Chatbot
- Admin dashboards (unless Chatbot interacts with them)
- Legacy services untouched by this change
Outcome Definition
A successful review delivers:
- A risk assessment
- A threat model
- Architecture validation
- Recommendations mapped to your organization’s security guardrails
Setting the scope doesn’t mean you’ve analyzed anything yet — you’ve simply defined the battlefield.
Without scope, you’re reviewing blindly.
With scope, you’re reviewing intelligently.
2. Understand the Architecture — What Are We Dealing With?
Once scope is defined, you need to understand how the system actually works.
You don’t need perfect data flow diagrams (DFDs) on day one — simple diagrams in Lucidchart, Miro, or Draw.io are enough. Over time, you’ll naturally get better at drawing formal DFDs.
Typical Chatbot Architecture
- User opens website → chatbot widget loads (JavaScript snippet)
- User sends a message → Request goes to Chatbot Backend API
- Chatbot backend may:
  - Retrieve user context (customer ID, session token)
  - Call an LLM/NLP engine (internal or external vendor)
  - Create support tickets
- Response flows back to the chatbot widget
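To make that flow concrete, here is a rough sketch of the call the widget makes on every message. The endpoint, header, and field names are placeholders for illustration, not any specific vendor’s API:

```typescript
// Hypothetical core of the chatbot widget: every user message becomes
// an HTTPS request to the chatbot backend, carrying the user's token.
// This is the new data flow the review needs to trace.
const CHATBOT_API = "https://chatbot.example.com/api/v1"; // placeholder endpoint

async function sendMessage(message: string, sessionToken: string): Promise<string> {
  const res = await fetch(`${CHATBOT_API}/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${sessionToken}`, // identity travels with each call
    },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Chatbot backend returned ${res.status}`);
  const data = await res.json();
  return data.reply; // assumed response shape
}
```

Even this tiny snippet surfaces review questions: where does `sessionToken` come from, and what can the backend do with it once it arrives?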
Questions to Ask the Product/Dev Team
- What data does the chatbot collect?
- Does it use an in-house model or a third-party LLM API?
- Is the bot read-only, or can it trigger actions (like ticket creation)?
- How does it authenticate users?
- How does it authenticate to external services?
- Where does data get stored?
- What is logged and where do logs go?
This stage equips you with the context you need for real threat modeling.
3. Threat Modeling — Where Can Things Go Wrong?
After understanding the architecture, the next step is brainstorming potential threats.
This can take time — you’ll research technologies, integrations, and attack patterns.
Most orgs use STRIDE, which covers:
- Spoofing
- Tampering
- Repudiation
- Information Disclosure
- Denial of Service
- Elevation of Privilege
A quick example:
If the chatbot relies on session cookies or tokens:
- An attacker may spoof tokens
- Users could be impersonated
- Fake support cases could flood your system
- Tokens might leak from the front-end
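To keep these from staying scattered notes, it helps to record each threat in a lightweight register tied to STRIDE categories. Here’s one possible shape; the schema and entries are purely illustrative:

```typescript
// A minimal threat register entry, keyed to STRIDE categories.
type StrideCategory =
  | "Spoofing"
  | "Tampering"
  | "Repudiation"
  | "Information Disclosure"
  | "Denial of Service"
  | "Elevation of Privilege";

interface Threat {
  id: string;
  category: StrideCategory;
  component: string;    // where in the architecture it applies
  description: string;
  mitigation: string;   // proposed control, refined later in the review
}

const chatbotThreats: Threat[] = [
  {
    id: "CHAT-001",
    category: "Spoofing",
    component: "Chatbot widget ↔ backend API",
    description: "Attacker replays or forges a session token to impersonate a user.",
    mitigation: "Short-lived JWTs validated server-side on every request.",
  },
  {
    id: "CHAT-002",
    category: "Denial of Service",
    component: "Ticketing integration",
    description: "Scripted messages flood the system with fake support cases.",
    mitigation: "Rate limiting plus bot/abuse protection at the edge.",
  },
];
```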
Threat modeling guides you toward the right questions and ensures you’re not missing blind spots.
4. Map Your Findings to Expected Outcomes
Now evaluate the architecture against the outcomes from your security guardrails and core principles, such as:
- Least Privilege
- Defense in Depth
- Secure by Default
- Encrypt Everywhere
- Fail Securely
Below is how to evaluate the chatbot feature against these principles.
A. Authentication & Authorization
Questions:
- Does the chatbot work only for authenticated users?
- How does the chatbot know the identity of the user?
- Is a secure token passed to the backend?
- Are permissions enforced server-side?
Expected:
- JWT with short TTL
- Server-side validation
- No sensitive data returned to unauthenticated sessions
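As a sketch of what “server-side validation” might look like in practice, here’s a minimal check using the common `jsonwebtoken` package. The claim names and key source are assumptions for illustration:

```typescript
import jwt from "jsonwebtoken"; // npm install jsonwebtoken

// Assumption: the signing key is loaded from a secret manager at startup,
// never hardcoded or shipped to the frontend.
const SIGNING_KEY = process.env.CHATBOT_JWT_KEY as string;

interface ChatSession {
  sub: string; // customer ID; the claim name is an assumption
  exp: number; // expiry enforced by verify()
}

function authenticateRequest(authHeader: string | undefined): ChatSession {
  if (!authHeader?.startsWith("Bearer ")) {
    // Fail securely: no token means no chatbot session, not a guest session.
    throw new Error("Missing bearer token");
  }
  const token = authHeader.slice("Bearer ".length);
  // verify() checks the signature and expiry; a short TTL limits how long
  // a leaked token stays useful to an attacker.
  return jwt.verify(token, SIGNING_KEY, { algorithms: ["HS256"] }) as ChatSession;
}
```

The library matters less than the placement: the widget can render whatever it wants, but authorization decisions happen on the server.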
B. Data Classification & Protection
Questions:
- Does the chatbot collect PII?
- Are messages encrypted in transit?
- Is PII masked before being sent to LLM vendors?
Expected:
- HTTPS everywhere
- No raw PII sent to external vendors
- Proper logging with sanitization
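One way to enforce “no raw PII sent to external vendors” is a redaction pass before each message crosses your boundary. The patterns below are deliberately simple sketches; production systems usually pair regexes with a dedicated PII-detection service:

```typescript
// Redact obvious PII before forwarding a message to an external LLM vendor.
const REDACTIONS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],  // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[CARD]"],    // card-like digit runs
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],      // US SSN format
];

function redactPII(message: string): string {
  return REDACTIONS.reduce(
    (text, [pattern, label]) => text.replace(pattern, label),
    message
  );
}

// redactPII("My email is jane@example.com") -> "My email is [EMAIL]"
```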
C. Secret Management
Expected:
- Secrets stored in AWS/GCP Secret Managers or Vault
- No secrets in JS, HTML, or frontend bundles
- Automatic rotation policies
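For instance, with AWS Secrets Manager the backend pulls credentials at runtime rather than bundling them. The secret name below is a placeholder:

```typescript
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "us-east-1" });

// The ticketing-system API key lives in Secrets Manager with rotation
// enabled; it never appears in source code or frontend bundles.
async function getTicketingApiKey(): Promise<string> {
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: "chatbot/ticketing-api-key" })
  );
  if (!result.SecretString) throw new Error("Secret has no string value");
  return result.SecretString;
}
```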
D. Dependency & Third-Party Risk
Example Checks:
- Vendor LLM data retention
- SOC2/GDPR compliance
- API security guarantees
- Vulnerabilities in third-party scripts
E. Infrastructure Security
Questions:
- Is traffic going through CDN/WAF?
- Does the API need to be public?
- Is rate limiting applied?
Expected:
- WAF protection
- API throttling
- Bot/abuse protection
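Rate limiting often lives at the WAF or CDN, but a backend-side throttle is cheap insurance. Here’s a naive fixed-window sketch; a real deployment would typically back this with a shared store like Redis:

```typescript
// Naive fixed-window rate limiter keyed by user ID.
// In-memory state only works for a single instance; this is a sketch
// of the idea, not a production design.
const WINDOW_MS = 60_000; // 1 minute
const MAX_REQUESTS = 20;  // per user per window

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(userId: string): boolean {
  const now = Date.now();
  const win = windows.get(userId);
  if (!win || now - win.start >= WINDOW_MS) {
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  win.count += 1;
  return win.count <= MAX_REQUESTS; // reject once the budget is spent
}
```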
F. Logging & Monitoring
Critical logs include:
- Ticket creation events
- Authentication failures
- Chatbot backend errors
- Suspicious input patterns
Logs should flow to a SIEM, with masking applied to PII.
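In practice that means emitting structured events with sensitive fields masked before they leave the service. The field names and masking rule below are assumptions:

```typescript
interface TicketCreatedEvent {
  event: "ticket_created";
  timestamp: string;
  customerId: string; // masked before shipping to the SIEM
  ticketId: string;
  channel: "chatbot";
}

function maskId(id: string): string {
  // Keep enough of the ID to correlate events without exposing it fully.
  return id.length <= 4 ? "****" : `${id.slice(0, 2)}***${id.slice(-2)}`;
}

function emitTicketCreated(customerId: string, ticketId: string): void {
  const event: TicketCreatedEvent = {
    event: "ticket_created",
    timestamp: new Date().toISOString(),
    customerId: maskId(customerId),
    ticketId,
    channel: "chatbot",
  };
  // stdout stands in for the log shipper that forwards to the SIEM.
  console.log(JSON.stringify(event));
}
```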
G. Security Guardrails Alignment
Examples include:
- Standard IAM policies
- LLM security guidelines
- Input validation requirements
- Vendor risk assessment completion
- Secure CI/CD requirements
5. Findings & Recommendations
After mapping outcomes, document the actual findings and recommend fixes.
Prioritize by severity:
- Critical/High → must fix before release
- Medium → next sprint
- Low → backlog
If severity isn’t defined, perform a Risk Assessment using the metrics below.
Risk Assessment Metrics
Impact
- Data sensitivity
- Business impact
- Legal/regulatory exposure
- Integrity/availability impact
Likelihood
- Exposure level
- Exploitability
- Privilege required
- Attack patterns
- Existing controls
Asset Value
- Importance of the affected system
Final Severity
- Critical / High / Medium / Low
These allow you to justify why a finding must be fixed and how urgently.
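If you want this to be repeatable rather than gut feel, the metrics can collapse into a simple scoring function. The 1–5 scales and thresholds below are illustrative, not a standard:

```typescript
type Severity = "Critical" | "High" | "Medium" | "Low";

// Score impact and likelihood on 1-5 scales, then map the product to a
// severity band. Thresholds should match your org's risk appetite.
function assessSeverity(impact: number, likelihood: number): Severity {
  const score = impact * likelihood; // 1..25
  if (score >= 20) return "Critical";
  if (score >= 12) return "High";
  if (score >= 6) return "Medium";
  return "Low";
}

// Example: unauthenticated ticket creation with high impact (4) and easy
// exploitability (5) scores 20, so it must be fixed before release.
```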
Final Thoughts — Your Job Isn’t to Block, But to Guide
A Security Architecture Review isn’t about saying no.
It’s about ensuring what goes live can survive the real world.
Document everything.
Refine your process.
No review is perfect — but every review makes it harder for attackers to exploit your system.
By going through:
- Scope
- Architecture review
- Threat modeling
- Mapping to outcomes
- Findings & recommendations
—you transform scattered information into clear, actionable security guidance.
The chatbot example demonstrates how any “simple” feature can introduce serious risks — and how a structured security review brings those risks under control.