4) Technical and professional questions (what separates prepared candidates)
Technical interviews for a Full-Stack Developer in the US often feel like controlled whiplash: a React question, then a database index question, then “how would you deploy this?” That’s intentional. They’re testing whether you can keep a coherent mental model across layers.
Below are the questions that come up again and again—especially in companies hiring a Full Stack Engineer to ship product, not just maintain code.
Q: How would you design authentication and authorization for a multi-tenant SaaS app?
Why they ask it: They’re testing security fundamentals and whether you understand tenant isolation.
Answer framework: Threats–Design–Controls (name risks, propose architecture, add guardrails).
Example answer: For auth I’d use OIDC with a trusted IdP and short-lived access tokens plus refresh tokens stored securely. For authorization I’d model tenant membership and roles explicitly, then enforce tenant scoping at the data access layer—not just in controllers. I’d add row-level checks (or separate schemas) depending on scale and risk, and I’d log authorization failures for detection. I’d also plan for least privilege service-to-service access and rotate secrets.
Common mistake: Talking only about JWTs and skipping tenant isolation and enforcement points.
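The enforcement point matters more than the token format. As a minimal sketch (all names here are hypothetical, not from any real framework), tenant scoping can be pushed into the data-access layer so a controller cannot forget it:

```typescript
// Hypothetical sketch: every read goes through one repository function
// that always applies the session's tenant filter.
type Role = "admin" | "member";

interface Session {
  userId: string;
  tenantId: string;
  role: Role;
}

interface Doc {
  id: string;
  tenantId: string;
  ownerId: string;
}

// Stand-in for a database table.
const docs: Doc[] = [
  { id: "d1", tenantId: "t1", ownerId: "u1" },
  { id: "d2", tenantId: "t2", ownerId: "u2" },
];

// The only way to read docs: tenant scoping is applied unconditionally,
// before any caller-supplied filter.
function findDocs(session: Session, filter: (d: Doc) => boolean = () => true): Doc[] {
  return docs.filter((d) => d.tenantId === session.tenantId && filter(d));
}

const alice: Session = { userId: "u1", tenantId: "t1", role: "member" };
console.log(findDocs(alice).map((d) => d.id)); // only t1 docs
```

In a real system the same idea shows up as a mandatory `WHERE tenant_id = ?` in a query builder, or as Postgres row-level security policies, so the guardrail survives even when an individual endpoint is written carelessly.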
Q: In React, how do you prevent unnecessary re-renders in a complex page without making the code unreadable?
Why they ask it: They’re testing performance tuning with maintainability.
Answer framework: Measure–Target–Refactor.
Example answer: I start by measuring with the React Profiler to find which components re-render and why. Then I target the biggest offenders—often context value churn, unstable props, or expensive derived data. I’ll use memoization selectively (memo/useMemo/useCallback) and split state so updates don’t ripple across the whole tree. If it’s data-heavy, I’ll consider virtualization and server-driven pagination.
Common mistake: Blanket “wrap everything in memo” answers that create complexity without proof.
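The mechanism behind memoization is referential stability. This is not real React, just a simplified model of `memo`'s bail-out (skip the render when a shallow prop comparison passes), which shows why an unstable callback defeats it:

```typescript
// Simplified model of React.memo: re-render only when shallow prop
// comparison fails. Illustrative only, not React's actual internals.
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k) => Object.is(a[k], b[k]));
}

let renders = 0;
let lastProps: Record<string, unknown> | null = null;

function renderMemoized(props: Record<string, unknown>): void {
  if (lastProps && shallowEqual(lastProps, props)) return; // bail out
  lastProps = props;
  renders++;
}

const onClick = () => {}; // stable identity, the useCallback analog

renderMemoized({ label: "Save", onClick });           // renders
renderMemoized({ label: "Save", onClick });           // skipped: same refs
renderMemoized({ label: "Save", onClick: () => {} }); // new fn -> renders
console.log(renders); // 2
```

This is why profiling comes first: wrapping a component in `memo` does nothing if a parent passes a fresh object or inline function on every render.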
Q: Explain how you’d model and query a feed (posts + comments + likes) in PostgreSQL. Where do indexes matter?
Why they ask it: They’re testing data modeling plus real-world query performance.
Answer framework: Entities–Access patterns–Indexes.
Example answer: I’d model posts, comments, and likes as separate tables with foreign keys and timestamps, and I’d be explicit about the feed query patterns: newest posts, user-specific filters, and counts. I’d index on (created_at) for ordering, and composite indexes like (post_id, created_at) for comments. For counts, I’d consider cached counters with background reconciliation if write volume is high. I’d validate with EXPLAIN ANALYZE and production-like data.
Common mistake: Designing the schema without stating the queries you’re optimizing for.
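The cached-counter idea from the answer can be sketched in a few lines (in-memory stand-ins here; in production the "table" is Postgres and the reconciler is a background job):

```typescript
// Hypothetical sketch: fast cached like-counts per post, reconciled
// against the source-of-truth rows in the background.
const likeRows: { postId: string }[] = []; // stands in for the likes table
const cachedCounts = new Map<string, number>();

function like(postId: string): void {
  likeRows.push({ postId });                                     // durable write
  cachedCounts.set(postId, (cachedCounts.get(postId) ?? 0) + 1); // fast read path
}

// Background job: recompute true counts and fix any drift in the cache.
function reconcile(): void {
  const truth = new Map<string, number>();
  for (const row of likeRows) {
    truth.set(row.postId, (truth.get(row.postId) ?? 0) + 1);
  }
  truth.forEach((n, postId) => cachedCounts.set(postId, n));
}

like("p1");
like("p1");
cachedCounts.set("p1", 99); // simulate drift, e.g. a lost increment
reconcile();
console.log(cachedCounts.get("p1")); // 2
```

The trade-off to name in the interview: reads get cheap and writes stay simple, at the cost of counts being briefly stale between reconciliation runs.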
Q: What’s your approach to API versioning and backward compatibility?
Why they ask it: They’re testing whether you can evolve systems without breaking clients.
Answer framework: Contract–Change types–Migration plan.
Example answer: I treat the API as a contract and prefer additive changes: new fields, new endpoints, or feature flags. For breaking changes, I’ll version at the endpoint or header level and publish a deprecation window with telemetry to see who’s still using old versions. I also like consumer-driven contract tests so changes are caught before deploy. The goal is boring upgrades.
Common mistake: “We’ll just bump v2” without a migration and observability plan.
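An additive change in practice looks like this sketch (types and field names are illustrative): the v2 contract extends v1 with an optional addition, and a version signal, here modeled as a plain string from a header, selects the shape:

```typescript
// Hypothetical sketch: additive versioning. V2 extends V1, so old
// clients never see a breaking change.
interface UserV1 {
  id: string;
  name: string;
}

interface UserV2 extends UserV1 {
  displayName: string; // additive field, only in v2 responses
}

// `version` would come from an endpoint path or request header.
function serializeUser(version: string | undefined): UserV1 | UserV2 {
  const base: UserV1 = { id: "u1", name: "ada" };
  if (version === "2") {
    return { ...base, displayName: "Ada L." };
  }
  return base; // v1 clients keep working unchanged
}

console.log("displayName" in serializeUser(undefined)); // false
console.log("displayName" in serializeUser("2"));       // true
```

Pair this with telemetry on the version value so the deprecation window for v1 is driven by real usage, not guesswork.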
Q: How do you handle secrets and configuration in AWS for a production web app?
Why they ask it: They’re testing operational maturity and cloud basics common in US job posts.
Answer framework: Principles–Services–Process.
Example answer: I keep secrets out of code and CI logs, and I separate config by environment. In AWS I’d use Secrets Manager or SSM Parameter Store with IAM roles for access, and I’d rotate credentials where possible. For deployments, I’d inject config at runtime and restrict who can read secrets via least privilege policies. I’d also audit access and avoid long-lived keys on developer machines.
Common mistake: Storing secrets in environment files committed “by accident” or relying on shared static keys.
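"Inject config at runtime" can be made concrete with a small loader sketch (the key names are hypothetical; in AWS the values would come from Secrets Manager or SSM rather than plain environment variables):

```typescript
// Hypothetical sketch: validate injected config on startup so a
// misconfigured environment fails fast, not at the first request.
interface AppConfig {
  dbUrl: string;
  apiKey: string;
}

function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const missing = ["DB_URL", "API_KEY"].filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`missing config: ${missing.join(", ")}`);
  }
  return { dbUrl: env.DB_URL!, apiKey: env.API_KEY! };
}

// An incomplete environment is rejected at boot.
let failed = false;
try {
  loadConfig({ DB_URL: "postgres://example" });
} catch {
  failed = true;
}
console.log(failed); // true
```

The startup-time check is the point: it turns a silent misconfiguration into a loud, attributable deploy failure, and keeps secret values out of source and logs.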
Q: Describe your CI/CD pipeline for a full-stack app. What do you run where? (GitHub Actions, Docker, etc.)
Why they ask it: They’re testing whether you can ship safely and repeatedly.
Answer framework: Pipeline stages (lint/test/build → security → deploy → verify).
Example answer: In GitHub Actions I run linting, unit tests, and build steps for both frontend and backend, plus type checks. I build Docker images with pinned base versions, scan them, and push to a registry. Deployments go to staging first with smoke tests and a short canary when possible, then production with rollback baked in. After deploy, I verify via health checks and dashboards, not vibes.
Common mistake: Treating CI/CD as “we run tests” and skipping deployment safety and rollback.
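The "verify, then roll back" step can be sketched as a simple canary decision (thresholds and names are illustrative; real pipelines would feed this from metrics over a soak window):

```typescript
// Hypothetical sketch of a canary gate: promote only if the canary's
// error rate stays within a multiple of the baseline's.
interface Sample {
  requests: number;
  errors: number;
}

function decide(baseline: Sample, canary: Sample, maxRatio = 2): "promote" | "rollback" {
  const baseRate = baseline.errors / baseline.requests;
  const canaryRate = canary.errors / canary.requests;
  return canaryRate > baseRate * maxRatio ? "rollback" : "promote";
}

console.log(decide({ requests: 1000, errors: 5 }, { requests: 100, errors: 1 })); // promote
console.log(decide({ requests: 1000, errors: 5 }, { requests: 100, errors: 8 })); // rollback
```

Even this toy version makes the interview point: the pipeline ends with an automated judgment about production health, not with "the deploy job went green."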
Q: What would you do if your primary database becomes unavailable during peak traffic?
Why they ask it: They’re testing incident response and resilience thinking.
Answer framework: Stabilize–Diagnose–Recover–Prevent.
Example answer: First I’d stabilize: enable read-only mode or degrade non-critical features to reduce writes, and communicate status. Then I’d diagnose using monitoring—connection saturation, failover status, storage, or a runaway query. If we have replicas, I’d fail over according to the runbook; if not, I’d prioritize restoring service and protecting data integrity. After recovery, I’d add safeguards like connection pooling limits, query timeouts, and tested failover drills.
Common mistake: Jumping to “restart the DB” without considering data integrity, failover, or user impact.
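One of the named safeguards, bounded connection pooling, is worth being able to sketch (a deliberately tiny model; real pools also queue with timeouts and health-check connections):

```typescript
// Hypothetical sketch: a bounded pool that sheds load instead of
// letting waiting requests pile up against a struggling database.
class BoundedPool {
  private inUse = 0;

  constructor(private readonly max: number) {}

  acquire(): boolean {
    if (this.inUse >= this.max) {
      return false; // fail fast; caller degrades or retries with backoff
    }
    this.inUse++;
    return true;
  }

  release(): void {
    this.inUse = Math.max(0, this.inUse - 1);
  }
}

const pool = new BoundedPool(2);
const results = [pool.acquire(), pool.acquire(), pool.acquire()];
console.log(results); // [true, true, false]
```

Failing the third acquire fast is the resilience point: a saturated database gets a bounded amount of pressure, and the app can serve a degraded response instead of stacking up timeouts.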
Q: How do you secure a REST API against common web threats (OWASP Top 10)?
Why they ask it: They’re testing practical security, not just coding.
Answer framework: Threat–Mitigation mapping.
Example answer: I start with input validation and output encoding, plus strong auth and authorization checks. I add rate limiting and abuse detection, protect against CSRF where relevant, and use secure headers and CORS policies intentionally. I also ensure secrets aren’t exposed, dependencies are scanned, and logs don’t leak PII. Finally, I like to run periodic security reviews aligned with OWASP Top 10.
Common mistake: Saying “we use HTTPS” as if that covers injection, auth flaws, and access control.
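Rate limiting is one of those mitigations candidates can sketch on a whiteboard. A token bucket, simplified and with time injected so it stays deterministic (a sketch, not a production limiter, which would also be per-client and shared across instances):

```typescript
// Hypothetical sketch: token-bucket rate limiting. Allows short bursts
// up to `capacity`, then throttles to `refillPerSec`.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSec: number,
    now = 0
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  // `now` is injected (in seconds) to keep the logic testable.
  allow(now: number): boolean {
    this.tokens = Math.min(this.capacity, this.tokens + (now - this.last) * this.refillPerSec);
    this.last = now;
    if (this.tokens < 1) {
      return false; // over budget: reject or queue
    }
    this.tokens -= 1;
    return true;
  }
}

const bucket = new TokenBucket(2, 1); // burst of 2, refills 1 token/sec
console.log(bucket.allow(0), bucket.allow(0), bucket.allow(0)); // true true false
console.log(bucket.allow(1)); // true, one token refilled
```

In an interview, tie it back to the list above: this covers abuse and brute-force pressure, while validation, authorization checks, and dependency scanning cover the rest of the Top 10.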
Q: In the US, what compliance or privacy considerations affect how you store user data?
Why they ask it: They’re testing whether you understand US market realities around privacy and regulated data.
Answer framework: Identify–Classify–Control.
Example answer: I first classify the data: is it general personal data, payment data, or health data? For payments, I’d align with PCI DSS expectations and avoid storing raw card data by using a payment provider. For health data, HIPAA can apply and changes everything—access controls, audit logs, BAAs, and encryption. Even outside those, state privacy laws like CCPA/CPRA influence retention, deletion workflows, and data access requests.
Common mistake: Treating “compliance” as a legal team problem and ignoring engineering controls.
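The Identify–Classify–Control framework translates directly into engineering artifacts. A sketch, with purely illustrative classifications and retention values (not legal guidance): classify each field once, then derive controls such as log redaction from the classification:

```typescript
// Hypothetical sketch: field-level data classification driving
// engineering controls. All values are illustrative, not legal advice.
type Classification = "general" | "payment" | "health";

const policy: Record<Classification, { retentionDays: number; logAllowed: boolean }> = {
  general: { retentionDays: 365, logAllowed: true },
  payment: { retentionDays: 0, logAllowed: false },   // never store raw card data
  health: { retentionDays: 2190, logAllowed: false }, // HIPAA-style: audit, never log
};

const fieldClass: Record<string, Classification> = {
  email: "general",
  cardNumber: "payment",
  diagnosis: "health",
};

// Control derived from classification: redact restricted fields in logs.
function redactForLogs(record: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [k, v] of Object.entries(record)) {
    out[k] = policy[fieldClass[k] ?? "general"].logAllowed ? v : "[REDACTED]";
  }
  return out;
}

console.log(redactForLogs({ email: "a@b.co", cardNumber: "4242424242424242" }));
```

The same classification map can drive retention jobs and CCPA/CPRA deletion workflows, which is the engineering half of "compliance" the answer is pointing at.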
Q: How would you implement observability for a Node.js, Java/Spring, or Python backend plus a React frontend?
Why they ask it: They’re testing whether you can operate what you build.
Answer framework: Signals (logs, metrics, traces) + golden paths.
Example answer: On the backend I’d standardize structured logs with request IDs, add RED metrics (rate, errors, duration), and instrument traces for key endpoints. On the frontend I’d capture performance and error events with user/session context, while respecting privacy. I’d create dashboards for the top user journeys—login, checkout, search—and set alerts tied to SLOs. The point is to find issues before customers do.
Common mistake: “We’ll add logging” without specifying what you measure and how you connect frontend to backend.
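The two backend ideas in the answer, structured logs with request IDs and RED metrics, fit in one small sketch (in-memory stand-ins; a real system would ship the JSON lines to a log pipeline and the counters to a metrics backend):

```typescript
// Hypothetical sketch: structured logs carrying a request ID, plus RED
// metrics (rate, errors, duration) aggregated per endpoint.
interface Red {
  rate: number;    // request count
  errors: number;  // failed requests
  totalMs: number; // summed duration, for average latency
}

const metrics = new Map<string, Red>();
const logs: string[] = [];

function record(endpoint: string, requestId: string, ms: number, ok: boolean): void {
  const m = metrics.get(endpoint) ?? { rate: 0, errors: 0, totalMs: 0 };
  m.rate++;
  m.totalMs += ms;
  if (!ok) m.errors++;
  metrics.set(endpoint, m);
  // One JSON object per line: greppable, and joinable across services
  // (and with frontend events) via the shared requestId.
  logs.push(JSON.stringify({ requestId, endpoint, ms, ok }));
}

record("/login", "req-1", 120, true);
record("/login", "req-2", 300, false);
const login = metrics.get("/login")!;
console.log(login.rate, login.errors, login.totalMs / login.rate); // 2 1 210
```

The request ID is what connects frontend to backend: if the React app sends it with each call and attaches it to its own error events, one ID traces a user-visible failure through every layer.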