4) Technical and professional questions (what separates prepared candidates)
Technical interviews for JavaScript roles in the US often look less like a university exam and more like a day at work: reading code, debugging, designing a component boundary, and explaining tradeoffs. You’ll still get fundamentals, but the best interviewers want to hear how you think under constraints.
Q: Explain the JavaScript event loop in practical terms. When do microtasks vs macrotasks matter?
Why they ask it: They’re checking whether you can reason about async bugs and UI jank.
Answer framework: “Definition → example → consequence.” Keep it grounded in real bugs.
Example answer: “The event loop is how JS schedules work on a single thread. Promises queue microtasks, which run before the next macrotask like a setTimeout callback. That matters when you chain promises and accidentally starve rendering, or when you expect a setTimeout to run before a promise resolution. In UI code, I’m careful with long microtask chains and I’ll yield back to the browser when needed.”
Common mistake: Reciting theory without connecting it to debugging or performance.
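The ordering described above can be shown in a few lines of plain JavaScript, runnable in Node or a browser console:

```javascript
// Record the order in which scheduled callbacks actually run.
const order = [];

order.push("sync start");

// Macrotask: queued for a future turn of the event loop.
setTimeout(() => order.push("setTimeout (macrotask)"), 0);

// Microtask: runs as soon as the current synchronous code finishes,
// before any macrotask gets a chance.
Promise.resolve().then(() => order.push("promise (microtask)"));

order.push("sync end");

setTimeout(() => {
  console.log(order.join(" -> "));
  // sync start -> sync end -> promise (microtask) -> setTimeout (macrotask)
}, 10);
```

Even with a 0ms delay, the `setTimeout` callback loses to the promise: the microtask queue drains completely before the next macrotask, which is also why a long microtask chain can starve rendering.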
Q: In React, how do you prevent unnecessary re-renders without turning the code into memoization soup?
Why they ask it: They want performance instincts plus maintainability.
Answer framework: “Measure → isolate → optimize.” Only optimize what you can observe.
Example answer: “I start with React DevTools Profiler to find what’s actually re-rendering. Then I isolate: split components, move derived values into useMemo only when expensive, and stabilize callbacks with useCallback when they’re passed deep. If state is causing broad re-renders, I’ll restructure state ownership or use context selectors. The goal is readable code with targeted optimizations, not blanket memoization everywhere.”
Common mistake: Saying “useMemo/useCallback everywhere” as a default.
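It helps to explain *why* blanket memoization fails. `React.memo` skips a re-render only when every prop is shallowly equal to the previous one; a minimal sketch of that comparison in plain JavaScript (not React's actual source) makes the failure mode obvious:

```javascript
// Sketch of the shallow comparison a memoized component relies on:
// it re-renders only when this returns false.
function shallowEqual(prev, next) {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prev[key], next[key]));
}

// A stable reference passes the check...
const stableHandler = () => {};
console.log(shallowEqual({ onClick: stableHandler }, { onClick: stableHandler })); // true

// ...but an inline callback is a new reference on every render,
// silently defeating the memoization you paid for.
console.log(shallowEqual({ onClick: () => {} }, { onClick: () => {} })); // false
```

This is the interview-ready justification for `useCallback`: it exists to stabilize references passed to memoized children, not to be sprinkled everywhere.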
Q: As a Node.js Developer, how do you handle errors in async code (Promises/async-await) in a production API?
Why they ask it: They’re testing whether you can build reliable services, not just happy paths.
Answer framework: “Layers”: input validation, business logic, infrastructure, observability.
Example answer: “I validate inputs early, then use structured error types for business rules versus unexpected failures. In Express or Fastify, I centralize error handling middleware so async errors don’t get swallowed. I log with correlation IDs, return consistent error responses, and make sure unhandled rejections crash the process in a controlled way so we don’t run in a bad state. Then I add tests for failure modes, not just success.”
Common mistake: Catching everything and returning 200 with an ‘error’ field.
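A minimal sketch of the separation described above, with a hypothetical `AppError` class and an Express-style handler shape (no framework needed to run it):

```javascript
// Structured error type: expected business-rule failures carry a
// status code and are safe to report to clients.
class AppError extends Error {
  constructor(message, statusCode) {
    super(message);
    this.statusCode = statusCode;
    this.isOperational = true;
  }
}

// Wrapper so rejected promises reach the error handler instead of
// being swallowed (Express 4, for example, does not forward async
// errors automatically).
const asyncHandler = (fn) => (req, res, next) =>
  fn(req, res, next).catch(next);

// Centralized mapping: one place decides what the client sees.
function toErrorResponse(err) {
  if (err instanceof AppError) {
    return { status: err.statusCode, body: { error: err.message } };
  }
  // Unknown failure: log the details server-side, return a generic 500.
  return { status: 500, body: { error: "Internal server error" } };
}

console.log(toErrorResponse(new AppError("Order not found", 404)));
// { status: 404, body: { error: 'Order not found' } }
```

The key point for the interviewer: expected failures get real status codes, unexpected ones never leak internals, and neither path returns a 200 with an error buried in the body.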
Q: What’s your approach to TypeScript in a JavaScript codebase—where do you draw the line between safety and speed?
Why they ask it: They want a pragmatic TypeScript Developer mindset, not dogma.
Answer framework: “Risk-based typing.” Tight types at boundaries, flexible types inside.
Example answer: “I’m strict at boundaries: API responses, form inputs, and shared libraries. That’s where bugs are expensive. Inside a feature, I’ll avoid over-modeling if it slows delivery; I’ll use narrower types as the code stabilizes. I also prefer letting TypeScript infer types instead of writing verbose annotations everywhere. The win is fewer runtime surprises without turning development into a type puzzle.”
Common mistake: Bragging about ‘100% strictness’ without acknowledging delivery tradeoffs.
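One concrete point worth making here: TypeScript types vanish at compile time, so being “strict at boundaries” also means a runtime check. A minimal plain-JavaScript sketch of narrowing an untrusted payload (`parseUser` is a hypothetical name):

```javascript
// Boundary guard: turn untrusted JSON into a shape the rest of the
// code can rely on, or fail loudly. In TypeScript this function would
// return a typed User, letting inference carry the type inward.
function parseUser(data) {
  if (
    typeof data === "object" && data !== null &&
    typeof data.id === "number" &&
    typeof data.name === "string"
  ) {
    // Pick only the fields we vouch for; ignore the rest.
    return { id: data.id, name: data.name };
  }
  throw new TypeError("Invalid user payload");
}
```

Libraries like Zod automate this pattern, but being able to sketch it by hand shows you understand why the boundary matters at all.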
Tooling questions show up constantly in US interviews because teams want predictable delivery.
Q: Describe your testing strategy for a React Developer role. What do you unit test vs integration test vs E2E?
Why they ask it: They’re checking whether you can prevent regressions efficiently.
Answer framework: Testing pyramid with examples tied to user behavior.
Example answer: “I unit test pure functions and tricky edge cases. For components, I prefer integration tests with React Testing Library that exercise user behavior—typing, clicking, and seeing UI changes—without mocking everything. E2E tests are for critical flows like login and checkout, kept small because they’re slower and flakier. I also add contract tests around API boundaries when backend changes frequently.”
Common mistake: Testing implementation details (like internal state) instead of user-visible behavior.
Q: How do you manage dependencies and security issues in a JavaScript monorepo?
Why they ask it: They want to know you can keep a large codebase healthy.
Answer framework: “Policy + automation + exceptions.”
Example answer: “I keep dependencies centralized with a lockfile and consistent package manager settings. I automate audits in CI and use Dependabot or Renovate for controlled updates. For high-risk packages, I check maintenance signals and prefer smaller, well-maintained libs. When we must accept a risk temporarily, I document it, pin versions, and set a deadline to remove or replace.”
Common mistake: Treating npm audit as a complete security strategy.
Here’s a question that experienced frontend engineers recognize instantly.
Q: How do you handle browser compatibility and polyfills in 2026?
Why they ask it: They’re testing whether you can ship to real users, not just modern devices.
Answer framework: “Target → detect → fallback.”
Example answer: “I start with explicit browser targets using browserslist and real analytics if available. I prefer progressive enhancement and feature detection over user-agent sniffing. For missing APIs, I use targeted polyfills rather than shipping huge bundles to everyone. And I test the riskiest flows on Safari and mobile early, not the day before release.”
Common mistake: Assuming ‘evergreen browsers’ means you can ignore Safari or older mobile WebViews.
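“Detect → fallback” in miniature: probe for `structuredClone` (present in modern browsers and Node 17+) and fall back to a JSON round-trip where it is missing. The fallback is deliberately lossy (it drops functions and turns Dates into strings), which is exactly why detection beats assumption:

```javascript
// Feature-detect the capability itself; never sniff the user agent.
function deepClone(value) {
  if (typeof structuredClone === "function") {
    return structuredClone(value); // modern path
  }
  // Polyfill-style fallback: good enough for plain JSON-shaped data.
  return JSON.parse(JSON.stringify(value));
}

console.log(deepClone({ a: [1, 2, 3] }));
```

The same shape scales up: `browserslist` decides which targets get which bundle, and targeted polyfills fill only the gaps those targets actually have.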
US companies also care about accessibility because it’s both user impact and legal risk.
Q: What accessibility standard do you build toward, and what do you actually do in code to meet it?
Why they ask it: They want practical accessibility habits and awareness of US compliance risk.
Answer framework: “Standard + practices + verification.”
Example answer: “I aim for WCAG 2.1 AA as a baseline, which is commonly referenced in US accessibility expectations. In code, I use semantic HTML first, ensure keyboard navigation, manage focus for dialogs, and label inputs properly. I verify with automated checks like axe plus manual keyboard and screen reader spot checks. If we’re building for regulated industries, I ask early what compliance requirements apply.”
Common mistake: Saying “we use an accessibility plugin” and stopping there.
Now the ‘systems fail’ question—very common in US loops.
Q: What do you do if your CI pipeline is down but you need to ship a hotfix?
Why they ask it: They’re testing judgment under pressure and respect for safeguards.
Answer framework: Risk triage: scope, verification, rollback plan.
Example answer: “First I reduce scope: smallest possible change behind a feature flag if available. If CI is down, I run the critical test suite locally and get a second engineer to review quickly. I document what checks were skipped and why, and I prepare a rollback plan before deploying. After the incident, I prioritize restoring CI and adding redundancy so hotfixes don’t depend on a single point of failure.”
Common mistake: Pushing directly to production because ‘it’s urgent’ without compensating controls.
Two more “insider” questions that show you’ve lived in real codebases.
Q: How do you debug a memory leak in a single-page app?
Why they ask it: They want real-world debugging skill, not just coding ability.
Answer framework: “Reproduce → measure → isolate → fix → prevent.”
Example answer: “I reproduce the leak with a consistent user path, then use Chrome DevTools Memory to take heap snapshots and compare retained objects. I look for common culprits like event listeners not removed, timers, subscriptions, or caches that grow unbounded. Once I isolate the component or module, I fix cleanup in effects and verify the heap stabilizes. Then I add a regression test or monitoring so we catch it earlier next time.”
Common mistake: Restarting the app and calling it ‘fixed.’
Q: How do you design an API contract between frontend and backend to avoid breaking changes?
Why they ask it: They’re testing whether you can collaborate across teams and reduce churn.
Answer framework: Contract-first thinking: versioning, compatibility, validation.
Example answer: “I like explicit contracts—OpenAPI when possible—and I push for backward-compatible changes by default. On the frontend, I validate and narrow unknown data at the boundary, especially if the backend is evolving. If we need a breaking change, I prefer additive fields plus deprecation windows and telemetry to see when old clients are gone. The goal is fewer ‘surprise’ deploys that break the UI.”
Common mistake: Assuming TypeScript types alone guarantee runtime compatibility.
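“Validate and narrow unknown data at the boundary” can be shown as a tolerant reader. A sketch with a hypothetical `readOrder` function: it consumes only the fields it needs, coerces them, and defaults a newer optional field so additive backend changes never break older frontend code:

```javascript
// Tolerant reader at the API boundary: take what we need, default
// the rest. Additive backend fields are simply ignored until the
// frontend opts in.
function readOrder(payload) {
  return {
    id: String(payload.id),
    total: Number(payload.total),
    // Newer optional field with a safe default for older responses.
    currency: typeof payload.currency === "string" ? payload.currency : "USD",
  };
}

console.log(readOrder({ id: 7, total: "19.99" }));
// { id: '7', total: 19.99, currency: 'USD' }
```

This is the runtime half of the contract; the OpenAPI spec and generated types are the compile-time half, and breaking changes still need deprecation windows because old clients keep running old code.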