Technical and professional questions (the ones that decide the offer)
This is where US interviews separate “has read about Web3” from “can ship safely.” Expect follow-ups. If you give a shallow answer, they’ll drill until you hit bedrock.
Q: Walk me through how you prevent reentrancy in Solidity and when a guard is not enough.
Why they ask it: They’re testing whether you understand the mechanics of external calls and state changes.
Answer framework: Rule–Example–Edge case (state the rule, show a pattern, then name exceptions).
Example answer: My default is checks-effects-interactions: validate inputs, update state, then make external calls. If I must call out early, I isolate the call and minimize the state it can touch. I'll use a reentrancy guard, but I don't treat it as magic—cross-function reentrancy and tokens with transfer hooks (ERC-777-style callbacks) can still create strange paths. I also design for pull payments where possible and avoid relying on the 2300-gas stipend of transfer/send.
Common mistake: Saying “I use ReentrancyGuard” and stopping there.
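The ordering argument above can be demonstrated outside the EVM. This is a hedged TypeScript simulation, not Solidity: the `onPayout` callback stands in for an external call that can re-enter before the function returns, and the `Vault` class and `attack` helper are hypothetical names for illustration.

```typescript
// Conceptual simulation of reentrancy. The callback models a Solidity
// external call that can re-enter the contract before control returns.
class Vault {
  balances = new Map<string, number>();
  paidOut = 0; // total value the vault has "transferred" out
  constructor(private effectsFirst: boolean) {}

  deposit(user: string, amount: number): void {
    this.balances.set(user, (this.balances.get(user) ?? 0) + amount);
  }

  withdraw(user: string, onPayout: () => void): void {
    const bal = this.balances.get(user) ?? 0;          // checks
    if (bal === 0) return;
    if (this.effectsFirst) this.balances.set(user, 0); // effects (CEI order)
    this.paidOut += bal;                               // the "transfer"
    onPayout();                                        // interaction: may re-enter
    if (!this.effectsFirst) this.balances.set(user, 0); // effects too late
  }
}

// Attacker deposits 100, then re-enters withdraw once from the callback.
function attack(vault: Vault): number {
  vault.deposit("attacker", 100);
  let depth = 0;
  const reenter = () => {
    if (depth++ < 1) vault.withdraw("attacker", reenter);
  };
  vault.withdraw("attacker", reenter);
  return vault.paidOut;
}
```

With effects after the interaction, the re-entered call still sees the stale balance and the vault pays out twice; with checks-effects-interactions ordering, the re-entry reads a zeroed balance and pays out once.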
Q: Explain the difference between call, delegatecall, and staticcall, and how it affects upgradeable proxies.
Why they ask it: Upgrade patterns are common in US startups; misuse is catastrophic.
Answer framework: Compare–Consequence–Control.
Example answer: call executes in the callee’s context and storage; delegatecall executes the callee’s code in the caller’s storage context; staticcall enforces no state changes. Proxies rely on delegatecall so the implementation logic writes to proxy storage, which means storage layout and initialization are critical. I control risk with explicit storage gaps, initializer guards, and upgrade authorization behind a multisig + timelock. I also avoid upgradeability unless the product truly needs it.
Common mistake: Forgetting storage layout collisions and initializer vulnerabilities.
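The storage-context distinction is the whole proxy trick, and it can be sketched as a toy model. This TypeScript is a conceptual simulation, not EVM semantics: a "contract" is reduced to code plus a storage map, and the names are hypothetical.

```typescript
// Toy model: call() runs the callee's code against the CALLEE's storage;
// delegatecall() runs the callee's code against the CALLER's storage.
// Proxies depend on the latter, which is why storage layout must line up.
type Storage = Map<string, number>;

interface Contract {
  storage: Storage;
  code: (storage: Storage) => void;
}

function call(_caller: Contract, callee: Contract): void {
  callee.code(callee.storage); // callee's own context
}

function delegatecall(caller: Contract, callee: Contract): void {
  callee.code(caller.storage); // caller's (proxy's) context
}

// An "implementation" that increments a counter slot.
const implementation: Contract = {
  storage: new Map(),
  code: (s) => s.set("counter", (s.get("counter") ?? 0) + 1),
};

// A "proxy" with no logic of its own.
const proxy: Contract = { storage: new Map(), code: () => {} };

delegatecall(proxy, implementation); // writes proxy.storage, not implementation.storage
```

If the implementation's idea of which slot holds `counter` ever diverges from the proxy's actual layout, the same mechanism silently corrupts unrelated state—that is the storage-collision risk the answer refers to.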
Q: You’re interviewing for a Solidity Developer-style role. How do you design and test invariants for a DeFi contract?
Why they ask it: They want to see if you can think in properties, not just unit tests.
Answer framework: Invariant ladder (1) accounting invariants, (2) permission invariants, (3) economic invariants.
Example answer: I start with accounting: total shares map to total assets within rounding bounds, and no path can mint value without input. Then permissions: only specific roles can pause/upgrade, and role changes are logged and delayed. Then economics: simulate adversarial sequences—deposit/withdraw loops, price manipulation windows, and MEV-style ordering. I use fuzzing/property tests (Foundry) and run them against forked mainnet state to catch integration weirdness.
Common mistake: Only writing happy-path unit tests with fixed inputs.
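The accounting rung of the ladder can be sketched as a property test. In practice this would be a Foundry invariant test in Solidity; the TypeScript below is a language-agnostic sketch with a hypothetical shares/assets vault and a simple deterministic PRNG driving random action sequences.

```typescript
// Property-test sketch: random deposit/withdraw sequences must preserve
// the accounting invariant "sum of user shares == totalShares" and never
// drive totals negative.
class Vault {
  totalAssets = 0;
  totalShares = 0;
  shares = new Map<string, number>();

  deposit(user: string, assets: number): void {
    // 1:1 while empty, otherwise proportional, rounded down (like ERC-4626 math)
    const minted = this.totalShares === 0
      ? assets
      : Math.floor((assets * this.totalShares) / this.totalAssets);
    this.totalAssets += assets;
    this.totalShares += minted;
    this.shares.set(user, (this.shares.get(user) ?? 0) + minted);
  }

  withdraw(user: string, sharesIn: number): void {
    const owned = this.shares.get(user) ?? 0;
    const burn = Math.min(owned, sharesIn);
    if (burn === 0) return;
    const assetsOut = Math.floor((burn * this.totalAssets) / this.totalShares);
    this.shares.set(user, owned - burn);
    this.totalShares -= burn;
    this.totalAssets -= assetsOut;
  }
}

function checkInvariants(seed: number, steps: number): boolean {
  let s = seed; // Lehmer LCG: deterministic, so failures are reproducible
  const rand = () => (s = (s * 48271) % 2147483647) / 2147483647;
  const v = new Vault();
  const users = ["alice", "bob", "carol"];
  for (let i = 0; i < steps; i++) {
    const u = users[Math.floor(rand() * users.length)];
    if (rand() < 0.5) v.deposit(u, Math.floor(rand() * 1000) + 1);
    else v.withdraw(u, Math.floor(rand() * 500));
    const sumShares = [...v.shares.values()].reduce((a, b) => a + b, 0);
    if (sumShares !== v.totalShares) return false; // accounting invariant
    if (v.totalAssets < 0 || v.totalShares < 0) return false;
  }
  return true;
}
```

The key habit is the same regardless of language: state the property once, then let randomized sequences hunt for a counterexample instead of hand-picking inputs.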
Q: How do you handle chain reorganizations in an indexer or backend service?
Why they ask it: Real DApp Developer work breaks when you assume finality too early.
Answer framework: Design for reorgs (finality threshold + reversible state + reconciliation).
Example answer: I treat events as tentative until a confirmation threshold based on the chain’s reorg risk. The indexer stores block hashes and can roll back derived state for the last N blocks. If a reorg happens, we replay from the common ancestor and reconcile balances/positions. For user-facing UX, we label pending states clearly and avoid irreversible off-chain actions until finality.
Common mistake: Assuming “once I saw the event, it’s final.”
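The rollback mechanics above can be made concrete with a small sketch. This is a hedged TypeScript illustration with hypothetical names (`ChainTracker`, `apply`), not any real indexer's API: it keeps the last N block hashes and unwinds to the common ancestor when a new block's parent doesn't match the tip.

```typescript
// Minimal reorg-aware tracker: on a parent-hash mismatch, pop blocks back
// to the common ancestor and report how much derived state to roll back.
interface Block {
  number: number;
  hash: string;
  parentHash: string;
}

class ChainTracker {
  private recent: Block[] = []; // oldest → newest, bounded window
  constructor(private maxDepth: number) {}

  // Returns the number of blocks rolled back (0 on a clean extension).
  apply(block: Block): number {
    let rolledBack = 0;
    while (
      this.recent.length > 0 &&
      this.recent[this.recent.length - 1].hash !== block.parentHash
    ) {
      this.recent.pop(); // reorg: unwind toward the common ancestor
      rolledBack++;
      if (rolledBack > this.maxDepth) {
        throw new Error("reorg deeper than maxDepth; full resync needed");
      }
    }
    if (this.recent.length === 0 && rolledBack > 0) {
      throw new Error("no common ancestor in window; full resync needed");
    }
    this.recent.push(block);
    if (this.recent.length > this.maxDepth) this.recent.shift();
    return rolledBack;
  }
}
```

A real indexer would pair each tracked block with the derived-state writes it produced, so "roll back N blocks" translates directly into reversing those writes before replaying from the ancestor.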
Q: What’s your approach to gas optimization without making the contract unreadable or unsafe?
Why they ask it: They want pragmatic optimization, not micro-optimizations that introduce bugs.
Answer framework: Measure–Optimize–Re-verify.
Example answer: I optimize only after measuring, using Foundry's gas reports and gas snapshots to see where cost actually lands. I target high-frequency paths first—loops, storage writes, and redundant checks—while keeping readability with clear comments and helper functions. If an optimization changes control flow, I re-run fuzz tests and compare traces to ensure behavior didn't drift. I'll also consider architectural changes like batching or off-chain computation when it's safe.
Common mistake: Premature optimization or “assembly for everything.”
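The most common "target storage writes in loops" win can be illustrated off-chain. This TypeScript toy is hypothetical: every `Map` access stands in for an SLOAD/SSTORE, and counting accesses shows why hoisting values out of a loop matters on-chain even though both versions compute the same result.

```typescript
// Toy model of "cache storage reads": the metered Map models contract
// storage, where every read and write has a real gas cost.
class MeteredStorage {
  reads = 0;
  writes = 0;
  private slots = new Map<string, number>();
  get(k: string): number { this.reads++; return this.slots.get(k) ?? 0; }
  set(k: string, v: number): void { this.writes++; this.slots.set(k, v); }
}

// Naive: touches "storage" on every iteration (2 reads + 1 write each).
function sumNaive(s: MeteredStorage, n: number): void {
  for (let i = 0; i < n; i++) {
    s.set("total", s.get("total") + s.get("increment"));
  }
}

// Optimized: read once into "memory", write once at the end.
function sumCached(s: MeteredStorage, n: number): void {
  const inc = s.get("increment");
  let total = s.get("total");
  for (let i = 0; i < n; i++) total += inc;
  s.set("total", total);
}
```

Both produce identical state, which is exactly what the re-verification step should confirm—measure the access counts, then prove behavior didn't drift.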
Q: Describe your toolchain for smart contract development and auditing in 2026.
Why they ask it: They’re checking whether you can operate like a professional, not a hobbyist.
Answer framework: Pipeline story (dev → test → analyze → deploy → monitor).
Example answer: I typically use Foundry for testing and scripting, plus Slither for static analysis and Echidna-style fuzzing when appropriate. For dependency hygiene, I pin versions and run CI with reproducible builds. Deployments go through staged environments and verified contracts, with a clear upgrade and pause policy. Post-deploy, I set up event monitoring and alerts for abnormal parameter changes or large value movements.
Common mistake: Naming tools without explaining how they fit into a repeatable pipeline.
Q: How would you design a wallet connection and signing flow to reduce phishing risk?
Why they ask it: US companies care about consumer protection and support burden.
Answer framework: Threat model → UX controls → technical controls.
Example answer: I start by minimizing signature prompts and using EIP-712 typed data so users can read what they sign. I avoid blind personal_sign for anything sensitive and include domain separators and chain IDs to prevent replay. On the frontend, I show human-readable summaries and warnings for approvals, especially unlimited allowances. On the backend, I log signature intent and detect anomalous patterns like repeated approvals to unknown spenders.
Common mistake: Treating signing as “just MetaMask integration.”
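The domain-separator point is easiest to see in the payload itself. Below is the shape of an EIP-712 typed-data payload (what `eth_signTypedData_v4` receives); the `OrderIntent` struct, field names, and `buildTypedOrder` helper are hypothetical examples, not a real dapp's schema.

```typescript
// EIP-712 typed-data payload: the domain binds the signature to one app,
// one chain, and one verifying contract, which is what blocks cross-site
// and cross-chain replay. The nonce blocks replay within the same domain.
function buildTypedOrder(
  chainId: number,
  verifyingContract: string,
  maker: string,
  amount: string,
  nonce: number,
) {
  return {
    types: {
      EIP712Domain: [
        { name: "name", type: "string" },
        { name: "version", type: "string" },
        { name: "chainId", type: "uint256" },
        { name: "verifyingContract", type: "address" },
      ],
      OrderIntent: [
        { name: "maker", type: "address" },
        { name: "amount", type: "uint256" },
        { name: "nonce", type: "uint256" },
      ],
    },
    primaryType: "OrderIntent",
    domain: { name: "ExampleDapp", version: "1", chainId, verifyingContract },
    message: { maker, amount, nonce },
  };
}
```

Because the wallet renders `types` and `message` field by field, the user sees "maker / amount / nonce" instead of an opaque hex blob—which is the phishing-resistance argument in the answer above.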
Q: What US regulations or compliance constraints show up in blockchain engineering work?
Why they ask it: Even developers get pulled into decisions shaped by regulation.
Answer framework: Scope–Impact–Boundary (what you know, how it affects design, when you involve experts).
Example answer: I’m not a lawyer, but in the US you regularly see constraints tied to AML/KYC expectations, sanctions screening, and custody/consumer protection requirements depending on the product. That affects architecture: where identity checks live, how you handle blocked addresses, and what you log for audits. If a feature touches custody or looks like it could trigger securities questions, I push for early review with compliance and document assumptions. Engineering’s job is to make enforcement technically possible without creating backdoors.
Common mistake: Either pretending regulation doesn’t matter, or giving legal advice with confidence.
Q: What would you do if your RPC provider starts returning inconsistent data during a critical launch?
Why they ask it: They’re testing incident response and resilience—very US “ops-minded.”
Answer framework: Stabilize–Verify–Fail over–Postmortem.
Example answer: First I’d pause any automated actions that could submit transactions based on bad reads. Then I’d verify against multiple providers and, if possible, a self-hosted node to identify whether it’s provider-specific or chain-wide. I’d fail over reads/writes to a secondary provider and increase confirmation thresholds temporarily. After stabilizing, I’d write a postmortem, add provider health checks, and implement quorum reads for critical state.
Common mistake: Continuing to push transactions while “hoping it fixes itself.”
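The "quorum reads for critical state" remediation can be sketched in a few lines. This TypeScript helper is a hypothetical illustration: it takes already-fetched responses from multiple providers and only accepts a value that enough of them agree on, surfacing the divergence otherwise.

```typescript
// Quorum read sketch: accept a value only if at least `quorum` providers
// returned it; otherwise fail loudly instead of acting on a bad read.
function quorumRead<T>(results: T[], quorum: number): T {
  const counts = new Map<string, { value: T; count: number }>();
  for (const r of results) {
    const key = JSON.stringify(r); // structural equality for simple values
    const entry = counts.get(key) ?? { value: r, count: 0 };
    entry.count++;
    counts.set(key, entry);
  }
  for (const { value, count } of counts.values()) {
    if (count >= quorum) return value;
  }
  throw new Error(`no quorum: providers disagree (${results.length} responses)`);
}
```

Wiring this in front of any read that gates a transaction turns "provider returned garbage" from a silent wrong action into a visible incident, which is the whole point of the Stabilize step.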
Q: How do you evaluate whether to use an L2, a sidechain, or mainnet for a new feature?
Why they ask it: They want you to connect product requirements to security and cost.
Answer framework: Requirements matrix (security, cost, latency, ecosystem, exit risk).
Example answer: I map the feature’s risk profile: if it holds meaningful value or needs maximum neutrality, mainnet wins despite cost. If we need cheap interactions and can accept different trust assumptions, an L2 is often the sweet spot, but I evaluate withdrawal/exit risk and sequencer dependencies. For sidechains, I’m explicit about validator trust and bridge risk. Then I propose a phased rollout: start with low caps on cheaper execution, expand as monitoring proves stability.
Common mistake: Choosing a chain because it’s trendy or because fees are low today.