4) Technical and professional questions (the ones that decide it)
This is where US interviews for C# Developer roles get blunt. You’ll be asked to explain tradeoffs, not recite definitions. Expect follow-ups like “what breaks if we do it your way?” and “how would you test that?”
You’ll also see stack-specific prompts. Even if the job title is C# Developer, postings often narrow into ASP.NET Developer work, or a C# .NET Core Developer backend, or a team integrating with services written by a Java Developer, Python Developer, or Node.js Developer group. Be ready to speak that language.
Q: Explain the difference between Task, async/await, and threads in C#. When can async code still deadlock?
Why they ask it: They’re testing real-world concurrency understanding, not syntax.
Answer framework: Concept → Example → Failure mode → Mitigation.
Example answer: A Task represents an asynchronous operation; it may run on a thread pool thread, but it doesn’t have to. async/await is a way to compose Tasks without blocking, while threads are the underlying execution units. Deadlocks can still happen if you block on async work—like calling .Result or .Wait—especially in environments with a single-threaded synchronization context, such as classic ASP.NET or UI frameworks. My default is “async all the way”: avoid blocking calls, and use ConfigureAwait(false) in library code so continuations don’t capture the caller’s context.
Common mistake: Saying “async means multithreading,” which is not reliably true.
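A minimal sketch of the two patterns in that answer (AsyncPatterns, AddAsync, and the delay are illustrative; the blocking version only deadlocks when a single-threaded synchronization context is present, so it will appear to work in a console app or unit test):

```csharp
using System.Threading.Tasks;

public static class AsyncPatterns
{
    // Risky: sync-over-async. Fine on the bare thread pool, but under a
    // single-threaded SynchronizationContext (classic ASP.NET, WinForms/WPF)
    // .Result blocks the one thread the awaited continuation needs to resume on.
    public static int AddBlocking(int a, int b)
        => AddAsync(a, b).Result;

    // Safer: async all the way. ConfigureAwait(false) tells the continuation
    // not to capture the caller's context—appropriate in library code.
    public static async Task<int> AddAsync(int a, int b)
    {
        await Task.Delay(10).ConfigureAwait(false);
        return a + b;
    }
}
```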
Q: In ASP.NET Core, how do you design a versioned API without breaking clients?
Why they ask it: They want to see if you can evolve APIs safely in production.
Answer framework: Contract-first: define compatibility rules → choose versioning strategy → deprecation plan.
Example answer: I start by defining what’s breaking: removing fields, changing semantics, or tightening validation. Then I pick a strategy—URL versioning or header-based—based on client constraints, and I keep old versions running with clear deprecation timelines. I add contract tests and monitor usage by version so we can retire versions based on data. The goal is predictable change, not surprise outages.
Common mistake: “We’ll just update the clients,” ignoring external consumers and long-tail integrations.
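To make the header-based option concrete, here is a dependency-free sketch of version resolution with a default (ApiVersionResolver and the header name are hypothetical; a real ASP.NET Core service would typically use the Asp.Versioning packages rather than hand-rolling this):

```csharp
using System;

public static class ApiVersionResolver
{
    // Resolve the requested version from an "api-version" header value,
    // falling back to the newest supported version when the header is absent.
    public static string Resolve(string? headerValue, string[] supported)
    {
        if (string.IsNullOrEmpty(headerValue))
            return supported[supported.Length - 1]; // default: latest

        foreach (var v in supported)
            if (v == headerValue) return v;

        // An unknown version is a client error, not a silent fallback.
        throw new ArgumentException($"Unsupported api-version '{headerValue}'");
    }
}
```

Defaulting to latest versus pinning unversioned clients to the oldest version is itself a compatibility decision worth stating in the interview.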
Q: How do you prevent and handle null reference issues in modern C#?
Why they ask it: They’re checking whether you use the language features that reduce production bugs.
Answer framework: Prevent → Detect → Enforce.
Example answer: I enable nullable reference types and treat warnings seriously, especially at boundaries like controllers and message handlers. I use guard clauses for external inputs and prefer explicit types over “maybe-null” flows. In reviews, I look for null-forgiving operators that hide real risk. Over time, nullable annotations plus tests reduce the “random NRE in prod” class of incidents.
Common mistake: Relying on try/catch around everything instead of fixing the contract.
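A small sketch of the boundary pattern from the answer (UserHandler and NormalizeEmail are illustrative names): nullable annotations inside the codebase, plus a guard clause where data crosses in from outside.

```csharp
#nullable enable
using System;

public class UserHandler
{
    // Guard clause at the boundary: external input can still be null even when
    // annotations say otherwise (deserialization, reflection, older callers).
    public string NormalizeEmail(string? email)
    {
        if (string.IsNullOrWhiteSpace(email))
            throw new ArgumentException("Email is required.", nameof(email));

        // Past the guard, the compiler's flow analysis knows email is non-null.
        return email.Trim().ToLowerInvariant();
    }
}
```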
Q: Compare Entity Framework Core vs. Dapper for a high-throughput service. When would you choose each?
Why they ask it: They’re testing pragmatic data-access decisions.
Answer framework: Workload → Constraints → Tradeoff → Decision.
Example answer: EF Core is great when you want strong modeling, migrations, and productivity, and your query patterns are predictable. For high-throughput or very query-tuned paths, Dapper can be a better fit because it’s closer to SQL and has less overhead. I’ve used EF Core for most CRUD and Dapper for a couple of hot endpoints where we needed tight control over joins and projections. The key is consistency and not creating a “two ORM” mess without a clear boundary.
Common mistake: Treating EF Core as “slow” by default without profiling.
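The stylistic difference can be sketched side by side. This is illustrative only: it requires the EF Core and Dapper packages plus a real DbContext/connection, and Order, OrderSummary, and the Orders table are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Threading.Tasks;
using Dapper;                          // NuGet: Dapper
using Microsoft.EntityFrameworkCore;   // NuGet: Microsoft.EntityFrameworkCore

public class Order
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
    public DateTime CreatedAt { get; set; }
}

public record OrderSummary(Guid Id, decimal Total);

public static class OrderQueries
{
    // EF Core: model-driven, composable LINQ—productive for standard CRUD paths.
    public static IQueryable<OrderSummary> Recent(DbContext db, DateTime since) =>
        db.Set<Order>()
          .Where(o => o.CreatedAt >= since)
          .Select(o => new OrderSummary(o.Id, o.Total));

    // Dapper: hand-written SQL for a hot endpoint where you want full control
    // over the projection and the query plan.
    public static Task<IEnumerable<OrderSummary>> RecentHotPath(IDbConnection conn, DateTime since) =>
        conn.QueryAsync<OrderSummary>(
            "SELECT Id, Total FROM Orders WHERE CreatedAt >= @since",
            new { since });
}
```

The “clear boundary” point from the answer shows up here too: keeping the Dapper paths in a dedicated query class makes the two-ORM split deliberate rather than accidental.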
Q: What’s your approach to authentication and authorization in an ASP.NET Core API—especially with OAuth2/OIDC?
Why they ask it: They want to know you can build secure services the US market expects.
Answer framework: Threat model → Standards → Implementation details → Verification.
Example answer: I prefer OIDC for authentication and OAuth2 scopes/claims for authorization, typically integrating with an identity provider rather than rolling our own. In ASP.NET Core, I configure JWT validation carefully—issuer, audience, clock skew—and I use policy-based authorization so rules are testable. I also think about token lifetime, refresh flows for SPAs, and service-to-service auth using managed identities where possible. Then I verify with integration tests and security reviews.
Common mistake: Confusing authentication (“who are you?”) with authorization (“what can you do?”).
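A configuration sketch of the JWT validation points mentioned above, using the Microsoft.AspNetCore.Authentication.JwtBearer package (the authority URL, audience, policy name, and scope claim are placeholders):

```csharp
// In Program.cs of an ASP.NET Core API. Values below are illustrative.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://idp.example.com"; // OIDC issuer; discovery fills in keys
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidAudience = "orders-api",
            ClockSkew = TimeSpan.FromSeconds(30) // tighten the 5-minute default
        };
    });

// Policy-based authorization keeps rules named and unit-testable.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("CanRefund", p => p.RequireClaim("scope", "orders.refund"));
});
```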
Q: How would you design idempotency for a payment or order API endpoint?
Why they ask it: This is a “real backend” question—retries happen, and duplicates are expensive.
Answer framework: Failure modes → Idempotency key → Storage/TTL → Concurrency.
Example answer: I’d require an idempotency key per client request and store the key with the request hash and the resulting order/payment ID. On retries, we return the original result instead of creating a duplicate. I’d enforce uniqueness at the database level and handle race conditions with transactions or upserts. I’d also set a TTL policy and log idempotency hits to detect client retry storms.
Common mistake: Saying “we’ll just check if the order exists” without a reliable key and concurrency protection.
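An in-memory sketch of the key-to-result mapping (IdempotentOrderService is a hypothetical name; a production store would be a database table with a unique constraint on the key, a stored request hash, and a TTL):

```csharp
using System;
using System.Collections.Concurrent;

public class IdempotentOrderService
{
    // Key -> the order ID produced by the first successful request.
    private readonly ConcurrentDictionary<string, Guid> _results = new();

    // Every retry carrying the same idempotency key gets the same order ID.
    // GetOrAdd may invoke the factory twice under a race, but only one value
    // wins—mirroring the DB-level unique constraint you'd rely on for real.
    public Guid CreateOrder(string idempotencyKey)
        => _results.GetOrAdd(idempotencyKey, _ => Guid.NewGuid());
}
```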
Q: What would you do if your primary database becomes unavailable during peak traffic?
How to structure your answer:
- Stabilize: reduce blast radius (circuit breakers, read-only mode, queue writes).
- Communicate: declare incident, set expectations, assign roles.
- Recover: failover/restore, validate data integrity, then add prevention work.
Example: I’d trip a circuit breaker to stop hammering the DB, degrade non-critical features, and route writes into a queue if the business can tolerate eventual consistency. If we have replicas, I’d switch reads to a replica and execute a planned failover if supported. After recovery, I’d run reconciliation checks and add alerts on connection pool exhaustion and replication lag.
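The “trip a circuit breaker” step can be sketched without any framework (production code would typically use a resilience library such as Polly instead; the half-open behavior here is deliberately simplified):

```csharp
using System;

// Minimal circuit breaker: after N consecutive failures, fail fast for a
// cooldown window instead of hammering the struggling database.
public class CircuitBreaker
{
    private readonly int _threshold;
    private readonly TimeSpan _cooldown;
    private int _failures;
    private DateTime _openedAt = DateTime.MinValue;

    public CircuitBreaker(int threshold, TimeSpan cooldown)
    {
        _threshold = threshold;
        _cooldown = cooldown;
    }

    // Open while the failure count is at the threshold and the cooldown
    // hasn't elapsed; after the cooldown, one trial call is let through.
    public bool IsOpen => _failures >= _threshold
        && DateTime.UtcNow - _openedAt < _cooldown;

    public T Execute<T>(Func<T> action)
    {
        if (IsOpen)
            throw new InvalidOperationException("Circuit open: failing fast.");
        try
        {
            var result = action();
            _failures = 0; // success closes the circuit
            return result;
        }
        catch
        {
            if (++_failures >= _threshold) _openedAt = DateTime.UtcNow;
            throw;
        }
    }
}
```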
Q: How do you structure logging and tracing in a .NET microservices environment?
Why they ask it: They’re testing whether you can make systems observable.
Answer framework: Signals: logs + metrics + traces → correlation → actionable dashboards.
Example answer: I use structured logs with consistent fields—traceId, spanId, userId, route, dependency—and I avoid logging sensitive data. I emit metrics for latency, error rate, and saturation, and I use distributed tracing (often OpenTelemetry) to follow a request across services. The goal is that on-call can answer “what broke, where, and why” in minutes, not hours.
Common mistake: Logging everything as plain text and hoping grep will save you.
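The structured-versus-plain-text distinction in one fragment (assumes _logger is an injected ILogger<T> from Microsoft.Extensions.Logging; the field names are illustrative):

```csharp
// Message templates keep each field queryable in the log backend.
_logger.LogInformation(
    "Order {OrderId} charged {Amount} via {Gateway} in {ElapsedMs}ms",
    orderId, amount, gateway, elapsedMs);

// Anti-pattern: string interpolation flattens everything into one opaque
// string, so you can no longer filter or aggregate by OrderId or Gateway.
// _logger.LogInformation($"Order {orderId} charged {amount} via {gateway}");
```

With OpenTelemetry enabled, the active trace and span IDs are attached to these entries automatically, which is what makes cross-service correlation work.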
Q: What’s your testing strategy for a C# .NET Core Developer role—unit, integration, and contract tests?
Why they ask it: They want to see if you can prevent regressions in a CI/CD pipeline.
Answer framework: Test pyramid + “boundaries first” (focus on APIs, DB, and external dependencies).
Example answer: I keep unit tests fast for business logic and edge cases, then add integration tests around the real database using containers when possible. For service-to-service calls, I like contract tests so teams don’t break each other silently. In CI, I run the fast suite on every PR and the heavier suite nightly or on main merges. The goal is confidence without turning tests into a tax.
Common mistake: Only writing unit tests and ignoring the database and HTTP boundaries where most bugs live.
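A sketch of the integration-test layer using xUnit and WebApplicationFactory from the Microsoft.AspNetCore.Mvc.Testing package (Program is the API's entry point; the route is a placeholder):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Boots the real ASP.NET Core pipeline in-process, so routing, filters,
// and serialization are exercised—not just the business logic.
public class OrdersApiTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public OrdersApiTests(WebApplicationFactory<Program> factory)
        => _client = factory.CreateClient();

    [Fact]
    public async Task GetOrders_ReturnsSuccess()
    {
        var response = await _client.GetAsync("/api/v1/orders");
        response.EnsureSuccessStatusCode(); // fails the test on any non-2xx
    }
}
```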
Q: In the US, many teams handle sensitive data. How do you think about compliance like SOC 2 or HIPAA in backend design?
Why they ask it: They need engineers who won’t accidentally create compliance risk.
Answer framework: Data classification → Controls → Evidence.
Example answer: First I clarify what data is regulated—PHI for HIPAA, or customer data under SOC 2 controls. Then I design around least privilege, encryption in transit and at rest, audit logging, and retention policies. I also think about operational controls: access reviews, secrets management, and incident response. Compliance isn’t just code—it’s proving the controls exist and are followed.
Common mistake: Treating compliance as “the security team’s job,” not an engineering responsibility.