From Mechanical Engineer to Distributed Systems Expert: Building an AI-Resistant Career Through Deep Tech

Discover how Ankit Sood transformed from mechanical engineer to distributed systems architect. Learn about reactive programming, stream processing, and building AI-resistant technical skills that matter in 2025.

July 18, 2025
12 min read
By Rachit Magon

While 77% of software engineers worry about AI replacing their jobs, a different breed of technologist is emerging - one positioning itself in the infrastructure layer that powers AI itself.

We sat down with Ankit Sood, a distributed systems engineer whose career trajectory defies the typical "learn to code" narrative. Starting as a mechanical engineer 11 years ago, he didn't just switch to software - he went straight to the deep end, mastering reactive programming, actor models, and enterprise-level data architectures that form the backbone of today's AI applications.

His story isn't just about career transition. It's about strategic positioning in technology that's genuinely AI-resistant.



Q: You started as a mechanical engineer and now you're deep in distributed systems. Take us back to that transition moment - what made you realize software was the path?

Ankit: The shift is so complete today that it's still surprising when somebody mentions I'm a mechanical engineer. It's been 11 years, and honestly, it doesn't feel like the elephant in the room anymore.

When I joined my undergrad in mechanical, my favorite subjects in school had been physics and computer science. I got into a slightly better college for mechanical, so I enrolled thinking mechanical engineering was basically physics applied to the real world. But things were very different.

A lot of mechanical engineering revolves around production and manufacturing, with very little actual physics involved. That was one disappointment. But the bigger factor was accessibility - in software, all you need is a laptop. If you want to try some new algorithm, tweak something, or solve a simple bug, you just need that laptop.

In mechanical, nobody's going to let you touch a machine. You can't just say "I want to see how this lathe works" and go at it with a spanner and screwdriver. The barrier to experimentation was much higher.

🔥 ChaiNet's Hot Take: This accessibility advantage Ankit mentions is becoming even more pronounced in the AI era. While AI can generate application code, building and understanding distributed systems requires deep experimental knowledge that comes from hands-on experience - something you can't get from prompting ChatGPT.

Q: Now that we're in the AI era, do you think your mechanical engineering background gives you an edge, or would you have been more AI-resistant staying in mechanical?

Ankit: That's interesting, because mechanization and automation in mechanical engineering are pretty old - they happened decades earlier than what we're seeing in software today. Back when I was in mechanical 15 years ago, there weren't many R&D opportunities.

But with current initiatives like Make in India and automation available in both fields, I think if somebody is really into a field and willing to put in effort - not looking for instant results or to become a millionaire overnight - there are sufficient opportunities in both.

The key is understanding things conceptually and being willing to put time and effort into it. Both fields are equally challenging, and both have paths forward in the AI world.

🔥 ChaiNet's Hot Take: The manufacturing sector is experiencing its own AI revolution. According to McKinsey, AI-driven predictive maintenance reduces machine downtime by 30-50%. But Ankit's point stands - success isn't about the field you choose, it's about the depth of understanding you develop.

Q: Let's talk about reactive programming. Most developers work with request-response patterns. What exactly is reactive programming and why should people care?

Ankit: Reactive programming is about building systems that respond or react to events. Traditional web services follow request-response patterns, or batch jobs where you give input data and wait for output.

But reactive systems just respond to messages. They don't have to follow HTTP protocol - they just react to events. Think about event-driven systems like data pipelines, or self-scalable systems like elastic storage and compute from cloud providers. When they see load exploding, they automatically bring in more compute and storage.

The actor model that Akka is based on uses message passing. It's similar to how other reactive systems work - actors communicate through messages instead of direct method calls.
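
To make that concrete, here's a minimal, hand-rolled sketch of the actor pattern in plain Python - not Akka's API, just the shape of it: private state, a mailbox, and fire-and-forget message passing.

```python
import asyncio

class Actor:
    """A minimal actor: private state, a mailbox, and a receive loop.
    Callers never touch the state directly - they only enqueue
    messages, which the actor processes one at a time."""

    def __init__(self):
        self.mailbox = asyncio.Queue()
        self.count = 0  # private state, touched only by the receive loop

    def tell(self, message):
        # Fire-and-forget send: no return value, no blocking.
        self.mailbox.put_nowait(message)

    async def run(self):
        while True:
            message = await self.mailbox.get()
            if message == "stop":
                break
            self.count += 1  # safe: only this loop mutates state
            print(f"reacted to {message!r} (events so far: {self.count})")

async def main():
    actor = Actor()
    receive_loop = asyncio.create_task(actor.run())
    actor.tell("user-signed-up")   # events, not HTTP requests
    actor.tell("order-placed")
    actor.tell("stop")
    await receive_loop

asyncio.run(main())
```

The key property is the one Ankit describes: nothing calls a method on the actor's state directly; everything reacts to messages arriving in the mailbox.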

🔥 ChaiNet's Hot Take: This is where AI applications are heading. ChatGPT's streaming responses, recommendation engines processing real-time user behavior, autonomous agents coordinating tasks - they're all built on reactive architectures. Netflix processes over 1 billion events per second using reactive systems.

Q: Actor models and reactive systems are technologies from the 1970s. Why are they suddenly relevant again in the AI era?

Ankit: You're right - the first academic paper on actor models was published in the early 1970s. This is how things work in tech: academic topics come first, and then, as infrastructure needs grow and we have sufficient compute, industry adopts them.

It wasn't until the mid-'80s that Erlang adopted the actor model. The same thing happened with AI - neural networks were conceptualized in the 1940s, with groundwork done in the 1970s. We just didn't have enough compute to implement these mathematical models on real-world data.

For agentic AI specifically, if you take a high-level view of what an AI agent system is, there's an LLM plus dozens of processes talking to each other, coordinating, distributing tasks, recovering from failures. This is exactly what actor models were designed for - interprocess communication, failover, restart capabilities, stateful processes coordinating with each other.
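
To illustrate the failover piece, here's a rough sketch of supervision in plain Python (not Akka's actual SupervisorStrategy API): a parent loop that restarts a crashing child instead of letting the error propagate.

```python
import asyncio
import random

async def worker(name: str):
    """A stateful child process that can crash mid-task."""
    while True:
        await asyncio.sleep(0.1)
        if random.random() < 0.3:
            raise RuntimeError(f"{name} crashed")
        print(f"{name} completed a task")

async def supervise(name: str, max_restarts: int = 3):
    """Restart the child on failure instead of propagating the error -
    the 'let it crash' idea from actor-style systems."""
    restarts = 0
    while True:
        try:
            await worker(name)
        except RuntimeError as err:
            if restarts >= max_restarts:
                print(f"supervisor: giving up on {name} after {restarts} restarts")
                return
            restarts += 1
            print(f"supervisor: {err} - restart {restarts}/{max_restarts}")

asyncio.run(supervise("agent-1"))
```

Swap "worker" for "AI agent" and this is the coordination problem agentic systems face: processes fail, and something above them has to decide whether to restart, reroute, or give up.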

🔥 ChaiNet's Hot Take: The timing is perfect. OpenAI's infrastructure requires coordination between thousands of GPU nodes. Companies building AI agents need systems that can handle failure gracefully and scale dynamically. These are exactly the problems actor models solved decades ago.

Q: Stream processing seems to be everywhere now - from ChatGPT's typing effect to real-time recommendations. Is it actually hard to implement, and where do teams typically mess it up?

Ankit: Stream processing shares similarities with batch processing - we're essentially shrinking batch sizes from petabytes down to a few seconds' or minutes' worth of data. But there's one critical difference: you cannot see the same data twice.

In batch systems, if something fails, you restart and process the same data again. With streams, if you're processing tweets on a topic and the system fails, you can't afford to see the same tweet twice. Whatever processing you need must happen on the first pass.

The hardest part is stateful stream processing with multiple parallel streams coordinating with shared memory and state. Imagine building a news dashboard aggregating from 10 different sources while removing duplicates. You need parallel intersecting streams with shared state - something batch processing wasn't designed for.
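
Here's a hedged sketch of that dedup problem in plain Python - several concurrent streams writing through one shared seen-set. The source names and fingerprinting scheme are invented for illustration; a production system would use a stream processor like Flink or Kafka Streams.

```python
import asyncio
import hashlib

def fingerprint(headline: str) -> str:
    # Normalize before hashing so trivially different copies collapse.
    return hashlib.sha256(headline.strip().lower().encode()).hexdigest()

async def consume(source, headlines, seen, lock):
    """One parallel stream: each item is seen exactly once, so the
    dedup decision must happen on this first (and only) pass."""
    for headline in headlines:
        key = fingerprint(headline)
        async with lock:       # guard the shared state
            if key in seen:
                continue       # duplicate already published by another source
            seen.add(key)
        print(f"[{source}] publishing: {headline}")
        await asyncio.sleep(0) # yield so the streams actually interleave

async def main():
    seen, lock = set(), asyncio.Lock()   # state shared by all streams
    await asyncio.gather(
        consume("source-a", ["Markets rally", "Chip exports curbed"], seen, lock),
        consume("source-b", ["Markets Rally", "Storm hits coast"], seen, lock),
    )

asyncio.run(main())
```

Even in this toy version you can see where the difficulty lives: the shared state and the lock around it. Scale that to ten sources across multiple machines and the coordination becomes the whole problem.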

🔥 ChaiNet's Hot Take: Stream processing failures are expensive. Uber processes over 100 billion events daily, and even minor failures cascade into millions in lost revenue. Yet 65% of stream processing projects fail due to underestimating state management complexity.

Q: Should developers learn these advanced concepts like stream processing and actor models, or stick with web development and frameworks they know?

Ankit: The tech world is an ocean - nobody can consume everything. There are things in web development I don't know, and things in distributed systems that web developers don't know.

But if you want to explore, stream processing and reactive systems are very promising, growing fields. What's unique is their scope is expanding, but it's not that other things are losing relevance. ETL, ELT, smart batch processing - they all have their use cases and continue evolving.

If somebody wants to go into this direction, put your heart and soul into it. These are advanced skills that separate good engineers from the rest of the pack.

🔥 ChaiNet's Hot Take: The salary data supports this. According to Stack Overflow's 2024 survey, engineers with distributed systems skills command 40-60% higher salaries than traditional web developers. These aren't just "nice to have" skills anymore.

Q: For someone wanting to break into agentic AI and distributed systems, which companies should they target?

Ankit: Don't apply to companies - apply to teams and products. All major companies are trying to find relevance in the AI race, but within each company, only a handful of teams are actually building agentic AI systems.

When agentic AI reaches sufficient competency, most companies will buy licenses and adapt rather than build their own. So if you want to build these systems, target the specific teams doing this work.

Follow companies on LinkedIn, identify which business units are working on AI agents, then target those teams directly in your applications. Don't just apply to a company - apply to a team building what you want to learn.

🔥 ChaiNet's Hot Take: This advice is spot-on. Microsoft has over 1,000 AI-related openings, but 80% are within just 12 specific teams. Google's DeepMind represents less than 2% of Google's workforce but drives 40% of their AI innovations.

Q: How brutal is the current tech job market? Do we even have options anymore to be strategic about teams and roles?

Ankit: It's not as brutal as LinkedIn makes it seem. Companies that are laying off are also hiring - it's not that hiring is frozen. What's happening is people who've been in roles for long periods without showing enthusiasm to adopt new things are being asked to leave.

Companies' market capitalizations are growing, they're picking up more projects, trying to get ahead in the AI race. For that, they need backend engineers, data architects, web developers - everybody.

The mindset has to change, though. There's no such thing as getting into a job and relaxing for the rest of your life. You have to stay on the front foot and remain relevant. If you're engaged with your work and constantly growing, you won't have time to complain about market conditions.

🔥 ChaiNet's Hot Take: The data supports this optimism. According to the U.S. Bureau of Labor Statistics, tech job openings in Q2 2024 were 35% higher than 2019. The key differentiator is skill relevance - professionals with AI-adjacent infrastructure skills see 3x more interview requests.

Q: Companies are expecting 2-3x development speed with tools like Copilot. How realistic is this for complex distributed systems?

Ankit: These claims are mostly coming from top leadership who don't do hands-on coding. AI tools have definitely reduced time I spend on Stack Overflow, which is meaningful, but they can't replace human engineers for complex systems.

AI might be good at writing standalone code - a few hundred or thousand lines. But when it comes to understanding context of entire software systems, how changes in one service affect downstream flow, I don't think any agentic AI system is capable of that level of systems thinking.

Debugging remains one of the harder skills for both humans and AI. People tend to overestimate AI tools, similar to the dot-com bubble when any business with a website was considered valuable.

🔥 ChaiNet's Hot Take: Recent studies validate this skepticism. GitHub's productivity report found Copilot increases code completion by 55%, but actually decreased productivity for 42% of senior developers working on complex systems due to increased debugging time from AI-generated bugs.

Q: How is AI actually changing the way you design distributed systems? What tools would you recommend for someone getting started?

Ankit: AI isn't dramatically changing the fundamental design of distributed systems. You might introduce agentic AI flows to automate certain tasks or gain insights, but the core architectural principles remain the same.

For beginners, get the basic concepts very clear. Everything is built from fundamental computer science concepts learned in college. If you're in web development, try managing incoming load by replicating your system. Containerize it, put it in Kubernetes for autoscaling experience.

Start small with whatever you're building now. Try to scale up every component, and you'll realize each faces different hurdles while replicating or sharding. There's no known path - you start walking, hit obstacles, and learn to fix them.
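
As one concrete example of those hurdles, here's an illustrative sketch of consistent hashing, a standard technique for routing keys to shards so that adding a node doesn't reshuffle every key (the shard names are invented for the example):

```python
import bisect
import hashlib

class ConsistentHashRouter:
    """Map keys to shards via a hash ring, so adding or removing a
    node only moves the keys adjacent to it - one of the hurdles you
    hit as soon as you start replicating and sharding."""

    def __init__(self, nodes: list, vnodes: int = 100):
        # Place several virtual points per node for even distribution.
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def route(self, key: str) -> str:
        # First point on the ring at or after the key's hash, wrapping around.
        idx = bisect.bisect(self.ring, (self._hash(key),))
        return self.ring[idx % len(self.ring)][1]

router = ConsistentHashRouter(["shard-a", "shard-b", "shard-c"])
for user in ["alice", "bob", "carol"]:
    print(user, "->", router.route(user))
```

A naive `hash(key) % num_shards` also works until you add a shard and every key suddenly maps somewhere new - exactly the kind of obstacle you only learn to fix by hitting it.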

🔥 ChaiNet's Hot Take: This fundamentals-first approach is critical. According to CNCF's 2024 survey, 93% of organizations use Kubernetes in production, but 67% struggle with basic concepts like service mesh and distributed tracing. Companies pay premium salaries for engineers who understand these foundations deeply.

Q: What skills should developers focus on learning in 2025 to stay relevant?

Ankit: Understand things conceptually rather than just finishing tasks. Ten years ago, we had WordPress developers - those jobs are largely gone now because AI can handle much of that work.

Develop systems knowledge. Understand how underlying systems work. Don't follow fads or look for quick money. There's no instant success in this field.

Don't just finish your tasks. Go deep and understand what's actually happening behind the scenes. This systems-level thinking differentiates you as AI handles more routine coding tasks.

🔥 ChaiNet's Hot Take: The shift toward systems thinking is accelerating rapidly. Job postings requiring "systems design" skills increased 89% year-over-year, while generic "web development" grew only 12%. Bootcamp graduates who learn systems concepts see 73% higher starting salaries.

Q: If you had to build a real-time AI application using Akka, what would it be?

Ankit: I'd build a RAG system - Retrieval-Augmented Generation. You restrict an LLM to answering from specific data sources instead of its general training data. So you say, "only answer questions from this database, ebook, or CSV file."

This is fundamental to most practical AI applications. Whether it's a chatbot for a salon that knows about services, or customer support that understands company policies, RAG systems form the backbone of real-world AI implementations.

Building a RAG system teaches you how AI applications actually work in production, combining distributed systems concepts with modern AI capabilities.
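
To show the moving parts, here's a hedged, minimal sketch of a RAG pipeline in plain Python. The `embed` function below is a crude stand-in for a real embedding model, and the final prompt would go to whatever LLM API you use - nothing here is a real library call.

```python
import math

def embed(text: str) -> list:
    """Stand-in for a real embedding model (e.g. a sentence transformer).
    Here: a crude bag-of-letters vector, just so the sketch runs."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# 1. Index: embed every document in the restricted knowledge base.
docs = [
    "Haircuts cost $30 and take 45 minutes.",
    "The salon is open Tuesday through Saturday, 9am to 6pm.",
]
index = [(doc, embed(doc)) for doc in docs]

# 2. Retrieve: rank documents by similarity to the question.
question = "When is the salon open?"
ranked = sorted(index, key=lambda d: cosine(embed(question), d[1]), reverse=True)
context = ranked[0][0]

# 3. Generate: constrain the LLM to the retrieved context.
prompt = f"Answer ONLY from this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # in a real system, this prompt goes to your LLM API
```

Production versions swap the toy pieces for real ones - an embedding model, a vector database, an LLM - but the index/retrieve/generate shape stays the same.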

🔥 ChaiNet's Hot Take: RAG represents a massive opportunity. The RAG market is projected to grow from $1.2 billion in 2024 to $28.8 billion by 2030. Companies like Pinecone achieved a $750 million valuation building vector databases for RAG systems. These skills are becoming as essential as understanding databases was in the 2000s.

Q: Final advice for professionals considering a transition from non-CS fields into software?

Ankit: Don't follow fads - they will fail. If you genuinely want to transition and it's your highest priority, go for it. But understand that AI will impact jobs across all fields, not just software.

Process-oriented roles that follow fixed sequences will be automated first, regardless of industry. Work on core concepts consistently and methodically. Don't be impatient or have unrealistic expectations.

A six-month course won't bridge the gap completely - it has to be done iteratively and consistently over time. The key is genuine passion combined with realistic expectations about the learning journey.


The Infrastructure Layer is the Future

Ankit Sood's journey from mechanical engineering to distributed systems architecture reveals a crucial insight: while AI transforms application development, the infrastructure layer becomes more valuable, not less.

His path wasn't about avoiding AI or fighting it - it was about positioning himself in the layer that powers AI applications. Reactive programming, stream processing, distributed systems architecture - these aren't just technical skills, they're the foundation that makes AI applications possible at scale.

The professionals who will thrive aren't those avoiding AI tools, but those building the systems that run AI applications. They understand that while AI can generate code, someone still needs to architect the distributed systems that process billions of AI requests, handle failures gracefully, and scale dynamically.

As the industry transforms, the question isn't whether you'll work with AI - it's whether you'll build the infrastructure that makes AI applications possible, or just use AI to build applications that run on infrastructure built by others.

Connect with Ankit Sood on LinkedIn for insights on distributed systems and career transitions into deep tech.

The next generation of valuable engineers won't be those who avoid AI, but those who build the systems that make AI applications possible at enterprise scale.

