The $30 Deep Fake Crisis: How a 20-Year-Old Is Building the Trust Layer of the Internet

A deep dive with Tarini Sai Padmanabhumi, 20-year-old founder of Axory AI, exploring how deepfakes cost $25 million in a single scam, why detection is harder than generation, and what happens when anyone can create fake identities for less than the price of pizza

November 17, 2025
15 min read
By Rachit Magon

Thirty dollars. That's what it costs on the dark web to create a deepfake of you. Your face. Your voice. Your entire identity. Less than the price of pizza.

January last year, an employee at a Hong Kong engineering firm joined what seemed like a normal video call. The CFO was there. Several colleagues. Everyone looking normal, sounding normal. The employee proceeded to make 15 wire transfers totaling $25 million.

Every single person on that call was fake. Not one person. Not two. The entire meeting. The CFO. All the colleagues. Everyone.

Welcome to the synthetic media crisis nobody saw coming.

Today's guest is Tarini Sai Padmanabhumi. She's 20 years old. She's the founder and CEO of Axory AI, building what she calls the trust layer of the internet. She's been named one of India's top 20 under 20. She's a two-time TEDx speaker. And she's dedicating her life to fighting a threat most of us don't even know exists yet.

While deepfake generation costs $30, banks are spending millions on detection and still failing. While voice cloning takes less than $20 a month, enterprises are losing tens of millions to synthetic identity fraud. And while 99% lab accuracy sounds impressive, 60% real world accuracy is basically a coin flip.

This isn't about celebrity deepfakes or political misinformation anymore. This is about AI-generated Aadhaar cards bypassing KYC. This is about elderly parents receiving video calls from their children begging for emergency money. This is about the internet becoming so synthetic that we'll need a safe word just to verify our own family members are real.

Key Takeaways: The Synthetic Crisis Already Here

The Economics Are Broken: Generating a deepfake costs $30 on the dark web and voice cloning runs under $20 a month, while banks spend millions on detection and still fail.

The Attack Vectors: Executive impersonation on video calls, cloned voices of family members begging for emergency money, and AI-generated identity documents bypassing KYC.

The Detection Problem: 99% accuracy in the lab collapses to 60% in the real world, which on a binary real-or-fake call is barely better than a coin flip.

Q: You're 20 and running a deepfake detection company. How did you realize this was the problem you needed to solve?

Tarini: For some context, I've been in the AI space since long before the ChatGPT boom. It started when I was around sixth or seventh grade, when it was still called machine learning. My parents were in this space, so I naturally had an inclination from a very young age.

What really pushed me to build in this specific space was actually about a year ago. I was on a solo trip to this one place, and the real reason for this trip was for me to self-discover or something like that. But the entrepreneur inside me was like, I have to ask everybody this one question, which is what is the biggest problem that they're facing?

I expected normal answers from most people, but I got a very interesting answer from one of my closest friends there. She mentioned how she was scammed by a deepfake video. This was in Vietnam, by the way. She lost several million Vietnamese dong, which is thousands of Indian rupees.

When she told me this, I was shocked, because I had assumed somebody my age would be able to tell what's real from what's fake. But the scam was that convincing.

Coincidentally, around the same time I was looking at opportunities to build in the deep tech space, and this came along because I realized there was a problem here. So I started talking to people in this space, cybersecurity experts, companies wanting to build in it, and I began my R&D work.

I started researching in this space, seeing what is the scope that can be done as a solution here. And I realized the kind of financial implications people are having, by people I mean enterprises. So it's not just individuals like you and I, but enterprises are wanting the solution ASAP.

That's when I started, and of course started with deepfakes alone. Now we're doing anything AI-generated content detection. We're an R&D intensive team at the core, and that's what we bring to the table to solve this issue at hand.

🔥 ChaiNet's Hot Take: Most founders stumble into problems through personal pain. Tarini went looking for problems by asking everyone she met what they were struggling with. That's not accident, that's methodology. The entrepreneur who actively hunts for pain points finds opportunities others miss.

Q: As a layman, how do I prevent a deepfake of me from being created?

Tarini: Fortunately, right now, when we're talking about the big accessible tools, there are guardrails. You cannot create deepfakes without the person's consent. But there are open source tools which people are using primarily for this purpose.

And the unfortunate thing is, if your picture is anywhere on the web, then they can make one. It's that easy. We can't change that, but one way we can keep ourselves alert to these deepfake videos, or rather understand what is and isn't a deepfake, is to generally keep up to date with the latest state-of-the-art generation models.

How do they look? There are some signature elements an individual user can use to identify them, although the new models become better every day, and it gets harder to tell.

I think that becomes a sanity check for you before you verify whether any content is real or fake. But our goal is to make this tool accessible for everyone. Although we're starting with enterprises right now, we're working with companies all across media, all across cybersecurity, identity verification, because today you can use a deepfake to bypass KYC, and that's a very big problem.

Even things like online listings, all of those, we cover all of this. The idea is to make it accessible to the public and that's what I hope I'm able to deliver very soon.

🔥 ChaiNet's Hot Take: If your photo is on the internet, you're already vulnerable. LinkedIn profile pic? Instagram selfie? Facebook photos from 2015? All training data for someone's deepfake model. Prevention isn't possible anymore. Detection is the only defense.

Q: Walk us through the $25 million Hong Kong case. How sophisticated was that attack?

Tarini: This was a Hong Kong-based firm. An employee succumbed to a deepfake attack. Someone impersonated one of the CXOs and essentially scammed the employee into transferring about $25 million across 15 wire transfers.

This person was completely scammed by the fake voice of someone who seemed like their CFO. And he believed it. You listen to your CFO, right?

This was literally January of last year. Since then, the sophistication of deepfakes has only increased, tenfold. Especially when it comes to voice, the models have become much, much better.

State of the art models in voice, maybe one year ago you would say there are some improvements that can happen. But today you're seeing these basic models from across the world that people are just building out of their dorms. It's that easy today for so many people.

The state of the art has become much better, so the potential harm is much bigger. Today you have to be much more cautious about what you're watching and seeing, because the level of realism is much higher.

🔥 ChaiNet's Hot Take: Fifteen wire transfers. Fifteen separate moments where the employee could have verified. But when your CFO and colleagues are all on video saying it's urgent, human psychology overrides security protocols. The attack wasn't just technical, it was social engineering amplified by perfect synthetic media.

Q: Why are the economics so broken? Creating deepfakes costs $30, but banks spend millions on detection.

Tarini: That's an interesting question, but I think it kind of goes to show the level of open source we have when it comes to generative AI also today, as compared to maybe deepfake detection.

The reason deepfake detection as a whole is not so much of an open source thing yet is because it's a constantly evolving space, as is generation. But then the problem we're trying to solve with detection is we're trying to solve for detection of all these models, as compared to generation just providing something for you to tinker around with.

If we're able to detect, our idea is we want to be able to detect all, or at least most. I think that is where there is a big difference.

It's a research intensive space because of the amount of refreshes you have to put onto your entire approach towards detecting. But I would still say today we're at a level where we're able to detect a good amount of these. It's just a matter of time until the next one comes. So you have to just gear up for that.

🔥 ChaiNet's Hot Take: Generators win by fooling you once. Detectors lose if they miss anything. One side plays offense with open source tools. The other plays defense with constant research updates. The asymmetry is brutal, and it explains why a $30 attack can bypass a million dollar defense.
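The "fool you once" asymmetry can be put in rough numbers. A toy sketch, where the 95% catch rate and the attempt counts are illustrative assumptions rather than figures from the interview:

```python
# Toy model: an attacker who can cheaply retry only needs ONE fake to slip
# through, so even a strong detector is worn down by repeated $30 attempts.

def attacker_success_prob(detect_rate: float, attempts: int) -> float:
    """Probability that at least one of `attempts` fakes evades detection,
    assuming each attempt is caught independently with `detect_rate`."""
    return 1 - detect_rate ** attempts

# A detector that catches 95% of fakes stops a single attempt almost always...
print(round(attacker_success_prob(0.95, 1), 2))   # 0.05

# ...but 50 attempts (about $1,500 of dark-web generation) almost surely win.
print(round(attacker_success_prob(0.95, 50), 2))  # 0.92
```

The defender has to hold the line on every attempt; the attacker only has to win once. That is the arithmetic behind a $30 attack beating a million dollar defense.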

Q: What should I actually be scared of? What's the most likely way a deepfake attack would hit me personally?

Tarini: The one I mentioned, scams where somebody calls as a relative or close friend, is very common. The other common thing we see today is people making deepfakes of famous celebrities, basically pushing viewers to put money into scam portfolios.

There are a lot of fakes on the internet today with Narendra Modi saying things like you should invest in XYZ. These are the most prevalent types of deepfake scams that we see today.

On an indirect level, KYC is a big thing. There's also much bigger things when it comes to enterprises. So all of these are use cases. But when we're talking about individuals like you and I, I think these are the biggest problems.

For example, if you get a call from someone pretending to be your mother, have systems in place, like a safe word that only you and your mother know. For any close family member, it's generally good to have such a word in place so that you can recognize and evade a scam targeting you.

These are general safety measures to prevent yourself from being the victim of such a deepfake.

🔥 ChaiNet's Hot Take: Banks have million dollar security budgets. Your best defense? A safe word only you and your mom know. Sometimes the simplest solutions work better than sophisticated ones. But the fact we need safe words to verify our own family members shows how broken digital trust has become.

Q: Was building in India a conscious decision? Most AI startups are in the US or UK.

Tarini: We started off in India. Our first customers were in India. I was 20 years old when I started it, which was not too long ago, just a couple of months ago. At that point I started in India thinking that would be the best way to at least get going. I just wanted to get my solution out into the market. That was the most important thing.

But as a company, our aim of course is global. We want to cater to the global market, and at the end of the day, in the way we look at providing a solution to the world, we're a global company.

Our entire team is based in India, but at the same time our approach is global and that's how we look at it.

🔥 ChaiNet's Hot Take: Build in India, build for the world. That's the new playbook. Lower costs, access to technical talent, and a massive domestic market to validate on before going global. India isn't just outsourcing anymore, it's where frontier tech gets built.

Q: Aadhaar processes over 10 billion eKYC transactions. What do you think of India as an opportunity and also as a target for scams?

Tarini: India is a very huge market. Clearly, we have the biggest population in the world. There is no doubt about the volume of KYC that happens in India every year.

But the problem, if we're talking about it as a whole, is universal. You see these scams happening across the world. But India is definitely a big use case, and you see these scams happening every now and then here. There is a big problem to solve for.

So as a market, India is a great place to build for when it comes to this because you have such immense volumes of these scams to be able to solve for. And to be able to capture that market is obviously a big deal.

So all of this said and done, I would say that while the problem is universal, I think India is a great place to be building for.

🔥 ChaiNet's Hot Take: Ten billion eKYC transactions a year means ten billion attack surfaces. India's digital infrastructure scale makes it both the biggest opportunity and the biggest target. Solve for India's volume and complexity, and you've solved for most of the world.

Q: Are there vulnerabilities in banking and BFSI that none of us know about yet?

Tarini: As a corollary to the deepfake thing as a whole, we extended our purview to AI-generated content detection. Today you can generate Aadhaar cards using AI, you can generate identity cards using AI, and that's become a big form of scam today.

Very huge. People don't realize this. So deepfakes, I think people are getting to know. AI-generated content related scams are something that people don't know about yet, and that is a huge thing that is going to disrupt this industry, is already disrupting the industry.

So that is what we're solving for as well. When we started off with deepfakes it was just for that, and once I got deep into the space, I realized that this problem is much bigger and naturally we extended our view. This is something that I think a lot of people don't know about, so I would love to bring awareness to that.

🔥 ChaiNet's Hot Take: Everyone's worried about deepfake videos. Nobody's talking about AI-generated identity documents. You can create a fake Aadhaar card, a fake passport, a fake driver's license, all synthetic but passing visual inspection. The KYC fraud wave hasn't hit yet, but it's coming.

Q: There are deepfake detectors with 99% accuracy in the lab and 60% in the real world. Why is it so difficult?

Tarini: Sixty percent accuracy is really not a product at the end of the day. You're not really giving any solution. It's just a test that you tried out, a project.

The way we are solving it differently, I have to emphasize on this, it's such a research intensive space. And I've been in this space for so long now, I know, as compared to someone who's just entering the space, I know a lot more than they would. I have the insights.

My team has been wired to solve these problems. We have certain processes in place, systems and processes in place to be able to tackle the latest state of the art model.

At the core of it, if you look at it from first principles, the building blocks are just the systems and processes you have in place. On a high level, it's research and development, heavily doubling down on research. Having sleepless nights just trying to hit that accuracy benchmark.

But outside of that, at the core of it, it's just you having those systems and processes in place.

🔥 ChaiNet's Hot Take: Lab conditions are controlled. Real world is chaos. Different lighting, different compression, different angles, different generation models. Sixty percent accuracy in production means your detector is slightly better than random chance. That's not a product, that's a science experiment.
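One way to see how a 99% benchmark collapses in production is to model real-world traffic as a mix of generators the detector was trained against and newer ones it has never seen. A toy sketch, where the accuracy values and the 80/20 mix are illustrative assumptions, not Axory figures:

```python
# Toy model: blended accuracy when a share of content comes from unseen
# generation models, on which the detector performs near chance.

def blended_accuracy(acc_known: float, acc_unseen: float, frac_unseen: float) -> float:
    """Overall accuracy as a weighted mix of in-distribution and
    out-of-distribution performance."""
    return (1 - frac_unseen) * acc_known + frac_unseen * acc_unseen

# Lab benchmark: every sample comes from generators the model has seen.
print(blended_accuracy(0.99, 0.52, 0.0))            # 0.99, the headline number

# Production: 80% of fakes come from newer models where the detector barely
# beats a coin flip, dragging the overall number down toward 60%.
print(round(blended_accuracy(0.99, 0.52, 0.8), 3))  # 0.614
```

Compression, lighting, and camera noise push the out-of-distribution share even higher, which is why lab leaderboards say so little about production performance.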

Q: You left placements to build this. What should college students do if they want to follow your path?

Tarini: Firstly, this is a very risky thing I did because, at least in the conventional sense of risk, I left my placements. I officially signed out of placements, which is why my college allowed me to take my last year off for pursuing my startup.

So you have to take that decision because I come from an engineering college, so I can speak for it. Placements are the biggest thing and your three years of college lead up to placements for most people.

You have to get rid of that placement dream. You have to know that that's not your path first. If you're clear with that, I think the rest will naturally follow. But that's a big roadblock most people have. They say, I want to be an entrepreneur, I want to do this, I want to do that.

By now we all know it's not a glamorous world, it's obviously not. But that question has to be clear. Am I ready to completely cut off placements? Am I ready to get off the job path and completely devote my time to building this vision of mine? If that question is clear, I think everything else is clear.

For me that question was clear and I went for it. Of course you have to satisfy your other stakeholders. Especially if you're under 18, you have to talk to your parents and get them on board. Honestly, in India I would say even a lot of kids over 18 have to. There are obligations.

But if you're able to, and you show conviction and you show proof, I think most things work out, is my opinion. This is what I can tell you out of my experience.

But the biggest question is always, can I remove that big thing I've dreamt of for these many years and just devote all my time into this? And then it works out if it does.

🔥 ChaiNet's Hot Take: Indian engineering colleges are placement factories. Three years building toward that one outcome. Breaking free requires killing that dream completely, not hedging. Tarini didn't keep placement as backup. She signed out officially. Half commitment means half results.

Q: When did you first think you didn't want a regular career path?

Tarini: I had this from a very young age. When I was young, it started with being the leader. I always liked creating things, building things out of nothing. That gave me such a sense of accomplishment, it gave me a drive. My biggest thing in life is being driven toward something, and that gives me so much dopamine.

From a young age I knew that was my path. But once I got into university and started doing more projects, I got insight into how other paths would have worked had I chosen them, and it became clearer to me why I would do this rather than anything else.

And it was always about large impact for me. I knew I wanted to create something that at least a billion users will use. That is my goal. Large-scale impact has always been part of my journey. So that's where this whole thing sort of took shape.

🔥 ChaiNet's Hot Take: Some people discover entrepreneurship in college. Tarini always knew. Creating things from nothing gave her dopamine hits that normal careers couldn't match. The billion user goal isn't delusion at 20, it's clarity. When you know the destination, the path becomes obvious.

Q: There's this concept called the liar's dividend, where politicians claim real videos are deepfakes. Does Axory help with that?

Tarini: That's a great question. In fact, one of the organizations we're working with works in exactly this space: verifying whether a given piece of content, especially from a famous politician or an influential figure in the country, is real or not.

It's a problem in many ways because you're making something that is true out to be untrue, for your benefit. So yeah, this is obviously a very big problem. The fact checkers and media organizations we work with, we work with them on exactly these lines.

And as a company, that's where the product gets a use case. And also as a person I would say, it's important to be able to detect what is real and what's not because of this kind of issue, more so.

🔥 ChaiNet's Hot Take: Deepfake technology creates two problems. Synthetic content passing as real. And real content dismissed as synthetic. The liar's dividend means every politician caught on camera can now claim deepfake. Truth becomes unfalsifiable. That's not just a tech problem, it's an epistemological crisis.

Q: Do you think big companies like Meta or X are already solving for this?

Tarini: For sure. Every technology that comes about, you have the big companies, small companies, everyone solving for it. If not, they would just buy the organization solving for it.

Definitely there is scope. It's not that they're not solving for it. But this is something that is going to take up resources, time, and a lot of effort. So it depends on what each organization views its goals to be.

Because for some of them it's detrimental to be solving this, because they are actually building the generative models. Some companies would naturally not want to solve for it. So it's a space you can't talk about unless you completely know the dynamics.

🔥 ChaiNet's Hot Take: Meta built the generation tools. Why would they aggressively build detection? Incentives matter. The companies creating synthetic media have conflicting interests with those detecting it. That's why independent detection companies have a moat big tech can't easily replicate.

Q: Where do opportunities lie for entrepreneurs building in AI over the next 3 to 5 years?

Tarini: On the generative side, there's definitely a lot of opportunities. It's where the boom is right now. I've seen a lot of Indian startups as well build their own models, their own voice models. These are very cool folks that I met personally.

So technologically speaking, the inner technologist in me, it's nice to see that we have come from a stage where it was just a bunch of artificial neural networks to where it is now.

But at the same time, I think there's a sense of responsibility that all of us have to create. That is where my purview comes from, which is at the end of the day, if this is causing real financial harm to someone or harm to organizations, that's where the use case of detection comes in.

I think there's a lot of scope everywhere today with AI.

🔥 ChaiNet's Hot Take: Generation gets the hype and the funding. Detection gets the responsibility and the necessity. Both are opportunities, but they're fundamentally different plays. One is about enabling creation, the other about preserving trust. Choose based on what you want to build, not just what's hot.

Final Thoughts: The Trust Layer We Need

Tarini's journey from seventh grade machine learning to building deepfake detection at 20 isn't about precocious talent. It's about recognizing an existential threat before everyone else does.

The synthetic media crisis is already here. Thirty dollars to clone someone's identity. Twenty dollars a month for voice cloning. Twenty-five million dollars lost in a single video call scam. AI-generated Aadhaar cards bypassing KYC systems. The internet becoming so synthetic we need safe words to verify our own family.

This isn't a future problem. This is happening now.

For builders, the lesson is clear. While everyone races to generate better synthetic media, almost nobody is building the trust layer that makes digital identity reliable again. That asymmetry is the opportunity.

Generation is open source and accessible. Detection is research intensive and constantly evolving. One side plays offense, the other plays defense. The economics are broken, but that's exactly where value gets created.

India processes 10 billion eKYC transactions a year. Each one is an attack surface. Each one is an opportunity to build detection systems at scale. Solve for India's volume and complexity, and you've built for the world.

The companies that win won't just detect deepfakes better. They'll rebuild trust in digital identity itself. That's not just a product. That's infrastructure for the internet's next layer.

And it's being built by a 20-year-old who asked everyone she met what problems they were facing, found one nobody else was solving, and decided to dedicate her life to fixing it.

Q: How can people connect with you and learn more about Axory AI?

Tarini: If there are any companies interested in the space of identity verification, or any companies that have any form of listing where fake listings can happen, be it food delivery companies, companies in cybersecurity, or media companies, do hit me up at tarini@axory.ai.

Or if you're a person just wanting some advice, how do I upskill in AI, how do I get into this race, or you want to join our vision at Axory, do reach out at tarini@axory.ai, on my LinkedIn, Tarini Padmanabhumi, or on X, @tarini_axory.

Final words: The deepfake threat isn't something in the future. It's already here. Thirty dollars can buy your identity. Twenty-five million can disappear in a video call. And the internet is becoming so synthetic that we need safe words to verify our own family members are real. This isn't science fiction, it's Tuesday. Thankfully, there are people like Tarini building defenses before most people even understand the attack. The trust layer of the internet is being built right now. And it's being built by founders who saw the problem before it made headlines. Stay curious. Stay skeptical. And verify that video call before you wire the money.