SEO Tactics That Will Survive the AI Revolution

A deep dive with Mikul Saravan, founder of Vagu.ai, exploring how privacy-first AI is reshaping enterprise security, why offline AI matters, and what happens when you paste confidential data into ChatGPT.

August 21, 2025
14 min read
By Rachit Magon


Are we paying to be spied on? It's a question most people don't ask when they paste their financial data, legal documents, or company secrets into ChatGPT. But maybe they should.

While the world races to adopt AI tools, a quieter crisis is unfolding. Samsung employees accidentally leaked confidential chip designs through ChatGPT. McKinsey banned AI entirely. Companies are feeding their most sensitive data to servers they don't control, trusting promises they can't verify.

Meet Mikul Saravan, a 20-year-old Columbia student who's building something radically different: completely offline AI that never sends your data to the cloud. While his peers chase internships, he's competing against billion-dollar companies by solving the problem everyone else is ignoring.

Key Takeaways: Privacy in the Age of AI

The Privacy Problem Nobody Talks About: Data pasted into cloud AI tools is stored on vendor servers and can be absorbed into future training runs, even for paying customers.

Offline AI is Now Possible: Quantization shrinks large open-source models enough to run entirely on a laptop, so sensitive data never leaves the device.

Enterprise Reality Check: Compliance promises like "your data isn't used for training" rest on vendor trust, not independent verification, and regulators like the EU are tightening the rules.

Q: Are we really paying to be spied on?

Mikul: Yeah, we're essentially paying to get spied on. If you think about it, when you use OpenAI's chat products, or any of these other companies', especially the free consumer version, but even the paid consumer version, everything you send can be used to train their models.

One notable example: there's a company I was working with that had confidential information. They pasted it in and asked a question about it, and ChatGPT obviously didn't know, so it said it didn't know. Then literally a couple of months later they asked a similar question and ChatGPT knew about it, and no one else could have known, because that information was completely internal.

All this data is being stored, and it can be accessed. Think about it: you ask ChatGPT a question, and it knows who you are and everything you do. Its memory is pretty great, but where is that memory stored? On their servers, almost indirectly public. There was actually an incident where shared ChatGPT chats were indexed by Google as something anyone could search and find.

🔥 ChaiNet's Hot Take: The Facebook era taught us "if it's free, you're the product." But AI has broken that rule. Even paying customers are the product, their data feeding the very models they're paying to access.

Q: Walk us through what happens when I paste my bank statements into ChatGPT. What happens in the next 60 seconds?

Mikul: First, it gets sent to ChatGPT's servers, and ChatGPT looks at it and gives you the answer. Okay, cool. It saw all the information, you got the answer you wanted, you're happy. But now that bank information, probably including your social security number, is stored on their servers.

It's possibly going to be used for training within the next month or so, and there's a decent chance the social security number you just uploaded ends up baked into the model's billions of parameters. Ask it to generate a random social security number, and it could actually produce a real one. It could even be yours.

🔥 ChaiNet's Hot Take: Your sensitive data doesn't disappear after ChatGPT answers your question. It lives on servers, gets absorbed into training sets, and could theoretically be regenerated when someone asks for "random" examples.

Q: Besides Samsung, what are the biggest privacy disasters you've seen?

Mikul: Samsung is obviously the big one, but there are always smaller incidents here and there, like the earlier example I mentioned with confidential information. A lot of other companies, even McKinsey, have completely banned external AI; you can only use internal tools, which kind of sucks.

I know other companies, even tech companies with access to AI, that banned it completely because they're worried. Those bans are preemptive, so you don't hear anything about them. But among financial companies, especially smaller ones, some are cautious and others aren't; they'll just use it. You never know.

Most of the time these incidents won't even make the news. It could have already happened to you, or to me, and we'd never know; it's hard to tell. When a big company does it and catches it, it makes the news. But what about all the companies that send information out, never see it again, and never find out it got leaked?

🔥 ChaiNet's Hot Take: The scariest breaches are the ones you never hear about. While Samsung makes headlines, countless companies are leaking data through AI without ever knowing it happened.

Q: Don't these AI chatbots have separate memories for each user so memories don't get intertwined?

Mikul: Yeah, they obviously do. My memory is completely separate from yours, so my responses will actually be slightly different. Sometimes you'll get biased responses, which is something even ChatGPT can't fully fix. It's a trade-off.

We've been storing information in the cloud for years now, with plenty of privacy and security, so that's usually never the problem. Your memory is safe from me, and my memory is safe from you. It's the messages you send, and what's done with that information, that become the problem.

So 99% of the security concerns from the past 10 or 20 years have been solved. That's not the issue. The issue is specific to the AI, not the platform or the infrastructure: how does the AI handle your information? Memory is fine, because it's encrypted and stored completely separately.

🔥 ChaiNet's Hot Take: Traditional cloud security is solved. AI security is a different beast entirely. The danger isn't how data is stored, it's how AI processes and learns from every message you send.

Q: We have HIPAA for healthcare, attorney client privilege for legal. How did feeding confidential data to AI become so normalized?

Mikul: I think it's just the rapid growth. It's happened so fast you can't even stop it, and it's only going to get better and better. Things are going to completely change.

Companies are starting to worry about compliance, though. You mentioned HIPAA; those things are being factored in. If you get the enterprise version, then they say, at least they say, that your information isn't being used for training.

But all those compliance measures only cover how the data is stored and encrypted on the server side and the client side. That part is well understood. Beyond that, we're trusting OpenAI's word: hey, I'm on the enterprise version, I have HIPAA compliance, my data isn't supposed to be used for training. Okay. So we just trust it.

It's always like this when new technology arrives: people jump on and use it, and concerns fall back a little. I'm kind of guilty of this too; it feels like less of an issue when things are so cool. But the caution needs to come back. We're getting close, but we're not fully there. Data is still data, and it can still be trained on.

🔥 ChaiNet's Hot Take: Enterprise compliance is built on trust, not verification. When you sign up for HIPAA-compliant AI, you're trusting a company's word that your data stays private. There's no way to independently verify that promise.

Q: You built Vagu.ai as a completely offline AI assistant. How do you get AI capabilities without sending any data to the cloud?

Mikul: That's a good question, and honestly it's a very valid concern. AI takes huge data centers to train and multiple GPUs just to answer your questions. How is this even possible?

The reason it's possible now is that we can take these huge models and condense all that information into a small model through what we call quantization. There are a lot of open-source models; OpenAI actually released their first open-source models, and a bunch of other companies have been releasing them too. So we take those models and quantize them down to be small enough to run on your computer.

The reason it never reaches the cloud: if you run it on your computer, you can turn off your internet, be in the middle of nowhere or on an airplane, and still ask a question and get a natural-language response. That's the proof it's completely offline.

The way you do it is to make the models small enough to work on your device, so whatever computer you're on, you load the model and it runs locally using your own memory and your own compute. Nothing else is impacted, and nothing leaves your computer.

We're able to get pretty close to GPT-4-level performance even with a model that's two gigabytes, compared to a model that might be a terabyte. It's a lot smaller, but it still reaches most of that performance, because you're taking that whole knowledge base and condensing it down into a smaller space.
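The core idea behind quantization can be shown with a toy example. This is an illustrative sketch, not Vagu's actual pipeline: real systems use per-block scales and 4-bit formats, but the principle is the same, mapping float weights onto a small integer grid plus a scale factor.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantization: map float weights onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0   # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# A toy "layer" of float32 weights (4 bytes each).
w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32.
print(w.nbytes, "->", q.nbytes)

# Round-to-nearest means the worst-case error is half a quantization step.
err = np.abs(dequantize(q, scale) - w).max()
print(err < scale)
```

Shrinking from 32 bits to 8 (or 4) bits per weight is where most of the "terabyte down to gigabytes" compression comes from, at the cost of the small per-weight rounding error shown above.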

🔥 ChaiNet's Hot Take: Offline AI proves the cloud isn't mandatory, it's a business model choice. Quantization lets you compress trillion-parameter knowledge into laptop-sized packages without sacrificing most capabilities.

Q: If offline AI is possible, what's the catch? Why isn't everyone doing it?

Mikul: The main issue is performance. I say it's similar to GPT-4-level performance, but it's not the same as GPT-4. It won't hit that same level of accuracy and data handling. It can respond really well, answer questions, analyze things, just not to that same level.

So the biggest thing is always performance. You just can't get the full level of performance out of a smaller model.
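A back-of-envelope calculation shows why local models have to be smaller in the first place. The 7-billion-parameter size below is our own illustrative choice (a common local-model scale, not a figure from the interview); the formula is simply parameters times bits per weight:

```python
GB = 1024 ** 3

def weight_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage: params * bits / 8, in gibibytes."""
    return n_params * bits_per_weight / 8 / GB

n = 7e9  # hypothetical 7B-parameter model

full     = weight_gb(n, 32)  # float32: ~26 GB, beyond most laptops' RAM
half     = weight_gb(n, 16)  # float16: ~13 GB
four_bit = weight_gb(n, 4)   # 4-bit quantized: ~3 GB, laptop-friendly

print(f"{full:.1f} GB -> {half:.1f} GB -> {four_bit:.1f} GB")
```

The arithmetic makes the trade-off concrete: to fit in a few gigabytes, you either cut the parameter count or the bits per weight, and both cost some accuracy relative to a frontier cloud model.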

🔥 ChaiNet's Hot Take: The privacy-versus-performance trade-off is real but shrinking. Today's offline models match yesterday's cloud models. As quantization improves, the gap narrows every month.

Q: Let's say I'm a lawyer working on a confidential case. How does this work with Vagu compared to ChatGPT?

Mikul: You basically keep everything you do inside the offline system. Whatever you send goes to the local model, and all that information stays local, because it never makes a call to any online system.

It all stays completely local, so it stays private. Nothing leaves your computer, and if it doesn't leave your computer, you know for sure it's safe, because how else would anyone access the information?

We can analyze data, text, case information, whatever it may be, completely on your device, because the model thinks and does everything locally without anything ever leaving it.

Your use case, the way you use it, doesn't change. It's like sending a message to the cloud, except you're sending it locally. And after the first message, it's sometimes actually faster, because there's no cloud dependency; it just runs locally.

So for you, nothing changes. Maybe the response is slightly different, but nothing else changes. The back end is obviously where the magic happens: no cloud means full privacy, with much the same workflow.
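The "turn off the internet and it still works" proof can be sketched in code. This is a simulation under stated assumptions: `local_answer` is a stand-in for a local model call (pure computation, no I/O), and we fake airplane mode by replacing the module-level `socket.socket`, which Python's networking stack goes through when opening connections.

```python
import socket
import urllib.request

class NetworkBlocked(RuntimeError):
    pass

def _no_network(*args, **kwargs):
    raise NetworkBlocked("outbound connection attempted")

def local_answer(prompt: str) -> str:
    # Stand-in for local inference: computes entirely in-process.
    return f"local response to: {prompt}"

# Simulate airplane mode: any attempt to open a socket now fails.
socket.socket = _no_network  # type: ignore[assignment]

# A cloud call can no longer succeed...
try:
    urllib.request.urlopen("https://example.com", timeout=2)
    reached_cloud = True
except Exception:
    reached_cloud = False

# ...but the "local model" is completely unaffected.
answer = local_answer("summarize this contract")
print(reached_cloud, "|", answer)
```

The same idea works as a real-world check: disable Wi-Fi, run your offline assistant, and confirm it still answers; any tool that stops working was making cloud calls.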

🔥 ChaiNet's Hot Take: The lawyer's use case exposes the absurdity of current AI adoption. Attorney-client privilege has existed for centuries, but we casually violate it by sending case details to third-party servers.

Q: Are there differences in efficiency or correctness with offline models?

Mikul: I think it depends on the workload. If you're querying document information or asking basic questions, you're not going to see much of a difference.

But for complex tasks, if you're trying to do a lot of coding, you will definitely notice the difference.

🔥 ChaiNet's Hot Take: Not all AI tasks are created equal. Document analysis and basic queries work great offline. Complex coding or reasoning tasks still benefit from cloud scale models. Know which is which.

Q: You're competing against companies with billions in funding who give away AI cheaply or free. OpenAI just launched a $5 plan for India. How do you compete?

Mikul: You're paying for the ability to run things privately and locally, without the usual caveats: running local models is extremely difficult for someone who isn't tech-savvy. There's a lot of technical knowledge you need just to set them up.

When they're set up, you get a huge benefit, but most people don't have the time and expertise to do it. What we do is simple: we've done months of research, benchmarking and analyzing, to figure out the best model and get to "here's a good model we think you should be using, completely offline." All you have to do is click download and use it. Nothing changes for you; we do all the work ahead of time. That's the main benefit.

Obviously, competing with these companies is going to be hard. But we're not going to stick with just local models; we're building much more beyond that.

🔥 ChaiNet's Hot Take: The business model isn't selling AI, it's selling privacy and simplicity. While OpenAI competes on price, Vagu competes on trust. Different games, different winners.

Q: As a founder, should you care about AI privacy at this stage, or is it just paranoia?

Mikul: I would say there are stages. Obviously you want to move fast on anything prototype-wise, but when you're launching to the public, that's when you have to care about privacy.

In the very early stages, you can let it go. When I built the very early version, I didn't really care about privacy, but before I launched to the public, that's when I cared a lot and made sure it was secure and private.

So yes, founders do have to care about it. Think about the architecture. If you're planning how to build the product or the company, privacy and security are going to be a concern no matter what. You can factor them in as a later step, but they have to be a step in the plan.

It's not always easy to code and build products, but it's still relatively easy to care about privacy and security. There are companies that focus on getting you compliance, getting you the security level you need, or building the local model. It's all available now, so it's something you can work toward before going public, and you should care about it.

🔥 ChaiNet's Hot Take: Startups follow a hockey stick growth curve. If you don't bake privacy into the foundation, you'll face a painful retrofit when you suddenly have users and compliance requirements.

Q: The EU AI Act can fine companies up to 35 million euros for AI failures. Is privacy first AI becoming mandatory for enterprises?

Mikul: Yeah, I would say so. It's eventually going to become mandatory. Slowly: even with iPhones, Apple cares a lot about this, and the EU has pushed data privacy rules for a while. It's definitely moving that way, and it will definitely become a requirement.

As for companies that release AI, I'll give most of them credit: they do figure it out and get the compliance. On paper, at the level we can see, they're doing what they should to manage privacy with those concerns in mind.

There are only very specific groups, certain lawyers, financial people, healthcare, who care about privacy at the next level, where offline becomes even more important than compliance promises: trusted, but verified.

Overall, there are some concerns as of now, but I think it will become mandatory, especially with the EU moving this way. Other countries will follow.

🔥 ChaiNet's Hot Take: Regulatory pressure moves slower than technology but catches up eventually. The EU leads, others follow. Companies building for global markets should assume privacy requirements will only get stricter.

Q: You're 20 years old, studying at Columbia, and running a company. How do you manage both academics and being a startup CEO?

Mikul: It's tricky, and people might not like what I say, but it comes down to what you care about more. At first, I obviously cared a lot about academics. I grinded through and got all A's for the first couple of years, and it worked out.

At the same time, I had the entrepreneur mindset. I was talking to people, figuring out what products to build; I built a few things with my co-founder in the past, with academics first. But then there was a slight priority shift. Last semester my academics went down a little. I'm still one of the top students there, which is great, but you can't perfectly manage both, and that's why people drop out or take gap years.

So I will say it's pretty difficult to manage both. It comes down to what you care about. My priorities shifted, and in a way I'm glad they did. I'm much happier doing what I'm doing now than academics.

But it is still very much doable. As of now, I'm going back to school in a week or so. We'll see if I actually stay; things are growing much faster than we expected. So stay tuned for the journey. You might see me out of school a few weeks from now.

But as of now I'm going back, and I still care about academics. It's just time management: how can you spend enough time to learn what you need in school, spend the rest on your startup, and still have fun? It's a balance you build over time.

You figure it out over time; it falls into place, and I think it's very much doable. At the end of the day your priorities will shift, and they should. For now it's just time management: waking up, doing school, then working on the company, 24/7, spending as many hours as you need.

🔥 ChaiNet's Hot Take: The brutal truth about balancing school and startups: you can't optimize both simultaneously. Something gives. The question isn't whether to sacrifice, it's what to sacrifice and when.

Q: You've had multiple failed startups, a dream journal app and a distracted driving detection platform. What have failures taught you?

Mikul: It comes down to a lot of little things: how you move forward, how you build, how you think about products. It's not just the ideas themselves; it's the mindset you build over time by experimenting and building new products.

You learn a lot, not just from a product sense but from a business perspective as well. The startups failed for various reasons: we weren't ready, it wasn't something we were passionate enough about, people weren't ready.

You learn these little things and keep improving on them as you go, at the product level, the business level, the personal level. If we hadn't done those things, who knows where we'd be right now. Even now we're iterating and changing really fast, but the core stays the same: privacy and security.

In the past, the core was different; the dream journal was about understanding yourself. The core should stay the same; it's how you think about everything else that changes. There are a lot of learnings, and I think they helped shape where we are.

🔥 ChaiNet's Hot Take: Failed startups aren't wasted time, they're compressed learning. Each failure teaches you what doesn't work, narrows the field, and builds the pattern recognition you need to spot what will work.

Q: If you have one piece of advice to founders using AI tools right now who haven't thought about privacy, what should they do tomorrow morning?

Mikul: It sounds kind of counterproductive, but go to ChatGPT and ask it about privacy. As much as those questions might themselves be used for training, the information it gives back will be factual. That's how I learned about it. So have a conversation with ChatGPT and understand privacy; it'll pull in real sources and answer everything you need to know.

And yes, it answers the questions you ask. As counterproductive as it sounds, just talk to the AI. Ask it: what does privacy mean? How do I care about privacy? What metrics can I use? What products are privacy-centric, and what's in their company documentation showing that privacy is important? How could I implement compliance, using Vanta or whatever it may be?

Just ask all the burning privacy questions you have. The fact that you're even doing that is a great step in the right direction. And ask from more than a company perspective. If you're building an AI tool, I'll be honest, most of the time privacy won't matter as much at first, but when it does: how do I reach that privacy level? Which enterprises do I have to work with, and which should I avoid?

If you're using AI on your own: which products are privacy-focused and which aren't? Learn as much as you can, and you'll come away with information you can actually use.

🔥 ChaiNet's Hot Take: The irony of using AI to learn about AI privacy isn't lost on anyone, but it works. ChatGPT will accurately explain its own privacy risks if you ask the right questions.

Q: What do you think about voice and AI? Is that the future or just a fad?

Mikul: Voice is much more natural; we're having a conversation here, not typing. Right now, chatbots are locked into a little window you have to type into. We saw a glimpse of the alternative with our Vagu desktop assistant, which follows you around, knows what you're seeing, and responds accordingly. We're building toward that.

But voice makes it much easier to talk and get things done. And right now there's no operating system, if you will, for AI. There's no unified way of interacting; it's all chatbots, which isn't very intuitive. Sure, AI agents can run in the background, but you don't even know what's happening.

How do we get to that future state? I want a Jarvis-like assistant I can just talk to: it tells me everything I need to know, acts for me, books my Uber. Smart homes and smart assistants aren't there yet, and voice is going to be the future, because it's just so much easier to talk and get things done than to type at a screen or click buttons to orchestrate something like sending an email.

That's why I think the future is headed there. There's no operating system for AI that connects all your tools together. Privacy is really important here, because you want everything you connect to be private and secure, with your data never sold or stolen. But that, I'd say, is where the future is.

We're not there just yet. As a company, we're building toward that voice-first goal with privacy in mind; "24/7 personal assistant" is our tagline. We're building a desktop assistant, and next a set of voice agents, with privacy, security, and compliance in mind, so that even the professionals who care most about privacy can still get the most out of AI just by talking to it.

They could actually be ahead of the curve: using voice and AI to get tasks done, sending a hundred emails, scheduling meetings, pulling up information for an upcoming meeting far faster than anyone else could. I think that's going to be huge, and it's coming quite soon, actually.

The operating system of AI is going to be voice and it's also going to be privacy first. So stay tuned.

🔥 ChaiNet's Hot Take: Chat interfaces are temporary scaffolding, not the final form. Voice is the natural interface humans have used for millennia. The race is on to build the AI operating system where voice is primary, not an afterthought.

Final Thoughts: The Privacy Revolution Nobody's Talking About

The AI revolution is here, but so is the privacy crisis. While everyone celebrates productivity gains, a quieter question looms: at what cost?

Mikul's journey from failed startups to building privacy first AI shows that the future doesn't have to be a choice between capability and privacy. Offline models are closing the performance gap. Quantization is making local AI practical. Regulatory pressure is making privacy mandatory.

For founders, the message is clear: privacy isn't paranoia, it's prudence. Build it into your architecture from day one because retrofitting privacy into a hockey stick growth curve is painful and expensive.

For enterprises, the EU AI Act is just the beginning. Compliance will get stricter, fines will get larger, and the companies that treated privacy as optional will pay the price.

For everyone else, the next time you paste sensitive data into ChatGPT, ask yourself: would I be comfortable with this becoming public? Because in subtle ways you can't see or verify, it might already be happening.

Q: Is there anything else you want our listeners to know?

Mikul: Thank you for having me. It's been great to talk about the future of AI, privacy, and voice.

I would say, no matter what kind of person you are, keep doing what you're doing. Care about privacy and security. And, it sounds random and unconnected, but because of AI, you don't want to be continuously talking to AI. Talk to your friends, build those meaningful connections, and keep at what you do.

Stay focused, stay dedicated, and keep the things you value most in mind. What is the core that will stay the same no matter what? Keep doing that and it's going to be great. We'd love to connect with you all and see what you're doing too. It's going to be a great journey for all of us.

Final words: The future of AI doesn't have to involve trading privacy for productivity. Offline AI proves we can have both. The question isn't whether privacy matters, but whether we'll demand it before the next major breach makes headlines.
