What we learned using AI in Marketing

  • Shikha Pakhide
  • May 30, 2025
  • AI

Let’s cut to the chase - everyone’s been talking about AI, but we’ve deliberately stayed quiet until now. Not because we weren’t using it, but because we wanted to speak from experience, not speculation.

While headlines screamed about AI transforming everything and competitors rushed to publish hot takes, we were quietly experimenting behind the scenes. After more than two decades in marketing, we’ve seen enough hype cycles to know better than to jump on bandwagons before testing them ourselves. We decided to wait until we had something meaningful to share - actual results, real failures, and practical insights that might help others avoid the same pitfalls.

In our blogs, we’ve always preferred sharing what truly worked (and what didn’t) rather than chasing trending topics. So instead of theoretical possibilities, here are a few lessons from our sprints, stumbles, and small wins with AI as we’ve integrated it into our marketing operations.

Our messy adoption journey

Early excitement

When we launched the company in 2024, we didn’t want to rush into hiring a massive team. We needed to understand what skill sets were essential and how the ecosystem functioned. That meant using tools that could take mental load off our plates and let us focus on strategy.

Initially, it was absolute chaos - we were like kids in a candy store, testing everything at once. Gemini for this, Copilot for that, another tool for something else. We switched between platforms multiple times a day with no real strategy.

Disillusionment

Looking back, we definitely rode the classic AI adoption curve. The initial excitement gave way to disappointment as we realized AI wouldn’t magically solve all our problems. The “trough of disillusionment” hit hard - we nearly abandoned AI altogether.

Strategic integration

Eventually, we found our rhythm. We got more selective and focused. AI went from being a novelty to a real asset in specific, repeatable tasks. Like any other tool we’ve adopted over the past year, the relationship matured into something sustainable.

No hype, just how our team actually uses AI

Shikha, Founder & CEO

After months of experimentation with AI tools, I’ve discovered that the real value isn’t in the flashy capabilities everyone talks about; it’s in the quiet, consistent ways AI has become woven into my daily workflow. These benefits didn’t appear overnight; they emerged gradually through trial, error, and some genuine frustration. Here’s where AI delivers real impact for me:

  • Research acceleration: As newcomers to building a company, AI became our crash-course tutor. It helped us compare business structures, condense hours of research, and present info in digestible formats. It didn’t decide for us, but it made us smarter, faster.

  • Content development: This is where AI earns its keep daily. I use it to jumpstart brainstorming sessions, generate alternative angles on topics, and adapt messaging for different channels. The first draft is rarely perfect, but having something to edit is infinitely faster than facing a blank page.

  • Industry intelligence: Running a marketing company means juggling clients across wildly different sectors. Before client calls, I use AI to build quick primers on their industry - key terminology, recent trends, common pain points.

  • Data pattern recognition: I feed our social engagement data to AI tools that help spot patterns I might miss: high-performing topics, optimal posting times, and audience-specific content themes. This bridges the gap between intuition and data-driven strategy, revealing insights that would take hours to uncover manually.

  • Marketing automation: Building marketing automation workflows means mapping edge cases and user paths. AI helps me think through these paths systematically and flags potential gaps in logic - saving us from costly missteps down the road.
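As a rough sketch of the pattern-spotting described above - not our actual pipeline, and with invented column names and sample data - the same kind of analysis can be done directly in pandas:

```python
import pandas as pd

# Invented sample engagement data; column names are assumptions, not a real schema.
posts = pd.DataFrame({
    "topic": ["AI", "branding", "AI", "SEO", "AI", "branding"],
    "hour_posted": [9, 14, 9, 18, 10, 15],
    "engagement": [120, 45, 150, 30, 130, 60],
})

# Average engagement per topic surfaces high-performing themes.
by_topic = posts.groupby("topic")["engagement"].mean().sort_values(ascending=False)

# Average engagement per posting hour hints at better timing.
by_hour = posts.groupby("hour_posted")["engagement"].mean().sort_values(ascending=False)

print(by_topic.index[0])       # topic with the highest average engagement
print(int(by_hour.index[0]))   # hour with the highest average engagement
```

In practice we hand the raw export to an AI tool, but a quick groupby like this is a useful sanity check on what the model claims to have found.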

While AI has earned its place in my toolkit, there are clear danger zones where relying on it can backfire, and boundaries I’ve learned to respect.

  • Robotic tone: A subtle but persistent issue I’ve run into is AI-generated content that leans on weasel words, repeated phrasing, and overly cheerful filler - sometimes even packed with unnecessary emojis. Without human editing, it turns mechanical - technically correct but emotionally hollow - stripping away the brand’s authentic voice, and readers pick up on that disconnect fast.

  • Strategic thinking: AI-generated marketing strategies lack originality. When I asked for campaign concepts, the results were painfully generic - the tools couldn’t capture what makes a brand unique or resonate emotionally with real people.

  • Relationship building: Marketing ultimately comes down to relationships, whether with customers, partners, or team members. The small personal touches, references to previous conversations, and authentic voice that build trust over time can’t be automated. Relationships are still what matter most.

I don’t play favorites, but my day-to-day revolves around my three go-to assistants - ChatGPT, Perplexity, and Claude. Lately, I’ve been leveling up by building custom GPTs and exploring new use cases where AI agents can help.

Nivedita, UI/UX Designer

Being a UI/UX Designer, I’ve used AI to support my design work in many ways. I’ve used it to:

  • Find competitors
  • Get color palette ideas
  • Understand color psychology and pick accessibility-friendly colors
  • Get design and layout ideas
  • Clear doubts or questions at any stage of the design process
  • Compare design tools and learn which one is better
  • Generate images

I’ve used it most heavily for image generation, and for crafting the prompts behind those images. Image generation through AI seems fascinating at first, but it becomes frustrating fast when you can’t get the style right. I’ve tried many tools and finally settled on ChatGPT with DALL·E because it offered built-in GPTs and an integrated image generator.

I’ve generated multiple images for a single section or context with AI - of which hardly one or two were finalized, and sometimes not even that. No matter how detailed my prompts were, how clearly I stated the desired style, or how many reference images I provided, the outputs would still lean toward a cartoon-like style that wasn’t suitable for my needs.

Two months ago, AI (ChatGPT) wasn’t like this. It showed that it listened, learned, and responded differently for each variation in the prompt. But for the past month, it’s been giving the same style of images no matter the prompt or GPT used. It promises to generate the correct image, but doesn’t - so ultimately, the trust I had in ChatGPT is fading, and I’m exploring other tools to get the job done.

No tool is completely user-friendly or reliable. Even with AI, humans often need to put in double the effort to get usable results.

AI keeps changing - and so must we. But as creatives, it’s hard to know what to rely on. Sometimes AI helps. Sometimes it betrays us. That’s the reality I’m facing right now.

Nitin, Co-Founder & Director

From my experience, different AI tools have their strengths - there’s no single one that does it all.

  • Perplexity is my go-to AI for data analysis, especially when dealing with both web-based and local datasets.

  • ChatGPT has been helpful for text creation and generating images - great for quick drafts, ideas, and visual content when needed.

  • Gemini proved smart and context-aware when I tried it for coding.

  • GitHub Copilot looks promising as an AI agent for VS Code. I haven’t fully explored it yet, but it’s definitely on my radar.

  • Microsoft Designer could be excellent if it had better editing functionality.

  • Claude sometimes surprises me with good coding, but it’s inconsistent.

  • Midjourney and Stable Diffusion are still on my list to explore.

Each tool plays a role. The real value comes from knowing when to use which one.

As useful as AI has been, there are times when it ends up wasting more time than it saves.

  • Inconsistent outputs – Sometimes AI gets it right, but other times it completely misses the point. I’ve had to redo tasks that were supposed to be “automated”.

  • Blind trust backfires – AI chats and agents often sound confident even when they’re wrong. If you don’t verify the output, it’s easy to get misled or make bad calls.

AI is powerful, but without oversight, it can quickly become a time sink instead of a time-saver.

Yashh, Web Developer

I have found AI to be highly effective for handling repetitive tasks. It helps generate boilerplate code and refactor code into smaller, reusable components. It performs especially well when working on small, specific chunks of code along with the context. I use it for quick debugging, technical research, and for simplifying new concepts.
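A tiny, invented sketch of the kind of refactor described above - not code from a real project - where repeated, copy-pasted markup-building collapses into one reusable helper:

```python
# Invented example of the "extract a reusable component" refactor AI handles
# well. All names and markup here are made up for illustration.

def render_card(title: str, body: str) -> str:
    """Reusable helper replacing three near-identical, copy-pasted HTML blocks."""
    return f'<div class="card"><h2>{title}</h2><p>{body}</p></div>'

cards = [
    render_card("Pricing", "Plans start small."),
    render_card("About", "Who we are."),
    render_card("Contact", "Say hello."),
]
print(len(cards))  # three cards from one definition instead of three copies
```

Given a small, specific chunk like this plus its surrounding context, AI tools tend to produce exactly this kind of extraction reliably.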

AI becomes less reliable when trying to generalize larger pages into scalable, reusable templates. Sometimes, even after giving proper context and instructions, it keeps giving the same broken code again and again.

I think it struggles with newer frameworks like Astro, as it gave me React syntax multiple times - which Astro doesn’t support.

It’s a helpful assistant - it will simplify your tasks, but it won’t do them completely.

Bivek Paul, Digital Marketing Executive

Like many others, my AI journey started with simple tasks like writing copy, meta titles, and descriptions. So far, I’ve found AI to be most effective when it comes to content generation. ChatGPT has been particularly useful in crafting clear and engaging text, helping me get started faster and edit more efficiently.

As I began experimenting with AI for more complex tasks like data analysis, the results became less consistent. Sometimes the output is accurate and insightful, but more often than not, it misses key context, forcing me to double-check or redo the work manually.

Currently, I lean on Perplexity for in-depth research and ChatGPT for writing and basic analytical tasks. ChatGPT tends to follow the prompt closely, offering tailored responses, while Perplexity sometimes adjusts outputs based on broader context or previous prompts, which can be helpful, but also occasionally off-target.

My journey with AI has only just begun, and I’m still exploring its full potential. I’ll be sharing more of my experiences and lessons in the coming months.

Test first. Recommend later.

AI is just one more tool in our growing tech stack, not a replacement for people, but a powerful complement to them. We’re not here to chase every trend or automate for the sake of it. We’re a team of curious, hands-on practitioners who believe in testing things ourselves before recommending them to others.

From experimenting with prompt engineering to stress-testing AI workflows across design, code, content, and strategy, we get our hands dirty so our clients don’t have to. That’s the only way we can speak with confidence, cut through the noise, and offer smarter, sharper solutions.

After two decades in marketing, we know that tools will come and go, but the mindset of curiosity, adaptability, and critical thinking is what truly drives innovation. The fundamentals still matter: understand the person behind the data, deliver real value, and tell a story that feels honest - because that’s what people actually remember.

So, what have you learned while experimenting with AI? Let’s connect. Let’s share. Because we’re all figuring this out together.