The HITL Maturity Curve: From ChatGPT to Enterprise AI

Nick Donaldson
Senior Director of Growth, Knak

Published Oct 28, 2025


Summary

You're not behind on AI. Learn the three maturity stages from individual ChatGPT use to programmatic systems, and how to progress deliberately.

We've all experienced the AI maturity curve, even if we can't quite put our finger on it. Maybe it's the fatigue of copying and pasting the same prompts into ChatGPT or Claude. Maybe it's the top-down pressure to "adopt AI," even when the directive isn't entirely clear. "Go forth and use ChatGPT" is not a directive that translates into smart usage for your specific job.

OpenAI reports more than 800 million weekly active users, with over 4 million developers building on its platform. This isn't just tech enthusiasts and early adopters. It's the full breadth of the business and creative world using AI to improve their work.

But here's the disconnect: if the mandate to use more AI is confusing you, you're not alone. A recent MIT study found that 95% of generative AI business efforts are failing, with only 5% achieving meaningful revenue growth. We're not cashing in on the promise of AI. So what's holding us back? A lack of understanding of the maturity curve and how to progress along it.

The gap between where you are today and those impressive case studies isn't as wide as it seems. You're already in these tools. You already know how to use them to get positive outcomes. But you may have hit a ceiling. You're thinking: this is repetitive, the copy-pasting is killing me, and I'm just shuttling the same text between tools. What we really want is a partnership with AI that delivers the results the hype cycles have forecast.

The three stages of AI maturity in marketing

This isn't meant to be a definitive framework, just a thought exercise to help you understand where you are and where you might go next.

Stage 1: Individual exploration

Stage one is about getting good with your own prompting. You're learning to use applications like ChatGPT and Claude to accomplish your objectives, whatever those may be. You're building a mental library of what works and discovering that specificity matters, examples improve outputs, and iteration beats trying to nail it on the first try.

These skills are foundational, but they hit a ceiling: the point where you realize you can't scale this approach. The copy-pasting becomes the bottleneck. You're productive individually, but the process doesn't extend beyond you.

You know you're ready to move beyond stage one when the same prompts keep appearing in your workflow, when you wish your team could access what you've learned, or when managing prompts takes more time than using the outputs.

Stage 2: Team tools and shared systems

Stage two is when you and your team are becoming expert-level users together. You're not just sharing prompts internally anymore. You're using custom GPTs, Claude Projects, and shared resources to make sure everyone has the context needed to operate effectively.

At this stage, the AI becomes less of a personal assistant and more of a team resource. You're documenting what works, creating templates others can use, and establishing quality standards. The knowledge that lived in your head is now captured in systems your whole team can access.

The shift here is organizational, not just technical. You're moving from individual tricks to team processes. You're treating AI outputs like any other deliverable, with review processes and quality checks.

You know you're outgrowing stage two when your custom tools are being used constantly, when multiple people depend on them, and when the manual review process becomes a bottleneck. You start thinking: if we could just automate this next part, we could really scale.

Stage 3: Programmatic and agentic systems

Stage three is when you start thinking about how to integrate AI into your workflows in a programmatic, mostly automated way. The human role shifts from doing the work and copy-pasting between tools to training AI systems to make those judgment calls themselves.

Does this require coding? Yes and no. Custom application development lets you interact with AI programmatically through APIs to various LLMs, so working with internal developers to build custom solutions becomes valuable. But plenty of platforms today also handle workflow automation at scale: Make, Zapier, and similar tools let you move up the curve without building everything from scratch.
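To make "programmatic" concrete, here is a minimal sketch of calling an LLM through an API instead of a chat window. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompt, and subject-line use case are placeholders for illustration, not a prescription for any particular stack.

```python
# Minimal sketch: generating email subject-line variants programmatically
# instead of pasting the same prompt into a chat window by hand.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_VOICE = (
    "You write email subject lines for our brand: plain language, "
    "no exclamation points, no clickbait, under 60 characters."
)

def subject_line_variants(campaign_brief: str, n: int = 5) -> list[str]:
    """Return n subject-line options for a campaign brief."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": BRAND_VOICE},
            {"role": "user", "content": f"Write {n} subject lines for: {campaign_brief}"},
        ],
    )
    text = response.choices[0].message.content
    # Naive parsing for the sketch: one option per non-empty line.
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for option in subject_line_variants("Fall webinar on email personalization"):
        print(option)
```

The point isn't this particular script. It's that once the prompt lives in code, a human reviews the outputs instead of retyping the inputs, which is exactly the shift stage three describes.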

The other component is agentic systems within the tools you use every day. Salesforce has powerful agentic AI capabilities, and you can expect these developments to continue across marketing platforms. The AI isn't just generating content anymore. It's triggering actions, routing work, and making decisions within parameters you've set.

This is where you see the dramatic efficiency gains from case studies. Coca-Cola generating hundreds of localized assets. Starbucks personalizing offers to millions of customers. These outcomes are possible because AI operates systematically, not ad hoc.

But here's the critical part: stage three is built on learning from stages one and two. McKinsey research shows only about one-third of organizations have reached transformative maturity in their marketing technology. You can't skip ahead. The companies succeeding with programmatic AI started with people experimenting individually, learned what worked, built team tools, and only then invested in larger systems.

The jobs-to-be-done framework

A useful way to think about progression is understanding what job you're hiring AI to do at each stage:

Individual use:

  • Help me brainstorm ideas
  • Give me a first draft to edit
  • Summarize research
  • Generate options I can choose from
  • Success metric: Did it save me time and improve my output?

Team tools:

  • Help our team produce consistent quality
  • Capture institutional knowledge
  • Reduce variance in how different people approach tasks
  • Success metric: Are multiple people using this effectively? Is quality more consistent?

Programmatic systems:

  • Personalize content for thousands of segments
  • Produce localized variations across markets
  • Optimize in real time based on performance
  • Success metric: Are we reaching more customers? Is conversion improving? Are we more efficient at scale?

When you frame it this way, the question isn't "should we be building enterprise AI systems?" It's "what job do we actually need done, and what's the simplest solution that accomplishes it?"

Building on solid foundations

AI valuations are approaching levels seen before the dot-com crash. Whether that signals growing pains for the AI industry or a larger systemic problem, it doesn't change the fact that AI is being used more and more in our day-to-day work. The mandate from executive teams is to use these systems. But we also want to keep our approach grounded.

Where are you on the curve? Wherever that is, getting your tools set up properly is the real first step. Here's what matters:

Clean data is the big one. Make sure your CRM and marketing databases are clean, organized, and have the fields needed to do the AI personalization you're dreaming about. If your data is messy, AI will amplify those problems rather than solving them.

Clear brand guidelines. Documenting your brand voice allows you to scale it through AI. When one person is using AI, they can apply judgment about brand fit. When a team is using shared tools, those standards need to be documented. When you're building programmatic systems, guidelines need to be explicit enough to inform automated decisions.
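As a rough illustration of what "explicit enough to inform automated decisions" can mean, here is a small sketch that encodes a few brand rules as data a program can check before anything ships. The specific rules and the sign-off are invented for the example; a real rule set would come from your documented guidelines.

```python
# Sketch: brand guidelines expressed as machine-checkable rules.
# The rules below are hypothetical examples, not recommendations.
BRAND_RULES = {
    "max_subject_length": 60,
    "banned_phrases": ["act now", "limited time", "!!!"],
    "required_signoff": "The Acme Team",  # hypothetical brand sign-off
}

def check_email(subject: str, body: str) -> list[str]:
    """Return a list of guideline violations; an empty list means it passes."""
    problems = []
    if len(subject) > BRAND_RULES["max_subject_length"]:
        problems.append(
            f"Subject is {len(subject)} chars; limit is {BRAND_RULES['max_subject_length']}."
        )
    for phrase in BRAND_RULES["banned_phrases"]:
        if phrase.lower() in f"{subject} {body}".lower():
            problems.append(f"Contains banned phrase: {phrase!r}")
    if BRAND_RULES["required_signoff"] not in body:
        problems.append("Missing the standard sign-off.")
    return problems

print(check_email("Act now: our biggest sale ever", "Hi there..."))
# Flags the banned phrase and the missing sign-off.
```

A human still decides what ships; rules like these just catch the obvious misses before review, which is where judgment moves as you mature.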

Tool orchestration. Think about how outputs move between platforms. A tool like Knak operates between your CRM, design tools like Figma, project management tools like Monday, and marketing automation platforms like Marketo or HubSpot. Having these connections in place creates a conduit for value. AI is ultimately about outputs, and those outputs don't go into the ether. They flow along the human-built processes that connect your systems together.

Human judgment at every stage. Whether you're editing a ChatGPT output or reviewing assets from an enterprise system, humans make the calls about what's on-brand, strategically sound, and good enough to ship. This doesn't change as you mature. What changes is where in the process that judgment gets applied.

The teams that figure this out aren't necessarily the ones with the biggest AI budgets. McKinsey found that 34% of organizations cite under-skilled talent as a key hurdle, and 47% struggle with stack complexity and integration challenges. The winners are the ones willing to experiment, iterate, and build processes that augment human capabilities rather than trying to eliminate them.

You're not behind

Here's what matters most: you're not behind, and you're not falling behind. But to scale AI, we need to approach it in a smart, deliberate way.

Every company succeeding with AI at scale started exactly where you are now. They had people experimenting individually. They built team tools to share what worked. They invested in programmatic systems only after proving value at smaller scales.

The competitive advantage doesn't go to teams that race to build the most sophisticated systems. It goes to teams that learn effectively at each stage and make smart decisions about when to level up.

If you're at stage one, get really good at stage one. Build your prompt library. Learn what AI handles well and what needs heavy editing. This knowledge makes stage two possible.

If you're at stage two, focus on getting team tools right. Document what works. Establish review processes. Train your team. Build confidence that outputs meet your quality bar. This foundation makes stage three viable.

If you're ready for stage three, start small. Pick one high-value workflow. Build it, test it, learn from it. The enterprises with impressive AI implementations didn't build everything at once. They built iteratively.

The maturity curve isn't a race. It's a learning journey. Most teams are still at stage one or two. McKinsey's research confirms this: 65% of organizations remain in early maturity stages. The ones making it to stage three put in the work at every stage before it.

You're right where you need to be, as long as you're learning and progressing deliberately. And given the 95% failure rate of AI projects, moving slowly and learning deeply might be the smartest strategy of all.

