Last year we embarked on an internal AI product research effort in the marketing software space with the goal of finding a product idea that would provide scale and repeatable revenue.  We had a few data-driven ideas we were kicking around based on our past experiences working with marketers, and we were eager to gauge market receptivity.  One of our partners in the marketing automation space agreed to collaborate on the research with us by sharing test data and ideas they had heard from their customers.

By mid-summer, we had crafted a few very early AI models using our partner’s test data. Armed with our Jupyter notebooks, we conducted informal research with various marketing agency leaders. The feedback was very positive, but we knew that too few data points can send anyone down an abyss of despair.

Before investing heavily, we set out to test our ideas and the market’s interest in a more formal fashion.  As a small company, we didn’t have the budget or time for an extensive research effort, and we knew from past experience that no single one of the common research approaches used by startups and corporate innovation teams would fully meet our needs.

So we borrowed ideas from many off-the-shelf methodologies and composed the process illustrated below:

The critical elements guiding our approach were speed and validation. We couldn’t afford to perform endless research, and we didn’t want to invest in any substantial development effort until we could validate that our ideas were likely to succeed.

Soon after hearing the positive feedback from a few marketing agencies, we pulled together a more formal product research team, developed a research plan, and got to work.  We conducted dozens of idea and concept testing interviews and performed secondary market research over the course of two months.  We ran the research effort in an agile fashion, with the combined team meeting weekly to review findings and plan the next week’s activities.

Several weeks into the research effort, a few ideas were already percolating to the top. In fact, luckily for us, the initial idea we had started modeling in our Jupyter notebooks turned out to be one of our top ideas!  So, halfway through the research process, we crafted a sales deck and recruited pilot customers to experience our best idea with their own data.

We signed up three pilot customers within a month — all three were marketing leaders who had been our research interview candidates!  And one even paid us for the pilot.  We proceeded to test our models with the pilot customers for several months, then determined whether we should proceed with formal development.

We learned several key things through this process:

  • Digital marketers’ jobs are getting out of control — they are drowning in data but aren’t comfortable letting automation take over their work yet
  • The competitive landscape for AI-based marketing solutions is exploding, and marketing leaders are inundated daily with vendors trying to sell them new solutions
  • Taking our AI product idea to market would require a significant amount of custom services in today’s marketing environment 

We determined:

  • The timing was too early for a product like ours to gain the traction we hoped for,
  • Our marketing and sales costs to compete for the marketer’s wallet would be enormous (more than we could afford), and
  • We would need to build a substantial professional services organization around our AI product so customers would receive the value they expected.

Based on these findings, we resolved to stop further research and development. Instead, we decided to “open source” all of our research so that we could openly share our process, tools, and learnings.

Over the course of the next two months, we’ll be publishing more detailed articles as part of this AI Product Research Series.  

Let us know what you think.  And, if you have any stories to share, send them our way!