Relevant but Wrong: Using AI to Create Trick Questions
In this episode of AI Unveiled, Gaurav Kotak speaks with Bram Degeyter, VP of Product, and Jeroen Minnaert, Chief Software Architect from Showpad, a sales enablement platform with over 1,000 customers.
Jeroen and Bram are Showpad veterans and are leading the charge of AI innovations at the company.
We first discuss how, in the longer term, they believe every organization will have a customized language model that describes its value proposition, and that all sales content and collateral will be created on the fly by prompting the model.
Creating content on the fly at zero marginal cost has multiple implications. The role of product marketers will shift from crafting pixel-perfect content to prompting the model and building audit and feedback mechanisms to evolve it. Because this is the first time content reaches customers without being curated or reviewed, it's important to create guardrails, including policies on what can be shared with customers and consistent design principles that clearly indicate this in the user interface.
Bram also candidly discusses how incumbents like Showpad face some risk of being disrupted as content goes from static to dynamic. At the same time, they have the opportunity to leverage their data and workflows to drive this shift.
We hope you enjoy this episode, which covers both the big picture of where sales enablement is heading and the specific design and technology choices involved in building a scalable platform.
* 4:40 – The future of sales enablement
* 22:03 – Leveraging their AI-powered search index
* 26:32 – The realities of non-curated AI-generated content
* 37:22 – Personalizing LLMs
* 41:24 – Measuring success in your AI capabilities
BRAM DEGEYTER: One of the things that will become less and less relevant is a lot of the enterprise content management tooling. So a lot of the tooling around bringing in content, tagging that content, preparing it for distribution, putting context around it.
I think that will become less and less relevant. Especially for our largest enterprise customers, you're starting to push scale in terms of content management, so a lot of those scaling issues might actually disappear. That presents a tricky balance, because in the short term we still need to invest in the scalability of that.
Longer term, I actually expect a shift. Interestingly, knowledge within a sales organization has always been the sales collateral: it's been stored in documents, and those documents have been your digital source of truth, simply because that's how knowledge could be stored. With LLMs, and certainly with fine-tuning of LLMs, I think the core of that system will gradually shift toward a customized, personalized knowledge system.
So a large language model that's fine-tuned for one specific customer, for one organization, and that will actually be the carrier of that knowledge.
GAURAV KOTAK: I saw a press release recently where you launched a lot of new AI capabilities. What are they?
JEROEN MINNAERT: This is really our first step in getting AI into our product, but in a responsible way. We don't just want to bring AI for the sake of bringing AI. We want to really align it with the use cases our customers are choosing Showpad for.
There's one key theme we apply when thinking about which features to pursue. It's this notion that whatever we do, we focus on use cases that are hard to do right now but easy to verify: tasks that would take you a very long time to do manually, but whose results you can quickly check.
So you generate a couple of test questions and generate some right answers, which is that FAQ technology. But then the key part is we also generate wrong answers. We generate relevant, wrong answers for you. And I think that's super interesting to explore, because if you run through the exercise in your head, it's not just about generating any answer.
It's about generating something relevant that could plausibly be mistaken for the right answer, but obviously isn't. That makes a good training, right? That makes a good test. If the wrong answers aren't obvious, you can really test users on their knowledge rather than letting them eliminate the distractors at a glance.
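The "relevant but wrong" idea can be sketched in code. The episode doesn't describe Showpad's actual implementation, so everything below is a hypothetical illustration: a prompt template asking an LLM for on-topic but incorrect distractors, plus a simple validation step that rejects degenerate quiz items (duplicate options, or a distractor that restates the correct answer).

```python
# Illustrative sketch only: the prompt wording, data shapes, and validation
# rules are assumptions, not Showpad's actual implementation.
from dataclasses import dataclass


@dataclass
class QuizItem:
    question: str
    correct: str
    distractors: list  # "relevant but wrong" answer options


def build_distractor_prompt(question: str, correct: str, n: int = 3) -> str:
    """Build a prompt asking an LLM for plausible wrong answers:
    on-topic and superficially credible, but factually incorrect."""
    return (
        f"Question: {question}\n"
        f"Correct answer: {correct}\n"
        f"Write {n} wrong answers that stay on the same topic and are "
        f"plausible enough that an untrained rep might pick them, but "
        f"that are clearly incorrect to someone who knows the material."
    )


def validate_item(item: QuizItem) -> bool:
    """Reject degenerate items: empty options, duplicate options, or a
    distractor that merely restates the correct answer."""
    options = [item.correct] + item.distractors
    normalized = [o.strip().lower() for o in options]
    return all(normalized) and len(set(normalized)) == len(normalized)
```

The validation step matters because it is the "easy to verify" half of the theme: generating good distractors is hard, but checking that a generated item is at least structurally sound is cheap and automatic.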