A Framework for Picking the Right Generative AI Project
Authors: Marc Zao-Sanders and Marc Ramos

There has been a huge amount of hype and speculation about the implications of large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, Meta’s LLaMA, and GPT-4. ChatGPT, in particular, reached 100 million users in two months, making it the fastest-growing consumer application of all time.
It isn’t clear yet just what kind of impact LLMs will have, and opinions vary hugely. Many experts argue that LLMs will have little impact at all (early academic research suggests that the capability of LLMs is restricted to formal linguistic competence), or that training on text alone, however vast the corpus, imposes severe limits. Others, such as Wharton professor Ethan Mollick, argue the opposite: “The businesses that understand the significance of this change—and act on it first—will be at a considerable advantage.”
What we do know now is that generative AI has captured the imagination of the wider public and that it is able to produce first drafts and generate ideas virtually instantaneously. We also know that it can struggle with accuracy.
Despite the open questions about this new technology, companies are searching for ways to apply it—now. Is there a way to cut through the polarizing arguments, hype, and hyperbole and think clearly about where the technology will hit home first? We believe there is.
Risk and Demand
Two questions can help you assess any potential use case. On risk: How likely and how damaging is the possibility of untruths and inaccuracies being generated and disseminated? On demand: What is the real and sustainable need for this kind of output, beyond the current buzz?
It’s useful to consider these variables together. Thinking of them in a 2 × 2 matrix provides a more nuanced, one-size-doesn’t-fit-all analysis of what may be coming. Indeed, risks and demands differ across different industries and business activities. We have placed some common cross-industry use cases in figure 3-1.
Think about where your business function or industry might sit. For your use case, how much is the risk reduced by introducing a step for human validation? How much might that slow down the process and reduce the demand?
The top-left box—where the consequence of errors is relatively low and market demand is high—will inevitably develop faster and further. For these use cases, there is a ready-made incentive for companies to find solutions, and there are fewer hurdles for their success. We should expect to see a combination of raw, immediate utilization of the technology as well as third-party tools that leverage generative AI and its APIs for their particular domain.
This is happening already in marketing, where several startups have found innovative ways to apply LLMs to generate content marketing copy and ideas and have achieved unicorn status. Marketing requires a lot of idea generation and iteration, messaging tailored to specific audiences, and the production of text-rich messages that can engage and influence audiences. In other words, there are clear uses and demonstrated demand. Importantly, there’s also a wealth of examples that can be used to guide an AI to match style and content. On the other hand, most marketing copy isn’t fact-heavy, and the facts that are important can be corrected in editing.
FIGURE 3-1

Picking a generative AI project

As your company decides where to start exploring generative AI, it’s important to balance risk and demand. One way to think about that is to ask two questions: “How damaging would it be if untruths and inaccuracies were generated and disseminated?” (risk) and “What is the real and sustainable need for this kind of output, beyond the current buzz?” (demand). Consider using this matrix—populated with common, cross-industry use cases—to identify the most valuable, least-risky applications for your company.
Looking at the matrix, you’ll see other opportunities that have received less attention, for instance, learning. Like marketing, creating content for learning—for our purposes, let’s use the example of internal corporate learning tools—requires engaging and effective text and a clear understanding of its audience’s interests. There’s also likely existing content that can be used to guide a generative AI tool. By priming it with existing documentation, you can ask it to rewrite, synthesize, and update the materials you have to better speak to different audiences or to make learning material more adaptable to different contexts.
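To make the priming step concrete, it can be sketched as a simple prompt template that anchors the model to material you have already vetted, rather than asking it to generate facts from scratch. This is a minimal illustration; the function name, audience labels, and wording below are hypothetical, not part of any particular tool.

```python
def build_rewrite_prompt(source_doc: str, audience: str, goal: str) -> str:
    """Assemble a prompt that primes a generative AI tool with existing
    documentation and asks it to rewrite that material for a new audience,
    while keeping the underlying facts fixed."""
    return (
        f"You are an instructional designer. Rewrite the material below "
        f"for {audience}. Goal: {goal}. "
        f"Preserve all facts exactly as stated; change only the tone, "
        f"structure, and level of detail.\n\n"
        f"--- SOURCE MATERIAL ---\n{source_doc}"
    )

# Example: turn a dry policy line into onboarding-friendly copy.
prompt = build_rewrite_prompt(
    source_doc="Expense reports are due by the 5th of each month.",
    audience="new hires in their first week",
    goal="a friendly two-sentence summary",
)
print(prompt)
```

Because the source material travels inside the prompt, a reviewer can check the output against it line by line, which is the human-validation step discussed above.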
Generative AI’s capabilities could also allow learning materials to be delivered differently—woven into the flow of everyday work or replacing clunky FAQs, bulging knowledge centers, and ticketing systems.
The other uses in the high-demand/low-risk box above follow similar logic: They’re tasks where people typically stay involved, and the risk of AI playing fast and loose with facts is low. Take the example of asking an AI to review text: You can feed it a draft, give it some instructions (you want a more detailed version, a softer tone, a five-point summary, or suggestions for how to make the text more concise), and review its suggestions. As a second pair of eyes, the technology is ready to use right now. If you want ideas to feed a brainstorm—steps to take when hiring a modern multimedia designer or what to buy a 4-year-old who likes trains for her birthday—generative AI will be a quick, reliable, and safe bet, as those raw ideas are unlikely to appear in the final product.
Filling in the matrix with tasks that are part of your company’s or team’s work can help draw similar parallels. Assessing risk and demand and considering the shared elements of particular tasks can give you a useful starting point and help you draw connections and see opportunities. It can also help you see where it doesn’t make sense to invest time and resources.
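The exercise of filling in the matrix can be sketched as a few lines of code: score each task on risk and demand, then sort the tasks into quadrants. The tasks, scores, and threshold below are illustrative placeholders, not the authors’ data.

```python
def quadrant(risk: int, demand: int, threshold: int = 3) -> str:
    """Map 1-5 risk and demand scores for a task to a recommendation."""
    high_demand = demand >= threshold
    low_risk = risk < threshold
    if high_demand and low_risk:
        return "experiment now"       # high demand, low risk: the box to start in
    if high_demand:
        return "wait for safeguards"  # real demand, but errors are costly
    if low_risk:
        return "low priority"         # safe, but little sustained need
    return "avoid for now"            # high risk and little demand

# Hypothetical tasks scored as (risk, demand) on a 1-5 scale.
tasks = {
    "draft marketing copy": (2, 5),
    "summarize internal training docs": (2, 4),
    "draft legal opinions": (5, 4),
    "generate novelty haikus": (1, 1),
}

for name, (risk, demand) in tasks.items():
    print(f"{name}: {quadrant(risk, demand)}")
```

The point is not the code but the discipline it encodes: every candidate task gets an explicit risk score and an explicit demand score before any resources are committed.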
The other three quadrants aren’t places where you should rush to find uses for generative AI tools. When demand is low, there’s little motivation for people to utilize or develop the technology. Producing haikus in the style of a Shakespearean pirate may make us laugh and drop our jaws today, but such party tricks will not hold our attention for very much longer. And in cases where there is demand but high risk, general trepidation and regulation will slow the pace of progress. Considering your own 2 × 2 matrix, you can put the uses listed there aside for the time being.
Low Risk Is Still Risk
A mild cautionary note: Even in corporate learning where, as we have argued, the risk is low, there is risk. Generative AI is vulnerable to bias and errors, just as humans are. If you assume the outputs of a generative AI system are good to go and immediately distribute them to your entire workforce, there is plenty of risk. Your ability to strike the right balance between speed and quality will be tested.
So take the initial output as a first iteration. Improve on it with a more detailed prompt or two. And then tweak that output yourself, adding the real-world knowledge, nuance, even artistry and humor that, for a little while longer, only a human has.
TAKEAWAYS
Generative AI is able to produce first drafts and generate ideas virtually instantaneously, but it can also struggle with accuracy and raise ethical problems. How should companies navigate the risks in pursuit of its rewards?
✓ In picking use cases, companies need to balance risk (How likely and how damaging is the possibility of untruths and inaccuracies being generated and disseminated?) and demand (What is the real and sustainable need for this kind of output, beyond the current buzz?).
✓ A 2 × 2 matrix that plots risk and demand can help companies choose the best generative AI projects and improve their chances of success.
✓ Companies should run experiments that fit into the high-demand/low-risk box of the matrix. The other three quadrants aren’t places where companies should rush to find uses for generative AI tools.