Over the past few months, there has been a huge amount of hype and speculation about the implications of large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, Meta’s LLaMA, and, most recently, GPT-4. ChatGPT, in particular, reached 100 million users in two months, making it the fastest-growing consumer application of all time.

It isn’t clear yet just what kind of impact LLMs will have, and opinions vary hugely. Many experts argue that LLMs will have little impact at all (early academic research suggests that their capabilities are restricted to formal linguistic competence), or that even a near-infinite volume of text-based training data will remain severely limiting. Others, such as Ethan Mollick, argue the opposite: “The businesses that understand the significance of this change — and act on it first — will be at a considerable advantage.”

What we do know now is that generative AI has captured the imagination of the wider public and that it is able to produce first drafts and generate ideas virtually instantaneously. We also know that it can struggle with accuracy.

Despite the open questions about this new technology, companies are searching for ways to apply it — now. Is there a way to cut through the polarizing arguments, hype, and hyperbole and think clearly about where the technology will hit home first? We believe there is.

Risk and Demand

Two variables can cut through the noise: risk and demand. On risk: how likely, and how damaging, is the possibility of untruths and inaccuracies being generated and disseminated? On demand: what is the real and sustainable need for this kind of output, beyond the current buzz?

It’s useful to consider these variables together. Arranging them in a 2×2 matrix provides a more nuanced, one-size-doesn’t-fit-all analysis of what may be coming. Indeed, risk and demand differ across industries and business activities. We have placed some common cross-industry use cases in the table below.

Think about where your business function or industry might sit. For your use case, how much is the risk reduced by introducing a step for human validation? How much might that slow down the process and reduce the demand?

The top-left box — where the consequence of errors is relatively low and market demand is high — will inevitably develop faster and further. For these use cases, there is a ready-made incentive for companies to find solutions, and there are fewer hurdles to their success. We should expect to see a combination of raw, immediate use of the technology as well as third-party tools that leverage generative AI and its APIs for their particular domains.

This is happening already in marketing, where several start-ups have found innovative ways to apply LLMs to generate content marketing copy and ideas, and have achieved unicorn status as a result. Marketing requires a lot of idea generation and iteration, messaging tailored to specific audiences, and the production of text-rich messages that can engage and influence those audiences. In other words, there are clear uses and demonstrated demand. Importantly, there’s also a wealth of examples that can be used to guide an AI to match style and content. And on the risk side, most marketing copy isn’t fact-heavy, and the facts that do matter can be corrected in editing.

Looking at the matrix, you can find other opportunities that have received less attention. Take learning. Like marketing, creating content for learning — for our purposes, let’s use the example of internal corporate learning tools — requires a clear understanding of the audience’s interests and text that engages and instructs. There’s also likely existing content that can be used to guide a generative AI tool. By priming a tool with existing documentation, you can ask it to rewrite, synthesize, and update the materials you have to better speak to different audiences, or to make learning material more adaptable to different contexts, as in the sketch below.
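To make that priming step concrete, here is a minimal sketch using OpenAI’s Python client (any chat-style LLM API would work the same way). The model name, file path, and audience description are illustrative assumptions, not a recommended configuration.

```python
# Sketch: priming an LLM with existing documentation and asking it to
# rewrite the material for a different audience. Assumes the `openai`
# package is installed and OPENAI_API_KEY is set; the model name and
# file path are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("onboarding_guide.md") as f:
    source_material = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": ("You rewrite internal training material. Preserve every "
                     "fact; change only tone, structure, and emphasis.")},
        {"role": "user",
         "content": ("Rewrite the following onboarding guide as a concise "
                     "checklist for experienced hires:\n\n" + source_material)},
    ],
)

print(response.choices[0].message.content)  # a first draft, not a final one
```

The system message instructing the model to preserve facts is the important design choice here: it narrows, though it does not eliminate, the room for invented details.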

Generative AI’s capabilities could also allow learning materials to be delivered differently — woven into the flow of everyday work or replacing clunky FAQs, bulging knowledge centers and ticketing systems. (Microsoft, a 49% shareholder in OpenAI, is already working on this, with a series of announcements planned for this year.)

The other uses in the high-demand/low-risk box above follow similar logic: They’re for tasks where people remain involved, and where the risk of AI playing fast and loose with the facts is low. Take the example of asking the AI to review text: You can feed it a draft, give it some instructions (you want a more detailed version, a softer tone, a five-point summary, or suggestions for how to make the text more concise), and review its suggestions, as in the sketch below. As a second pair of eyes, the technology is ready to use right now. If you want ideas to feed a brainstorm — steps to take when hiring a modern, multimedia designer, or what to buy a four-year-old who likes trains for her birthday — generative AI will be a quick, reliable, and safe bet, as those ideas are unlikely to end up in the final product unreviewed.
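Here is a hedged sketch of that review loop, again using OpenAI’s Python client as a stand-in for any chat-style LLM API. The two-step structure (ask for a critique, then ask for a revision) mirrors the iterative prompting we recommend below; the model name, file path, and prompts are illustrative.

```python
# Sketch of a two-step review loop: request a critique of a draft,
# then feed the critique back and request a revision. Model name,
# file path, and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("announcement_draft.txt") as f:
    draft = f.read()

messages = [
    {"role": "user",
     "content": ("Review this draft. Suggest a softer tone and list five "
                 "places where it could be more concise:\n\n" + draft)},
]
critique = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant",
                 "content": critique.choices[0].message.content})

# Iterate: ask the model to apply its own suggestions.
messages.append({"role": "user",
                 "content": "Now apply those suggestions and return the revised draft."})
revision = client.chat.completions.create(model="gpt-4", messages=messages)
print(revision.choices[0].message.content)  # still review before shipping
```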

Filling in the 2×2 matrix above with tasks that are part of your company’s or team’s work can help you draw similar parallels. Assessing risk and demand, and considering the shared elements of particular tasks, gives you a useful starting point for drawing connections and spotting opportunities. It can also show you where it doesn’t make sense to invest time and resources.

The other three quadrants aren’t places where you should rush to find uses for generative AI tools. When demand is low, there’s little motivation for people to use or develop the technology. Producing haikus in the style of a Shakespearean pirate may make us laugh and drop our jaws today, but such party tricks will not hold our attention much longer. And where there is demand but the risk is high, general trepidation and regulation will slow the pace of progress. Whatever lands in those quadrants of your own 2×2 matrix can be set aside for the time being.

Low Risk Is Still Risk

A mild cautionary note: Even in corporate learning, where, as we have argued, the risk is low, there is still risk. Generative AI is vulnerable to bias and errors, just as humans are. If you assume the outputs of a generative AI system are good to go and immediately distribute them to your entire workforce, you take on plenty of risk. Your ability to strike the right balance between speed and quality will be tested.

So take the initial output as a first iteration. Improve on it with a more detailed prompt or two. And then tweak that output yourself, adding the real-world knowledge, nuance, even artistry and humor that, for a little while longer, only a human has.