When the generative AI hype fades

By now you’ve used a generative AI (GenAI) tool like ChatGPT to build an application, author a grant proposal, or write all those employee reviews you’d been putting off. If you’ve done any of these things or simply played around with asking a large language model (LLM) questions, you’ve no doubt been impressed by just how well GenAI tools can mimic human output.

You’ve also no doubt recognized that they’re not perfect. Indeed, for all their promise, GenAI tools such as ChatGPT or GitHub Copilot still need experienced human input to create the prompts that guide them, as well as to review their results. This won’t change anytime soon.

In fact, generative AI matters not so much for all the exam papers, legal briefs, or software applications it may write as for how it has heightened the importance of AI more generally. Once all the hype around GenAI fades—and it will—we’ll be left with increased investments in deep learning and machine learning, which may be GenAI’s biggest contribution to AI.

To the person with a GenAI hammer

It’s hard not to get excited about generative AI. On the software developer side, it promises to remove all sorts of drudgery from our work while enabling us to focus on higher-value coding. Most developers are still just lightly experimenting with GenAI coding tools like AWS CodeWhisperer, but others like Datasette founder Simon Willison have gone deep and discovered “enormous leaps ahead in productivity and in the ambition of the kinds of projects that you take on.”

One reason Willison is able to gain so much from GenAI is his experience: He can use tools like GitHub Copilot to generate 80% of what he needs, and he is savvy enough to know where the tool’s output is usable and where he needs to write the remaining 20%. Most developers lack his level of experience and expertise and may need to be less ambitious in their use of GenAI.

We go through a similar hype cycle with each wave of AI, and each time we have to learn to sift realistic hope from overreaching hype. Take machine learning, for example. When machine learning first arrived, data scientists applied it to everything, even when far simpler tools would have sufficed. As data scientist Noah Lorang once argued, “There is a very small subset of business problems that are best solved by machine learning; most of them just need good data and an understanding of what it means.” In other words, however cool it might make you look to develop algorithms that find patterns in petabytes of data, simple math or SQL queries are often a smarter approach.
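
To make Lorang’s point concrete, here is a minimal sketch (the table, columns, and data are invented for illustration): a business question that sounds like a data science project but is really just a GROUP BY.

import sqlite3

# A toy orders table stands in for "good data"; no model required.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("widget", 9.99), ("widget", 9.99), ("gadget", 24.50),
     ("gadget", 24.50), ("gizmo", 4.25)],
)

# "Which products drive the most revenue?" is a SQL query, not a model.
for product, revenue in conn.execute(
    "SELECT product, SUM(amount) FROM orders "
    "GROUP BY product ORDER BY SUM(amount) DESC"
):
    print(product, round(revenue, 2))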

In like manner, Diffblue CEO Mathew Lodge recently suggested that GenAI is often the wrong answer to a range of questions, with reinforcement learning offering a greater likelihood of success: “Small, fast, and cheap-to-run reinforcement learning models handily beat massive hundred-billion-parameter LLMs at all kinds of tasks, from playing games to writing code.” Lodge isn’t arguing that generative AI is mere hype. Rather, he’s suggesting that we need to recognize GenAI as a useful tool for solving some computer science problems, not all of them.
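
As a rough illustration of what “small, fast, and cheap-to-run” means, here is a toy tabular Q-learning agent—nothing like the models Lodge ships, and the corridor environment and rewards are made up for this sketch. The entire learned “model” is a five-row table.

import random

# Tabular Q-learning on a 5-cell corridor: start at cell 0, reward at cell 4.
N_STATES, ACTIONS = 5, (-1, +1)               # move left / move right
q = [[0.0, 0.0] for _ in range(N_STATES)]     # the whole "model"
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(500):                          # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else q[s].index(max(q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# The learned policy marches straight to the goal.
print([["left", "right"][row.index(max(row))] for row in q[:-1]])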

Trickle-up GenAI economics

If we step back and look at AI broadly, GenAI, despite its outsized impact on media hype and corporate investments, occupies a relatively small area within the overall AI landscape, as Nvidia engineer Amol Wagh illustrates. “Artificial intelligence” is the broadest way to talk about the interaction of humans and machines. As Wagh details, AI is a “technological discipline that involves emulating human behavior by utilizing machines to learn and perform tasks without the need for explicit instructions on the intended output.”

Does generative AI fit in there? Sure it does, but first comes machine learning, a subset of AI that refers to algorithms that learn from data to make predictions. Next is deep learning, a subset of machine learning, which trains computers to think more like humans using layers of neural networks. Finally comes GenAI, a subset of deep learning, which goes a step further to create new content based on inputs.
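
To make the predictive-versus-generative distinction concrete, here is a deliberately tiny sketch. Neither half involves neural networks—the Markov chain below is vastly simpler than the deep learning behind real GenAI—it only illustrates the difference between predicting a value from data and generating new content from inputs.

import random

# Predictive (machine learning in miniature): fit y = w*x by least squares.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]        # toy data, roughly y = 2x
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print("predicted y at x=5:", round(w * 5, 2))       # a prediction from data

# Generative (in miniature): a Markov chain that emits *new* sequences
# after learning which word tends to follow which in its input.
text = "the model learns the data and the model writes the code".split()
follows = {}
for a, b in zip(text, text[1:]):
    follows.setdefault(a, []).append(b)
word, out = "the", ["the"]
for _ in range(6):
    word = random.choice(follows.get(word, text))
    out.append(word)
print("generated:", " ".join(out))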

Again, a quick look at Nvidia data center spend skyrocketing in response to GenAI, or at GenAI’s impact on Vercel adoption, makes it easy to assume that GenAI is the end game for AI. GenAI is definitely having a moment, but it’s very likely—almost certain—that this moment will pass.

This isn’t to suggest that GenAI will fade into relative obscurity like Web3 (remember that?) or blockchain (sorry to bring up bad memories). Rather, we’ll become more realistic about where it fits and where it doesn’t within a much larger AI landscape. Sure, we can allow Massimo Re Ferré to wax rhapsodic about GenAI’s “tectonic” impact on computing. In his mind, we are “just scraping the surface of what GenAI can do” in a GenAI-driven future with “experts moving 10x faster and 10x more non-experts gaining access to IT in a way that they could not imagine with the interfaces we have today.”

Sure. Some variant of that future is possible, even likely. But GenAI is a subset of a subset of a subset of AI, and for me, it’s the larger AI picture that is more interesting and impactful, even if all our attention is on GenAI for the moment. This moment will pass. If, along the way, GenAI reminds us of just how much potential AI, machine learning, and deep learning have, and we invest accordingly, then it will have been well worth the hype.

