Where AI comes in handy (and where it doesn't)

As published in Forbes

We are still in the early days of the AI revolution. Many organizations are still experimenting with this technology and figuring out how to work it into their operations. Teams are starting to determine what today’s large language models (LLMs) do well and where they come up short. The technology is so new and complex that, for now, we’re in what technologists refer to as a “capability overhang”: a period in which these systems hold latent capabilities (and dangers) that their users have yet to uncover, and uncovering them will be a protracted process.

As the founder of a content marketing agency, I’ve spent countless hours trying to understand these new models and figure out how they can make writers’ jobs easier. In short, I’m intrigued—but with caveats. This is obviously important technology that could lead to more and better content. But it’s still early days, and my vision for AI in a marketing context comes with reservations given the technology’s current shortcomings.

So, now that we’re months out from the launch of GPT-4, I want to walk marketers through the areas where I believe AI is remarkably useful and those where it probably won’t help you much.

Benefits

• Generating theories: In a recent interview, Greg Jensen, the co-CIO of investment management firm Bridgewater Associates, put it this way: “language models are good at certainly generating theories, any theories that already exist in human knowledge, and putting those things connected together. They’re bad at determining whether they’re true.” In other words, the models act as a jumping-off point for thinking more deeply about ideas. In a marketing agency context, this could help a team generate possible campaign ideas or think through a creative problem. But it won’t start and finish your whole campaign.

• Helping with decisions: Today’s AI models are total workhorses when it comes to analyzing large data collections. Their ability to quickly notice patterns and extract information makes them a useful tool for any business that has lots of decisions to make (so, every business). Is AI ready to completely replace executive decision-making? Of course not. But it’s another viewpoint to have in the room. Think of an LLM as an extremely quick-thinking virtual assistant that can analyze good data and then generate ideas worth considering.

• Generating volume: While businesses are still figuring out how to work AI into decision-making, it’s already making a difference on marketing teams. For marketers, the most noticeable difference between GPT-4 and previous models is just how much content it can generate. In a matter of seconds, you can have a lengthy first draft of a blog post, white paper or even an entire marketing campaign. Just remember that these are first drafts, and you’ll need humans to revise, fact-check and expand upon whatever AI generates. You’ll also need clever prompting to make sure the first draft is even workable (see the short sketch after this list).

• Modeling ideas: Nowhere is AI’s impact more mind-blowing than in the realm of biology. Already, scientists are using AI to model new proteins in detail. The possibilities this presents for treating, preventing and even curing disease are profound and exciting. Beyond biology, almost any industry can use AI to quickly generate a mockup of a proposed design. Once again, this shows that the technology is best at quickly combining and distilling information in a way that makes human work easier.
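
To make the prompting point above concrete, here is a minimal sketch of how a marketing team might ask a model for a workable first draft. It assumes the OpenAI Python SDK and an API key in the environment; the model name, the brief and the prompt wording are illustrative assumptions rather than a recommendation, and whatever comes back still needs the human revision and fact-checking described above.

from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A hypothetical brief: the more constraints you give the model up front
# (audience, tone, structure, length), the more workable the first draft.
brief = (
    "Write a first draft of a 600-word blog post for B2B marketers about "
    "using AI to speed up content production. Use a friendly but direct tone, "
    "open with a concrete pain point, include three subheadings, and end with "
    "a short call to action. Do not invent statistics or quotes."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model your team has access to
    messages=[
        {"role": "system", "content": "You are a senior content writer at a marketing agency."},
        {"role": "user", "content": brief},
    ],
    temperature=0.7,
)

# This is a first draft only: a human still revises, fact-checks and expands it.
first_draft = response.choices[0].message.content
print(first_draft)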

Challenges

• Getting facts right: Perhaps the most glaring shortcoming of today’s generative AI models is their tendency to be confidently wrong. The models draw on information from the internet, and as we know all too well, the internet is full of plainly wrong information. Worse, the models also “hallucinate,” confidently producing plausible-sounding claims that aren’t supported by any source at all. This means that, for now, AI output requires a great deal of human fact-checking, and the models don’t yet serve as an ideal research aid.

• Structuring an argument: Recent research into AI output points to a general conclusion: AI models are great at generating meaningful sentences but not yet at structuring meaningful arguments. (Look no further than this AI-generated court filing that looked legitimate but was supported by entirely specious, invented examples.) AI models will often follow predetermined argumentation schemes they’ve seen elsewhere, but the “meat” of the argument is lacking. While it’s impressive that AI can produce arguments with beginnings, middles and ends, these arguments are rarely original or persuasive. Once again, humans will need to intervene with their own critical thinking to strengthen AI output, whether that means fact-checking AI-generated content or adding original research to AI-generated outlines.

• Writing in an original voice: Social media has been full of examples of AI models accurately emulating the writing of Shakespeare, T.S. Eliot or even Jerry Seinfeld. AI does well when it takes in a large collection of data (e.g., hundreds of episodes of Seinfeld) and then mimics it. But AI models haven’t really figured out how to combine these known voices into something original and new. AI’s written output is either in its default friendly voice or amounts to an impressive impersonation. This means that the world of original media content might be a few years away from total AI disruption.

• Reasoning ethically: While this might be a dark note to end on, it’s worth remembering that AI doesn’t operate from a consistent ethical framework. It doesn’t inherently respect privacy or account for surveillance concerns, and it comes with built-in biases. Blindly outsourcing all business decisions to AI models would have disastrous consequences. AI must always be a precursor to human ingenuity, offering the information and ideas people need to make better decisions.

The use cases for AI are still taking shape. In reviewing what it does well and where it comes up short, a pattern emerges: AI helps humans do their jobs but can’t replace us. We don’t yet know what GPT-5 will bring or when it will arrive. But for now, teams that understand what AI does well (and what it doesn’t) are poised to work smarter and faster.