DALL·E generated image

I've ventured deep into the artificial intelligence (AI) ecosystem for the last 14 months. I've replicated my voice to turn written articles into podcasts; created artwork (and animations) out of thin air; built a custom GPT that only answers questions about marketing new homes based on everything I've ever written or said; and so much more.

I share this so that you clearly understand that what I am about to tell you comes from firsthand experience, not from a fear of the unknown. AI has incredible future potential, but it is not yet ready to accomplish most of the tasks futurists and experts would like you to believe it can.

I want to divide this article into two parts: what AI can do right now and what it can't (yet) do, along with why it is essential to stay within the current abilities of this technology. If you are considering using consumer-facing AI tools, please pay close attention.

What AI Can Do Well—Right Now

Inspiration: Generative AI serves as a muse for the modern creator. It's about more than just generating content; it's about sparking the flame of creativity in areas we might not yet have ventured into.

Whenever you use DALL·E, Midjourney, or Stable Diffusion to create an image, you get several unique concepts, each embodying the idea in a new way (the image for this story came from DALL·E, which "read" the article and generated an illustration for it).
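As a concrete illustration, here is a minimal sketch of asking for several concepts from a single prompt. It assumes the OpenAI Python SDK and an API key in your environment; the prompt and the number of calls are placeholders, not the workflow behind this article's image.

```python
# A minimal sketch: request several image concepts from one prompt.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
prompt = "Editorial illustration of a builder reviewing AI-generated marketing ideas"

urls = []
for _ in range(3):  # each call returns a different interpretation of the prompt
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,  # dall-e-3 returns one image per request
    )
    urls.append(result.data[0].url)

for i, url in enumerate(urls, start=1):
    print(f"Concept {i}: {url}")
```

Midjourney and Stable Diffusion expose different interfaces, but the pattern is the same: one idea in, several interpretations out, and a human picks what fits.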

This isn't about outsourcing creativity entirely; it's about expanding our creative horizons. A human still needs to review the output and determine whether what was created fits the desired purpose. Getting yourself or your team creatively "unstuck" has become much more manageable.

Summarization: In our digital age's vast sea of information, AI serves as an invaluable lighthouse, guiding us toward the essence of what we are seeking. Take, for example, the task of digesting hundreds of research papers on a specific topic.

An AI tool can synthesize these into a coherent summary, highlighting the key findings, trends, and even areas of contention. This capability doesn't just save time; it transforms an overwhelming amount of data into actionable insights, making it possible for us to stay informed and make decisions based on a broad spectrum of information.
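As a rough sketch of how that kind of summarization is often wired up, the example below hands a batch of abstracts to a language model through the OpenAI Python SDK; the model name and placeholder abstracts are assumptions for illustration only.

```python
# A minimal summarization sketch, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

abstracts = [
    "Paper 1 abstract text ...",
    "Paper 2 abstract text ...",
    # ... hundreds more in a real workflow
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You summarize research for a busy executive."},
        {"role": "user", "content": "Summarize the key findings, trends, and points of "
                                    "disagreement across these abstracts:\n\n" + "\n\n".join(abstracts)},
    ],
)

print(response.choices[0].message.content)  # still needs a human sanity check
```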

The summary often lands at "good enough" rather than perfect. We must determine which topics are mission-critical to understand deeply and where a summary hitting most of the main points will do. After all, you don't internalize what you are learning in the same way when you only read a summary.

Variation: AI's ability to offer variation is like having an endless palette of choices when you thought only primary colors existed. I know of more than one home builder who uses AI tools like ChatGPT, Jasper.ai, and others to rewrite responses from their warranty department.

The variations are not merely rephrasings but entirely new approaches, each maintaining the core message but with a fresh twist that might be important to the customer's situation. This ability to generate diverse options from a single prompt ensures that content remains vibrant and engaging, preventing the staleness that often comes with repetition.
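For the curious, here is a hedged sketch of what generating those variations can look like: one warranty reply, three rewrites. It assumes the OpenAI Python SDK; the draft text, model, and settings are invented for illustration.

```python
# A minimal sketch: ask for three distinct rewrites of one warranty reply.
# Assumes the OpenAI Python SDK; the draft text is invented for illustration.
from openai import OpenAI

client = OpenAI()

draft = ("Thanks for reporting the drywall crack. We will schedule a repair "
         "visit within the warranty guidelines.")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Rewrite customer-service replies in a warm, "
                                      "professional tone without changing the commitments made."},
        {"role": "user", "content": draft},
    ],
    n=3,              # ask for three separate variations
    temperature=0.9,  # higher temperature encourages more varied wording
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Variation {i} ---\n{choice.message.content}\n")
```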

Again, humans must still review every response before sending it out. One builder in Texas shared that its AI tool always wanted to accept full responsibility for everything, even items clearly not covered by the home warranty.

Pattern Recognition: Pattern-recognition prowess is the linchpin of AI's current abilities. It doesn't just discover the obvious patterns; it uncovers the subtle, hidden connections that escape the human eye. This level of analysis, finding the needle in the haystack at scale, is something AI can do with astonishing efficiency and with pretty good accuracy.

The findings should be viewed as suggestions for further study rather than concrete conclusions of fact. The upside is that your limited resources can be funneled toward the discoveries with the most promise rather than spent digging through the entire corpus of data with hours of human labor.
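The article stays general here, so the sketch below is only one hypothetical illustration of the idea: a classic anomaly-detection model (scikit-learn's IsolationForest) surfacing the handful of records most worth a human's limited follow-up time. The data is synthetic and the settings are placeholders.

```python
# A hypothetical needle-in-the-haystack sketch using classic anomaly detection.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
records = rng.normal(size=(5000, 6))   # stand-in for real operational data
records[::500] += 5                    # inject a handful of unusual rows

model = IsolationForest(contamination=0.01, random_state=7)
labels = model.fit_predict(records)    # -1 marks suspected anomalies

candidates = np.where(labels == -1)[0]
print(f"{len(candidates)} of {len(records)} records flagged for human follow-up")
```

The output is a short list of leads for a person to investigate, not a verdict.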

What AI Can't Do (Yet)

Intelligence: Studying AI systems that process and generate language shows how daunting the challenge of making software think like humans really is. Even though these programs can do things that seem intelligent, like talking or writing like a person, they don't understand what they're saying or feel anything.

Intelligence is about something more than remembering or mixing information in new ways. It's about really "knowing" things, thinking things through, learning from what happens, and handling new situations in a way that makes sense. People are brilliant not just because we can think but because we understand the world around us, make wise choices, and feel emotions deeply. These are things that AI can't do yet.

When we observe AI, we see glimpses of how fantastic it will one day be, but we need to understand its current limits better. As I said above, it can create new things, develop creative ideas, summarize vast amounts of information, and offer different takes on topics much faster than a person could. But it does all this by following rules set by humans without really "getting" what it's working on.

AI works by spotting patterns and processing data, not by deep thought or self-awareness like humans have. This difference matters not just in theory but in how we use AI in real life. As we try to do more with AI, we need to remember these limits and the importance of having people "in the loop." The term refers to the learning cycle: we cannot yet allow these systems to function without human review and adjustment being part of it.

Truth: A large language model is not trying to lie to you when you get incorrect information from it. It doesn't know what truth is—and can't reason its way to find it. This is a massive problem for everyone—technological beings and humans alike.

"Beware The Top Google Search Result. It Might Be Wrong" is a fascinating article from The Wall Street Journal about how content made by AI seems to be correct by uninformed humans but is not.

The signals Google uses around content to help determine rankings are click-through rate, session duration, and event tracking. None of those are "truth" metrics. They are metrics that show how valuable humans think something is to them. When humans don't know better—and don't know AI was involved in creating something—they are more likely to trust it as being correct.

The irony is that humans often aren't thinking as critically as they should be, either. We skim a lot. We read headlines without ever reading the article. We don't try to recreate the math; we assume the answer is correct. Humans are lazy, which means we don't want to check AI's work either. Fact-checking it requires expending energy and having enough expertise in the topic to edit the output correctly.

ChatGPT can recommend a stunning itinerary and travel plan for your two-week vacation to Europe that will blow your mind with detail and insight. Unfortunately, you won't realize something is incorrect or outright fantasy until you are four days into your trip and are stuck with the consequences of not double-checking each step of the recommendation.

For now, a human MUST be in the feedback loop or double-checking the work of an AI system anytime it is doing important work. Please make sure your definition of important work is correct before hitting "launch" on that AI test.
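In the simplest possible terms, "a human in the feedback loop" can look like the hypothetical approval gate sketched below: nothing an AI system drafts goes out until a person signs off or corrects it. The function name and draft text are invented for illustration.

```python
# A hypothetical human-in-the-loop gate: an AI draft is never sent until a
# person approves or corrects it.
from typing import Optional

def human_review_gate(draft: str) -> Optional[str]:
    """Show an AI-generated draft to a reviewer; return it only if approved."""
    print("---- AI DRAFT ----")
    print(draft)
    decision = input("Approve, edit, or reject? [a/e/r]: ").strip().lower()
    if decision == "a":
        return draft
    if decision == "e":
        return input("Enter the corrected version: ")
    return None  # rejected: the draft is never sent

if __name__ == "__main__":
    approved = human_review_gate("Dear homeowner, we accept full responsibility for ...")
    if approved:
        print("Sending:", approved)
    else:
        print("Draft rejected; nothing was sent.")
```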

Be left alone: Any process you infuse with AI today requires a human babysitter with enough energy and expertise to keep it on track. Even beyond the challenge of intelligence, the core systems themselves are not yet stable enough for scaled use across mission-critical systems. ChatGPT, one of the most widely used AI tools, went "temporarily insane" for several hours on Feb. 20.

Most of the use cases of AI that really get senior executives' attention would require the intelligence, trustworthiness, and autonomy that current AI tools do not yet deliver. The most positive effect of the current generation of tools is that individual contributors have been able to augment their abilities and capacity to get the job done. Scaling from individual team members to enterprise-level impacts is something we have to be a bit more patient about.

Do you have extra resources to keep experimenting and testing with bleeding-edge systems? Fantastic! As long as you don't expose those systems to your customers and keep humans in the loop, you'll be prepared to move quickly when the next breakthroughs in AI capabilities arrive.
