Forget the Hyperbole, Here Are Three Ways AI Could Actually Impact Communication

This week I had a chance to catch up on developments in AI. For the uninitiated, I’m talking about the difference between traditional AI that focuses on routine, predetermined tasks, and the newer Generative AI that creates new content based on huge sets of data.

Through the fog of hyperbole, this is a difficult area to get one’s mind around. Even many top technologists don’t fully understand what happens inside the “black box” of a program like ChatGPT.

Carl Bergstrom and Jevin West underscore that we can at least look critically at what goes into and comes out of such programs. It doesn’t take any knowledge of advanced algorithms to know that a chatbot that unexpectedly curses at customers has bad inputs. There’s a deeper set of ongoing concerns about AI, including plagiarism, data privacy, deepfakes that laws are struggling to catch up with, and the potential for partial or incomplete data to amplify racial and gender disparities.

While we need to keep watch on these developments, I also see many positive applications ahead. Here are three key insights for communication skills development alone that are worth tracking.

1) AI could substantially raise and democratize skill development. GPT-3.5 scored around the 10th percentile on the bar exam, while GPT-4 scores around the 90th. GPT-4 is also in the 93rd percentile on SAT reading and writing questions. AI has entered realms where quality communication is most needed as well. By many measures involving medical reasoning and patient conversations, the Articulate Medical Intelligence Explorer (AMIE) showed “greater diagnostic accuracy and superior performance” when compared to primary care physicians.

Most important, Ethan Mollick shares a research finding that AI raised the performance of less skilled participants on a baseline task substantially more than it did for those who were more skilled going into the challenge. Mollick writes,

“Looking at these results, I do not think enough people are considering what it means when a technology raises all workers to the top tiers of performance. It may be like how it used to matter whether miners were good or bad at digging through rock… until the steam shovel was invented and now differences in digging ability do not matter anymore. AI is not quite at that level of change, but skill levelling is going to have a big impact.”

Anyone trying to advance educational equity and inclusion should take note. (From what I’m seeing, AI might also work hand in hand with VR, which stands to make repeated practice in a variety of communication skills more accessible than ever before. More posts coming on this later.)

2) AI will make many cost-prohibitive and time-consuming communication tactics affordable and accessible. Video creation is a huge cost at the top of organizations (think of a marketing department putting out a video with actors, professional editing, and so on). Just today I received a sales email asking if I wanted any video produced, with the explanation that “Most projects range from $7K-20K per project depending on a few different factors (shooting days, script, number of actors, etc).” For many, such a large investment of time and resources puts video out of reach.

Meanwhile, the rest of us use simple video for meetings via Zoom, Microsoft Teams, and the like. I learned from some reps at Synthesia that staff and others haven’t used video much for a whole middle range of activities involving white papers, PDF documents, Excel files, PowerPoints, emails, and other mostly textual communication products.

Generative AI can turn text into video using realistic avatars, localize content in different languages, and take out much of the labor that once went into producing professional videos. When one looks at Robyn Defelice’s overview of how long it takes to develop content for teaching and training alone, it’s clear that these chasms will soon be crossed.

3) We won’t know whether to approach AI as cyborgs, centaurs, or what I’d call “tech-nos” unless organizations pay systematic attention to these differences. Depending on the task, Fabrizio Dell’Acqua and colleagues say, we can approach AI as “cyborgs,” fully integrating it into the work we do. Grammarly embedded in a Word document comes to mind, where AI is part and parcel of the writing task at hand.

The authors further describe “centaur” approaches, where human tasks are separated from AI tasks, as in writing a blog post and then asking AI to create a stand-alone image based on that writing. There’s a “jagged frontier” to Generative AI that makes it great at some tasks but surprisingly terrible at others (e.g., it’s good at writing poems, but they tend to run under 50 words; it can generate ideas, but it’s not very good at some simple math; see the following image via Mollick).

[Image: the “jagged frontier” of AI capabilities, via Mollick]

For the time being, this means we’ll have to get choosy about exactly when and in what ways to use AI. For that, organizations will need to invest in inclusive training, deliberation, and decisions that clarify AI’s uses. To lead through those changes, organizations will need to treat this as an adaptive challenge, not a technical one, involving multiple stakeholders over time. For some tasks, we’ll need to play the role of “tech-nos,” reserving for ourselves the actions that fall outside AI’s abilities. These will likely involve many core communication and leadership skills, such as one-to-one, emotionally and culturally intelligent dialogues.

Yes, there’s a lot of exaggeration, and even a clear market bubble, around AI. So it’s critical to ramp up conversations about AI’s actual uses. To get further down that road, check out “There’s an AI for That,” which lets you search over 12,000 AI apps.