AI writing tools do not (yet) include artificial common sense, accuracy or tact

One pitfall of new technology is that people quickly come to rely on it to perform functions it simply doesn’t do. I recall the spellchecker gaffes of decades ago, when optimistic writers assumed a document was error-free because they’d hit “spell check.” A book author was once very surprised when I found an embarrassing error — see, his spellchecker could flag sequences of letters that were not words, but sometimes a typo produced something that was still, technically, a word.

This respected academic had left the “l” out of the word “public,” resulting in a much more salacious sentence than intended.

Today’s artificial intelligence, or AI, can produce whole novels, so writing a few blogs, articles or marketing descriptions would be an easy lift, right? Well, we’re finding out that it’s still a lift that requires a human spotter.

Humans have a fantastic ear for tone, which can be very subtle. This guide on Grammarly.com goes over four basic types of writing: expository (setting forth facts), descriptive (setting a scene, focusing on the senses to help the reader imagine something), persuasive (trying to influence or sway readers), and narrative (telling a story). Within any marketing or public relations writing, there might be all four of these types, plus a tone that reflects the client’s brand.

Tone is not as simple as jeans versus slacks: checking a box for “casual” or “business” does not cut it. It involves empathy for the audience, knowledge of what messaging is already out there, and the common sense to know adjectives used to sell a new car should not be used to sell a funeral urn.

AI writing fails have been making headlines all year, starting in January when CNET “paused” its use of AI for articles. A Gizmodo story on the debacle pointed out that the AI-created content was “basic financial explainers transparently intended to garner clicks through SEO,” and contained “an alarming number of errors.” In August, the newspaper chain Gannett paused use of a service called LedeAI for writing sports dispatches after some major flubs. Then later that month Microsoft came under fire for its article on 15 must-see attractions in Ottawa — the AI-generated article listed the Ottawa Food Bank third on the list and cheerily recommended that visitors “consider going into it on an empty stomach.” Microsoft issued a statement that the cringe-worthy article was due to a failure in “human review.”

To cut to the chase, no duh.

Human review means editing, rewriting or jettisoning AI failures, which can be (as we have seen) very toxic to a brand. It means fact-checking everything, since AI also seems to be terrible at research and makes up a lot of stuff. Other dangers of AI writing include blandness and unoriginality. Sifting through existing content, by design, will never produce something bold and new. Readers are becoming adept at picking up on this. Six months ago it was difficult to tell if a photo was AI-generated, but now, if you post one, your friends will quickly point out errors in lighting, background angles, or number of fingers and shout, “Booo! This is just AI!”

Perhaps the best summation I read about the notion of AI-generated copy came from a random post (sadly, I could not relocate it to credit the author, but it was most assuredly a human). “If no one could be bothered to write this, why should I be bothered to read it?”

Just remember, trusting technology too far when it comes to writing might save time over the short term, but it could come at the cost of your pubic image — ahem, I mean public image.

Celestia Ward is Imagine’s public relations coordinator and admits to being fully human — but looks forward to someday having at least a few cybernetic parts.

