Playing with machine learning is both fun and inspiring, even for us up here in a small village in Hallingdal, Norway. Let the creativity flow—generate an image of a swarm of glowing monkeys flying over a mountain lodge at night. Fun!
“Give me bullet points for a presentation about sustainability efforts in Flå Municipality”—voilà!
“Write a blog post about performance culture in speed skating or a compelling text about our charming mountain hotel”—it’s quick and seamless!
What have we learned after more than a year of using tools like ChatGPT at work? Can we trust machine-generated text (here, Google’s advanced language model suggests I use the “word” machine-gunned in Norwegian)?
Didn’t I read somewhere that these language models are inaccurate and sometimes lie?
Why does DALL·E generate images where people have six fingers, are missing a foot, or lack an eye?
It might be time to reintroduce critical thinking—a highly human skill.
If we believe a model that generates text or images based on training and probability can deliver communication that genuinely resonates with and engages people, are we being overly optimistic?
Communication is first and foremost about empathy and a deep understanding of your audience. What do you know about your target group? Attitudes, opinions, habits, financial resources, family and job situations, interests, group affiliations, and desires—how do they live? Is it possible to craft our message to be both relevant and engaging?
Unfavorable intelligence is blindly believing that machine-generated (machine-gunned) texts and images can replace human input. Unfavorable intelligence is also a lack of expertise on the client side, where those assessing machine-generated content don’t have the competence to ensure its quality. After all, there are worse things than people with six fingers.
Everything we publish, whether it resonates with our audience or not, affects how we are perceived as the sender: our brand, our reputation, and our business’s opportunities now and in the future.
It’s possible that generative artificial intelligence will one day deliver content based on an interpretation of every conceivable human emotion, broken down to geographical, psychological, and sociological micro-levels, but we’re not there yet. We’re nowhere close.
I’m rooting for critical thinking. For instance, you could start with the term artificial intelligence—could this be a marketing buzzword designed to create hype and attract investors? It certainly sells better than boring terms like machine learning and statistical language models.
Yes, I’m an intrigued heavy user of these tools myself. The point is to try to understand what they can and cannot deliver. The point is that all communication starts and ends with humans. For now, I’m against the uncritical use of machine-gunned content.