"Future Influencer" is an unconventional sci-fi adventure where humanity and AI join forces to paint images of the future. Each episode of this light-hearted journey will tickle your imagination and offer a much-needed dose of fun in our ever-evolving digital age. So, why not come along for the ride?

For humans, pronunciation was a mix of linguistic knowledge and cultural nuance. The subtle trill of an ‘r’ in Spanish or the soft ‘th’ sound in English was as much an art as a science. A mother teaching her child how to enunciate words was a delicate dance of mouth movement, air pressure, and tongue placement.

The algorithms, being purely digital, lacked such physical articulations. Their world was binary – zeros and ones, devoid of accents or dialects. They didn’t “speak” in the traditional sense. However, with the rise of virtual assistants and artificial conversationalists, the need for machines to sound ‘human’ grew exponentially.

Yet, with time and immense data crunching, improvements emerged. The algorithms got better, absorbing feedback and tweaking their outputs. They began to differentiate between homographs, to understand regional nuances, and even to grasp the subtle inflections that gave words emotion.
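The homograph problem the story alludes to is concrete enough to sketch: a speech system must pick a pronunciation from context, such as part of speech. Here is a toy illustration in Python; the lookup table, tags, and IPA strings are illustrative assumptions, not data from any real text-to-speech system.

```python
# Toy homograph disambiguation: choose a pronunciation for a word
# based on a part-of-speech tag. All entries below are illustrative.

HOMOGRAPHS = {
    ("lead", "NOUN"): "/lɛd/",       # the metal
    ("lead", "VERB"): "/liːd/",      # to guide
    ("read", "PAST"): "/rɛd/",       # "she read the book"
    ("read", "PRESENT"): "/riːd/",   # "I read every day"
}

def pronounce(word: str, tag: str) -> str:
    """Return an IPA string for a (word, tag) pair, if known."""
    return HOMOGRAPHS.get((word.lower(), tag), f"<unknown: {word}>")

if __name__ == "__main__":
    print(pronounce("lead", "VERB"))  # the guiding sense
    print(pronounce("lead", "NOUN"))  # the metal
```

Real systems replace the hand-written table with statistical models trained on tagged speech data, which is where the "immense data crunching" comes in.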

Pronunciation was no longer just about the correct sound. It was about context, emotion, and sometimes, even humor. As the algorithms improved, the line between machine-spoken and human-spoken language began to blur, leading many to ponder: In the quest for perfect pronunciation, were we teaching machines to speak, or were they teaching us the true essence of communication?

Use the comments to complete the following: “The algorithms had such perseverance, they made bloody good ____.”

A mechatronic hi-fi sculpture that is perfecting its Swedish pronunciation, broadcasting equipment, Telecom Australia building, 1980s


You can join the conversation on Twitter or Instagram

Become a patron on Patreon to get early and behind-the-scenes access along with email notifications for each new post.