

The Other News Round-Up: What a bot I got

Marelise van der Merwe and Daily Maverick grew up together, so her past life increasingly resembles a speck in the rearview mirror. She vaguely recalls writing, editing, teaching and researching, before joining the Daily Maverick team as Production Editor. She spent a few years keeping vampire hours in order to bring you each shiny new edition (you're welcome) before venturing into the daylight to write features. She still blinks in the sunlight.

Each week, Daily Maverick brings you an outside-the-box round-up of some of the world’s odder happenings. This week: the English you don’t understand.

Call me a little late to the party, but I was tickled to death when I read that Facebook had shut down its recent chatbot experiment when it caught the two bots speaking a language they understood, but nobody else did.

Bob and Alice, the friendly bots, were instructed to negotiate with each other over some balls. Negotiations were successful, which places their communication skills a notch above some long-married couples I know, but the catch was that nobody understood what on earth they were going on about. A sample dialogue:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

“Our interest was having bots who could talk to people,” Facebook’s Mike Lewis told FastCoDesign.

So, no go then.

Interestingly, the babble was understood to be a kind of shorthand, where the repetition of certain phrases was taken to indicate how many balls Bob or Alice wanted – useful to them, but not really to humans. (Imagine trying to have a party: Halloween would require repeating “October” 31 times to each guest; never mind the requests for sweets. You’d never get the neighbour’s kids off your doorstep.)

Facebook and its acolytes were quick to emphasise that they did not shut down the experiment because they were afraid they had created Frankenstein’s monster – a potential generation of chatbots that could plan a Snapchat revolution or sneakily write Facebook suck to me to me to me to me behind its back and post it all over an unsuspecting Mark Zuckerberg’s wall in the night – but because the experiment hadn’t achieved what they wanted it to do.

Although why in heaven’s name one would want to talk to a robot on social media is also beyond me. It’s hard enough work having to remember the names of all your 1987 classmates’ kittens.

Maybe I’m just misanthropic. Some people seem to like that sort of thing. The iGod bot is still around and even seems to have hatched a few competitors, for those who wish to repent, confess or ask questions about Creation in one easy click. There’s also the Oracle bot for those who want to ask about their future. I tried it; our conversation went like this:

Me: What lies in my future?

Oracle bot: I see pleasure in your future!

Me: What kind?

Oracle: I’m sorry. I really don’t know.

Me: Of course you don’t. You’re a bot.

Oracle: No.

Me: What do you mean you’re not a bot? You are a bot.

Oracle: I’m sorry. I do not have the answers you seek.

Me: That’s because you’re a bot.

Oracle: Maybe.

I have to say, the Oracle bot wasn’t entirely unlike a real oracle, in that it issued calm, unreadable and ultimately quite annoying nothings in my general direction, which – now that I think about it – makes me wonder if call centre operators, bank tellers, telemarketers and the Telkom lady are actually spiritual guides we have been overlooking all this time. (Nobody can say they aren’t walking through fire every day, right?) I’m almost positive I have had a similar conversation recently with a nice chap trying to sell me insurance of some sort, who had a similarly calm but frustrating gift for dodging the simplest question with vaguely unsettling platitudes.

Insurance salesman: Not everybody is eligible for a secure future. You, ma’am, are our chosen one. May I continue?

Me: Are you trying to sell me something?

Salesman: I perceive you have not planned adequately for your future. I am here to help.

Me: But are you trying to sell me something?

Salesman: The future is uncertain. Yet you appear certain you have everything you need. I believe you do not.

et cetera.

But back to the point: if even humans are already talking past each other in the name of service, it doesn’t take much of a leap to figure out the purpose of the future social media bots. Bots are already used to offer “support” (dear me) to customers by various entities; the idea that they will boost engagement on social media simply underlines that we – the users – are not the customers but the product.

So no, I’m not worried that AI is going to take over the world and start plotting against us. I’m a lot more worried that we are plotting against ourselves. It’s already hard enough for humans to understand each other. If we have to start telling our problems to a bot, I have just one thing to say: Balls have zero to me. I can everything else. DM

