WhatsApp has run into an embarrassing problem: when you use the WhatsApp sticker AI to generate new stickers, it adds guns to the child characters when you search for Palestinian children.
It is problematic and Meta also recognizes that: it promises The Guardian to take action quickly to resolve this problem. By the way, if you search for Israel and children, this never happens. Now you can argue about that, but what is actually going on here is that artificial intelligence really needs to be closely monitored: always. And although it would be best if there were more human moderators, there are also IT applications to prevent this kind of trouble.
You would think that a script could be written in which frequently used searches or searches that could be controversial or offensive are generated at a rapid pace. You can then feed that to the AI to see what it comes up with, and you can then release other AI to it that can check what exactly is shown in the image, such as guns… The strange thing about this is that a lot AI applications would never make a gun, that has often already been disabled.
AI needs control
So one AI is not the same as the other and that is exciting. This also means that you will have to deal with one person differently than with another. But it is clear that rules are needed to ensure that we can use it safely and soundly, especially because it is now seeping into everything. Even something as innocent as making stickers for a chat app can cause problems.
If you’ve never heard of WhatsApp’s sticker AI, that’s true. It is a new feature that is still somewhat in its infancy. It has been rolling out for a month now. The problem is that Meta doesn’t seem to be proactive enough. It says that it ‘makes the capabilities better as they evolve and more people share their feedback’: that’s not enough, is it?
Does it always have to be done right away?
Many more software items need to be built in to ensure that people have to flag things as little as possible. Now the damage has been done, now those terrible child soldier stickers exist. Anyway, it is already clear that Meta is sometimes not so careful with testing and checking versus quickly releasing and scoring: Instagram’s automatic translation system turns some Arabic words into ‘terrorist’, while they do not mean that…
Just because it’s possible doesn’t mean we should do it. Of course, when we use AI, we all want it to work as quickly as possible, but perhaps we should protect ourselves from ourselves and the AI, and still allow more time to investigate things better. Especially something as trivial as a sticker function, although this is a good example of how it is even more important with larger, more important artificial intelligence.