AI language models, like GPT-3 or Google's LaMDA, are producing remarkably realistic dialogue that mimics human conversation -- and raising important ethical issues about bias, hate speech, and the amplification of misinformation.
Now Meta/Facebook is jumping into the game with an AI chatbot ("BlenderBot") that has been released "for research only" - but one can imagine the path from research to commercialization is a short one.
I've tested the model. Asking similar questions across sessions produces widely varied responses -- which is typical for AI -- but rather than elevate the conversation, FB seems content with a flippant, dumbed-down machine-human interaction:
e.g. $12 million in FB lobbying -- "will buy some influence!! Lol!!! :)"
Is influence buying a laughing matter?
e.g. -- "around teh world"...
These are just a couple of superficial, non-substantive interactions captured in a single screenshot.
Granted, it is still early days for BlenderBot, but given Facebook's track record, its entry into conversational AI is a worrisome development. Over the years, Facebook hasn't shown maturity or vision in product and corporate development - it has mainly jumped on trends. There's an opportunity to layer a baseline level of intelligence onto their AI models (i.e. start with something as simple as spell check) rather than amplifying mistakes that dumb down America and the world - but one can only wonder if they are up to the challenge.