On 19 Feb 2023, Nightfox said the following...
It's worse than that. A form of terrorism in the future will be poisoning the data vacuumed up to feed these AIs.
That has been done before. In 2016, Microsoft released an AI chatbot named Tay that would post on Twitter, and within a day, people polluted the data it learned from and made the chatbot post racist tweets.
There was an episode of the Endless Thread podcast about this back in December:
"...we bring you the cautionary tale about Tay, a Microsoft AI chatbot that has lived on in infamy. Tay was originally modeled to be the bot-girl-next-door. But after only sixteen hours on Twitter, Tay was shut down for regurgitating white supremacist, racist and sexist talking points online.
Tay's short-lived run on the internet illuminated ethical issues in tech culture. In this episode of Good Bot, Bad Bot, we uncover who gets a say in what we build, how developers build it, and who is to blame when things take a dark turn."
https://www.wbur.org/endlessthread/2022/12/02/bots-tay-microsoft
Jay
... If you rely completely on protocol, you can become a robot
--- Mystic BBS v1.12 A49 2023/01/27 (Linux/64)
* Origin: Northern Realms | bbs.nrbbs.net | 289-424-5180 (21:3/110)