

It’s not going to stop spammers or foreign disinformation campaigns. Holding companies liable for anything their AI can generate, without letting them offer it as a no-liability, no-guarantees tool, will just push them to censor and lobotomize their models even harder to make sure they’re incapable of making false claims, even if that renders them semi-useless. That said, I do think they should be required to make it abundantly clear that their language models can and will lie and make things up.
The Copilot app says, “Copilot uses AI. Check for mistakes.” I think they could be clearer, but at least they didn’t bury it in an EULA.