Regardless of whether this happened or not, would training Bard on ChatGPT output be good or bad for Bard's product quality? I imagine there's a risk of AIs recursively reinforcing bad data in their models. This problem seems unavoidable as more of the web becomes AI-generated content and spam.
This is my biggest fear in the space (aside from potential job displacement and the political outcomes): AI basically eating its own dog food and regurgitating its already bad information. It could go south pretty quickly, and like a contagion, it can't easily be removed from the system.
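The feedback loop being described can be sketched with a toy simulation (my own illustration, not anything from Bard or ChatGPT): fit a trivial "model" (a mean and standard deviation) to data, then train each new generation only on samples drawn from the previous generation's model. Small sampling errors compound, and the fitted distribution drifts away from the original and tends to narrow over generations, which is the kind of self-reinforcing degradation the comments above worry about.

```python
import random
import statistics

random.seed(0)

# Generation 0: the "real" data distribution, a standard Gaussian.
mu, sigma = 0.0, 1.0

# Each generation trains only on a small sample of the previous
# generation's output. The fit inherits and compounds sampling error,
# so the estimated distribution drifts and tends to collapse.
for gen in range(1, 21):
    sample = [random.gauss(mu, sigma) for _ in range(50)]
    mu = statistics.fmean(sample)
    sigma = statistics.stdev(sample)
    print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
```

Nothing here ever sees the original data after generation 0, so errors have no way to correct themselves; that's the "contagion" quality: once the training pool is polluted with model output, the drift is baked into every later generation.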