Our VP of ML used this image in a presentation about the problem of truthfulness in generative AI. I think it sums up a model's notion of truth perfectly.
The prompt used to create this image was "Salmon in a river."
An ML model can only derive truth from the patterns in the data it is given to learn from. In this example, most of its training images of salmon were product shots of processed salmon prepared for human consumption, not of the fish in its natural habitat.
Are we the same? We might be more efficient, but do we derive truth in a fundamentally different way? We have all met people whose truth on a certain topic rests on surprisingly few primary facts.
What is beautiful about working in AI is being confronted, head-on, with questions about our own perceived intelligence. It is at once marvellous and humbling.