I have a model of everything. Everything I am, my understanding of the world, it all fits together like a web. New ideas fit in by their relationship to what I already know - and when I'm missing the nodes to connect a new idea, I can't accept it.
Same, and I would add the clarification that I have a model for when and why people lie, tell the truth, or sincerely make false statements (mistake, having been lied to themselves, changed circumstances, etc.).
So that information comes in through a filter of the subject matter, the speaker, and my model of the speaker's own expertise and motivations, all of those factors mixed together.
As an example, let's say my friend tells me that there's a new Chinese restaurant in town that's really good. I have to ask myself whether the friend's taste in Chinese restaurants is reliable (and maybe I build that model from proxies, like the friend's taste in restaurants in general, and how similar those tastes are to my own). But if it turns out that my friend is actually taking money to promote that restaurant, then the credibility of that recommendation plummets.
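To make that concrete, here's a toy Python sketch of how such a credibility filter might combine factors. Every factor name and weight here is hypothetical - the point is only that a recommendation's weight is a product of independent trust factors, and that a conflict of interest can collapse it regardless of the others.

```python
def credibility(topic_expertise: float,
                taste_overlap: float,
                conflict_of_interest: bool) -> float:
    """Combine trust factors into a single weight in [0, 1].

    topic_expertise:      how reliable the speaker is on this subject
    taste_overlap:        how well their tastes track mine (a proxy)
    conflict_of_interest: e.g. they're being paid to promote it
    """
    weight = topic_expertise * taste_overlap
    if conflict_of_interest:
        # A hidden motive discounts the recommendation heavily,
        # no matter how good the speaker's taste is.
        weight *= 0.1
    return weight

# Friend with good general taste, similar to mine...
print(credibility(topic_expertise=0.8, taste_overlap=0.9,
                  conflict_of_interest=False))  # 0.72 - trustworthy
# ...but secretly paid to promote the restaurant:
print(credibility(topic_expertise=0.8, taste_overlap=0.9,
                  conflict_of_interest=True))   # 0.072 - plummets
```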