Secondary question on Dunbar: What is your stance on the 5-15-50-150 "Dunbar's Circles"? Are the layers an effective identification of the size of our circle of empathy, the size of an effective team, and so on? Would the quadratic scaling theory apply to this layering as well?
Very interesting to think through. One note, though: there's a grammatical error in "Nature and human experience answers in the affirmative: we gossip." The "and" makes it a compound subject, which requires the plural "answer" rather than the singular "answers."
Oops. Thanks, David.
It's nice to see you stretch your legs on this topic in a long-form format - I see you hint at it a lot on Twitter, but it's not close enough to my wheelhouse for me to follow the hints without this kind of more detailed exploration :)
There's a bit I don't follow, though - "Social beliefs leak far too much information into our (presumptively) asocial ones."
I think I'm with you through most of that section. I've definitely caught myself reading a tweet, thinking something like "I *think* I agree with this, but I can't really tell what they mean" and, just as you describe, clicking over to their profile to figure out something approximately like "who is this person and what other things do they believe" to fill in the gaps around what they probably meant in the first message.
What would be an example of a social belief leaking information into an asocial one as a result of experiences like that?
Thanks, Steve! So this is one of the topics I'm saving for a future post, which may be why it's not well explained. But, imagine you have two models: one for social information and one for knowledge. If tweets have mostly social context and minimal/noisy information for asocial judgment, your social model does most of the work (so long as the expression isn't super far from your expectations). But you update both models with each experience. The better your social model gets, the less the other one matters.
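A toy sketch of that two-model picture (my own illustration, not anything from the post - the noise levels, the running-variance update, and the inverse-variance mixing are all assumptions): each "tweet" carries a low-noise social cue and a high-noise asocial one, both models track their own error variance, and the combined judgment weights each model by its reliability. As the social model's error shrinks, it ends up doing nearly all the work.

```python
import random

def combined_weight(social_var, knowledge_var):
    """Weight given to the social model under inverse-variance mixing."""
    return (1 / social_var) / (1 / social_var + 1 / knowledge_var)

random.seed(0)
social_var, knowledge_var = 1.0, 1.0  # start with no preference
for _ in range(1000):
    truth = random.gauss(0, 1)
    social_obs = truth + random.gauss(0, 0.3)     # social cues: low noise
    knowledge_obs = truth + random.gauss(0, 2.0)  # asocial content: noisy
    # update each model's running error variance (simple exponential average)
    social_var = 0.99 * social_var + 0.01 * (social_obs - truth) ** 2
    knowledge_var = 0.99 * knowledge_var + 0.01 * (knowledge_obs - truth) ** 2

# with equally reliable models the split is 50/50; here the social
# model's low noise pushes its weight close to 1
print(f"weight on social model: {combined_weight(social_var, knowledge_var):.2f}")
```

The point of the sketch is just the last line: the weights aren't chosen, they fall out of each model's track record, which is one way to read "the better your social model gets, the less the other one matters."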
I think the basic example of this is social proof in general. If someone big in one field/area/arena says something outside their field/area/arena, we take that to mean way more than it should. And you update your expectations based on what is, essentially, their reputation. (And, on a medium like Twitter, reputation is rarely staked.)
Ah, and then in the Twitter scenario, personal reputation (which may be irrelevant to an informed opinion, but would at least be high SNR) is substituted for an even noisier social metric.
Looking forward to the rest of the series, then :)