Ambient Intimacy is Intimacy


Ambient intimacy is seeing Twitter’s “What’s happening?” prompt and answering, “How was your day?” instead. Given sufficiently personal expressions, what accretes is “real” intimacy — just digitally transmitted instead of verbally expressed over a kitchen sink. And even in the case of one-sided communication, granting the emitter your attention gives them access to the same cognitive machinery.

"I've never met them but I know them."

More often than not, you're not wrong.

I've met a lot of people through Twitter. Generally, they match my pre-IRL expectations. The ones who don’t match tend to use Twitter as if it were LinkedIn, but without the suit jackets: they dispatch TwitterCard-sealed news from their particular domain, and they share little personal information. The ones I do recognize upon first bio-meeting? They aren’t using Twitter as a social bookmarking service — they’re being social through self-expression.

“I’ve never met them but I know them.”

Admittedly, Twitter is a lot of different things to different people. And, it’s often a confusing place because limited context makes it hard to distinguish what mode someone is in when they tweet. But, oddly enough, things get easier with ambient intimacy. When you have a personal model of someone in addition to an intellectual one, you see more of them. That doesn’t mean you won’t be wrong. But it does make the context incomparably richer — and, social.

That’s the real reason I care about anti-abuse, generally, and community boundary enforcement, specifically. Hat-tip to J. Corwin for introducing the term to me, but we live in a noosphere.

There are lots of similar words used to say the same thing about our increasingly hyper-connected information environment, but I like this one because, to me, it conveys how new and weird things are. We’re only at the beginning of what’s even possible, let alone what’s necessary.

And I think social networks designing for ambient intimacy is a necessity. The distinction between “IRL” and “online” continues to erode. That richer model of what someone is saying that ambient intimacy affords? It’s error-correction repurposed from the former environment and adapted for the latter one. It helps us correctly receive what someone really means to say. Even if what they’re saying is silly, that’s useful.


Postscript: I’m not working on reflock — what was meant to be my Twitter anti-abuse tool — anymore. There are a lot of reasons for that, but it looks like Tracy Chou’s app mirrors a lot of what I was implementing. I haven’t used it, so this isn’t an endorsement, but please sign up for the waitlist if you have abuse problems. I hope it works well because I want you to share yourself freely!

The Real Supernode Problem


At the time of writing, Barack Obama has over 110,000,000 Twitter followers.

Barack Obama is a supernode.

[Image caption: A digraph with 199 nodes dependent on 1 central one, which I made so this post would have a media card for embedding but which, by happy accident, created some pixelated artifacts that resonate with the spirit of this post.]

The Technical Problem

As a service specification, assume:

  1. Subscribers (i.e. followers) should observe an Obama tweet quickly (low latency).

  2. No subscriber has privileged access; all subscribers share the same expected delivery time (fairness).

Twitter is a world-scale, real-time, distributed (albeit centralized) platform, and its developers spend considerable resources engineering low-latency bit plumbing. And, in the practical terms of end-user experience, the distance in time between action and feedback matters.

But, consider the challenge of materializing Obama’s tweet in subscribers’ feeds:

  • If you manage an on-disk circular queue of tweet IDs for each user timeline and punt the full content to lookup requests against caching infrastructure, you’ll have to fan out a 64-bit snowflake (tweet ID) to 110,000,000 disk or memory (or both) locations. That’s nearly a gigabyte of instantaneous writes.

  • If you were to send the full tweet instantly to each of N connected, subscribing clients, and assumed a delivered tweet size of 296 bytes — 280 characters plus two 8-byte integers for the snowflake and user ID — you’re writing (296 × N) bytes of payload alone to your network wire. With a million users, that’s about 282 MiB, ignoring network overhead.
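Both bullets are easy to sanity-check with a back-of-envelope script; every figure below comes from the estimates above, and Python is used only for the arithmetic:

```python
# Back-of-envelope costs for fanning out a single supernode tweet.

FOLLOWERS = 110_000_000     # Obama's follower count, per the text
ID_BYTES = 8                # 64-bit snowflake tweet ID
USER_ID_BYTES = 8           # 64-bit user ID
TEXT_BYTES = 280            # 280 characters, assuming 1 byte each

# Strategy 1: write just the tweet ID into every follower's timeline queue.
id_fanout_bytes = FOLLOWERS * ID_BYTES
print(f"ID fan-out: {id_fanout_bytes / 1e9:.2f} GB")        # 0.88 GB

# Strategy 2: push the full 296-byte tweet to 1,000,000 connected clients.
payload_bytes = TEXT_BYTES + ID_BYTES + USER_ID_BYTES       # 296 bytes
push_bytes = payload_bytes * 1_000_000
print(f"Full-tweet push: {push_bytes / 2**20:.0f} MiB")     # 282 MiB
```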

These are extremely naive simplifications. There are much smarter ways of doing things that account for social network dynamics and use reasonable heuristics.[1] But, it should be clear that the technical problem is not trivial, especially once you recall that other users are also active at any given moment.
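For a flavor of those smarter ways: one common heuristic, roughly in the spirit of the Feeding Frenzy paper, is a push/pull hybrid that materializes ordinary accounts’ tweets into follower timelines at write time but fetches supernode tweets at read time. This in-memory sketch is mine, not Twitter’s — the class, threshold, and names are invented, and it assumes accounts don’t cross the threshold mid-stream:

```python
from collections import defaultdict

class Timeline:
    """Hybrid push/pull timelines: fan out writes for ordinary accounts,
    merge supernode tweets in at read time."""

    def __init__(self, threshold=100_000):
        self.threshold = threshold            # hypothetical supernode cutoff
        self.followers = defaultdict(set)     # author -> followers
        self.following = defaultdict(set)     # user -> authors they follow
        self.inbox = defaultdict(list)        # user -> pushed (seq, author, text)
        self.outbox = defaultdict(list)       # author -> own (seq, author, text)
        self.seq = 0

    def follow(self, user, author):
        self.followers[author].add(user)
        self.following[user].add(author)

    def tweet(self, author, text):
        self.seq += 1
        entry = (self.seq, author, text)
        self.outbox[author].append(entry)
        if len(self.followers[author]) < self.threshold:
            # Ordinary account: push (materialize) into each follower's inbox.
            for user in self.followers[author]:
                self.inbox[user].append(entry)
        # Supernode: skip fan-out; followers pull from the outbox at read time.

    def read(self, user):
        merged = list(self.inbox[user])
        for author in self.following[user]:
            if len(self.followers[author]) >= self.threshold:
                merged.extend(self.outbox[author])  # pull and merge
        return [text for _, _, text in sorted(merged)]
```

The write cost for a supernode tweet drops to O(1), at the price of extra work on every follower’s read path — which is precisely the trade the gigabyte-of-writes bullet above motivates.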

As for (2) — well, realistically, it’s an intractable specification. Assume Obama tweeted from his Kalorama residence while connected to the nearest data center in Virginia.[2] My friends in Maryland will probably receive his tweet before me. In reality, packets traverse complex routes before arriving at their digital destinations. But, it’s reasonable to simplify by saying whoever is closer as the photon flies gets their 280 characters first.

There are technical means of mitigation (albeit with wild uncertainty and impressive computational expense). If you had a map that associated every subscriber with a distribution of their read latencies conditioned by datacenter as well as one for inter-datacenter latencies, you could use both to randomly select a distribution path for each tweet such that the shared expectation converges on a single value for all subscribers.[3]
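Ignoring the distributions entirely, the toy version of that idea equalizes only the expectations: pad each send with an artificial delay so every subscriber’s expected arrival time matches the slowest path. A sketch with made-up millisecond latencies:

```python
def equalizing_delays(expected_latency_ms):
    """Return an artificial send delay per subscriber such that
    delay + expected latency is the same for everyone."""
    slowest = max(expected_latency_ms.values())
    return {user: slowest - ms for user, ms in expected_latency_ms.items()}

# Hypothetical expected latencies from the Virginia data center.
latencies = {"maryland": 15, "dc": 12, "west_coast": 70}
delays = equalizing_delays(latencies)
# Every subscriber's expected arrival is now send_time + 70 ms.
```

Note what this buys: equal expected arrival, not equal arrival — the variance (and the arbitrage) survives.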

Alternatively, you could discard (2) by asserting: sub-second arrival-time heterogeneity and the associated information arbitrage opportunities (and risks) are not the platform’s concern — which, I think, is perfectly reasonable.

Twitter isn’t an enterprise-grade messaging queue. It’s a social networking service.

The Social Issue

Take a glance at Twitter’s engineering blog and you’ll quickly discover that Twitter cares deeply about (1). I have no idea if they care about (2) — I just made it up. It’s reasonable-sounding only if you’re thinking in the wrong context, or in too many of them. Fairness is a laudable concern, and entities like high-frequency traders do seem to systematically exploit technicalities in parasitic ways. But complexity is expensive, and the boundary separating the problems you can attend to from the fantasy land of those you wish you could is much nearer than you think. If you want an aphorism: pick your battles. If you want a philosophy: do one thing, and do it well.

So, what is Twitter’s one thing?

Simple: Acting in accordance with its fiduciary duty to investors.

This isn’t meant to be a provocative statement. Given that Twitter is a corporation, it’s a legally-enshrined obligation. And, as a publicly-traded company, there are even more commitments attached. Of course, Twitter does have to satisfy some demand, lest its users go elsewhere. Setting aside manufactured demand, you can simplify Twitter as a multi-sided market for attention:

  • advertisers buy it;

  • twitter sells it;

  • users offer it in exchange for that delicious content.

Certainly, there is justifiable cause for concern with adtech. And, as someone who seems to be perpetually aware of squandered attention yet capable of resisting anything but temptation, I have serious issues with attention allocation under this particular business model (as implemented). But in stark terms, I don’t think Twitter could have existed in any other form, historically speaking. And, if you follow my ExTrEmElY oNlInE life, it’s obvious that I get a lot out of it.

So, what’s my problem? Well, you see dear reader, I’m a true believer.

I think the reason for twitter’s existence, specifically, and social networking services, generally, should and can be removing the constraints of time and space from human social interactions! To facilitate the discovery of ideas, people, and connections that otherwise might not be possible! To expand the set of what we may perceive!

This may be a romanticized view, but I’m a romantic at heart and this is my newsletter that you’re subscribing to, so I assume I’ll be forgiven any (over-)indulgence here. But what does this have to do with supernodes? Well, at least for me, they’re one of the clearest examples of engineers pursuing (brilliant) technical solutions to mischaracterized problems, in compliance with business-model demands while ignoring the social ones. That is, allowing supernodes is a problem by itself because,

SUPERNODES RESTRICT OUR SOCIAL ATTENTION.

This isn’t to say that I believe what Obama has to say doesn’t matter or that it doesn’t warrant my attention. I just don’t think it’s reasonable to computationally-encode and grant anyone that level of write access to our collective attention on a social medium. That’s not a social interaction, it merely simulates one. And in the confusion, all manner of negative consequences manifest.

Admittedly, even with technically-imposed limits — say, a 30,000-follower maximum capacity — I’d still be aware of Obama’s tweets. As the 44th President of the United States, what he says is culturally and politically relevant, and it would traverse the social graph quite readily. But the difference between percolation and direct distribution is vast — or, at least, I believe it is. I want a social medium that facilitates social interactions and discovery, rather than one that further entrenches the pathological hierarchy users initially meant (at least, I hope) to escape.
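Mechanically, such a cap would be the easy part; a hypothetical sketch (the 30,000 figure is just the number floated above, and the function is mine):

```python
MAX_FOLLOWERS = 30_000   # the hypothetical cap from the text

def try_follow(followers: dict, user: str, author: str) -> bool:
    """Accept a follow only under the cap; reach beyond it has to happen
    by percolation (retweets hopping the graph), not direct delivery."""
    audience = followers.setdefault(author, set())
    if len(audience) >= MAX_FOLLOWERS:
        return False
    audience.add(user)
    return True
```

Everything interesting, of course, lives in the social consequences, not in the gatekeeping code.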

TL;DR?

Fuck supernodes. We don’t need them. Social tools should design against them.[4]


  1. For example, see: Feeding Frenzy: Selectively Materializing Users’ Event Feeds.

  2. I don’t actually know where twitter’s datacenters are located.

  3. This convoluted method is, hilariously, an absurd and still-wrong simplification.

  4. For all I know, Twitter does try to attenuate supernode signal strength. I’d applaud them if they do. But the thing is: I can’t inspect their code, and unless Bluesky is successful, it’s still a proprietary protocol that makes delivery opaque. I want agency and transparency, and Twitter limits both in this context. In any case, I absolutely hate footnoting the concluding sentence, but it’s the right thing to do.

May I have your attention, please

Allow me to introduce myself

My computer starts each (west coast) day by automatically tweeting,

“…a wealth of information creates a poverty of attention…” — Herbert Simon

The quote itself comes from a talk Simon gave in 1969 titled Designing Organizations for an Information-Rich World. Having completed a Ph.D. in Computational Social Science (CSS), I found Simon unavoidable. And rightly so! But — as someone whose attention and interest seem to experience an almost infinite regress absent careful environmental shaping — this quote became a mixture of mantra and mission for me. The first time I read it was the first time I seriously asked, "well...are we designing for an information rich world?"

I don’t think we are.

We certainly live in an information-rich world. But I think the operationalized design principle is quite different. While there is a(n often realized) danger in hyperbole, I don't think it's hyperbolic to state plainly: attention is both for sale and frequently sold.

This isn't a new phenomenon.

The competition for attention itself is part of the Darwinian struggle. The existence of markets for attention — formal or otherwise — is old. That's not inherently bad. And, I'm not sure how much the proportion of our attention actively allocated by markets has changed over time. But, I also think this line of reasoning misses a critical point. If "software is eating the world," why can't it also better manage my attention?

I don't have an answer yet.

My knee-jerk reaction is that it's easier to sell freely-given attention than to demand money for a service that allocates it in a user-controlled way. In the context of social technologies, this is especially true given the logic of Metcalfe's law. Better design principles may exist, but evolutionary pressures on corporate entities select for the ones that forge exponential moats the fastest. Alternatively, maybe we do have great tools for attention allocation, it's just that "work expands so as to fill the time available for its completion." In either case, I want to ask: Is this the best we can do?

I don't think so.

I'd like to find out.

Knowledge discovery and curation is an inherently social activity. Even if you could resign yourself to a (boring) life of quiet isolation, you wouldn't want to! From "don't eat those red berries," to "check out this fascinating research paper," we is greater than the sum of the I parts. But there is room for computer-mediated improvements, and I intend to contribute to that effort.

Perhaps this means I'm building and hacking on niche products. Maybe my struggle to remain focused really does reflect some inherent deficit which demands specialized design. But, I doubt it. "Information overload" is a problem so common that it requires no elaboration for clear articulation. If you feel similarly afflicted, subscribe to this newsletter, where I'll share what I find along the way.
