The session's most alarming empirical claim - and the one that produced the most visible reaction from the audience - concerned attention spans. Doomscrolling, Rishab said, has become so habitual that people often fail to watch even a full reel to completion, with genuine attention spans shrinking to around nine seconds.
He was not catastrophizing. He was describing a mechanism. The algorithm is designed to maximize time on the platform, not to maximize the quality of the experience or the development of the person using it. It achieves this by giving each piece of content exactly enough time to trigger a response before offering the next stimulus. The result, accumulated across millions of users and billions of interactions, is a systematic training of the brain toward shorter and shorter loops of attention - a training that is, at this point, almost invisible because it has been normalized into ordinary digital behavior.
His personal counter-practice: deliberately liking content from opposing political viewpoints to confuse the algorithm and force exposure to perspectives that his default preferences would filter out. He described feeling, when he first started doing this, 'very smart.' The humour was real, but so was the point: the algorithm is shaping what you think you know, and it requires active, conscious interference to prevent it from becoming the primary curator of your worldview.
On AI specifically, he made a distinction that the audience found useful: AI tools like ChatGPT can be genuinely helpful in learning - as thinking partners, as research starting points, and as tools for generating options to evaluate. What they should not be is a substitute for thinking: using AI minutes before an exam to crack format-based answers is not learning. It is a way of performing learning while avoiding the cognitive work that learning requires. The prompt you write, he suggested, is itself a form of thinking that deserves more careful attention.