It’s well known by now that algorithms pervade much of our digital life, be it the feeds on YouTube and Facebook or Google Photos identifying your faces. I also wrote earlier on why algorithms came to be so important, and Eugene Wei’s article is an excellent sequel to that on their current utility and dominance (using TikTok as an example).
However, I realised that a lot of the concepts used in algorithms hold true outside the digital realm as well.
Exploration and exploitation
If you had the choice of going to a restaurant, would you choose one that you have been to or explore a new one?
If you were to listen to music, is it mostly stuff that you have already heard, or new but similar songs?
If you have done a project/job, do you go deeper or choose something else to do?
If you had the choice to do something on a weekend, would you do the things you usually do or try out something new? Meet the friends you meet most often, or spend time with someone you haven’t seen in a long while?
All of these are ultimately exploitation vs exploration trade-offs. They pervade our life everywhere, and hence are also visible in the algorithms we interact with. Does Swiggy recommend restaurants you have ordered from before over new ones? Does Spotify or YouTube lean towards music you have already heard versus new tracks?
Most people prefer a higher tilt towards exploitation than exploration, mainly because exploration requires more energy, and humans implicitly minimize energy. A second advantage of this tilt is compounding: repeated exposure builds familiarity with a topic. The disadvantage, of course, is that you don’t experience different things.
The answer becomes very individualistic and contextual for larger decisions, like switching jobs, careers, or cities.
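For the algorithmically inclined, the classic formalisation of this trade-off is the multi-armed bandit. Here is a minimal epsilon-greedy sketch in Python; the restaurant names and ratings are entirely made up for illustration:

```python
import random

def pick_restaurant(ratings, epsilon=0.1):
    """Epsilon-greedy: with probability epsilon explore a random option,
    otherwise exploit the best-known one."""
    if random.random() < epsilon:
        return random.choice(list(ratings))  # explore: try anything
    return max(ratings, key=ratings.get)     # exploit: go with the favourite

# Running average experience with each place (made-up numbers out of 10).
ratings = {"old favourite": 8.5, "decent thali place": 7.0, "new ramen bar": 6.5}

for _ in range(5):
    print(pick_restaurant(ratings, epsilon=0.2))
```

A higher epsilon means more exploration; going by the above, most of us effectively run with a rather low one.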
Weight tuning
If the goal of a YouTube recommendation system is that the user clicks on a video and watches at least X seconds of it, what parameters do you consider for that goal (e.g. previous watch history, user demographics, similar users’ preferences), and how do you weight them?
The life equivalent of this question shows up in choosing a job, a city, a team. With all the parameters that might matter in choosing a job - role, boss, company, culture, compensation, geography - how do you place weights on them?
The problem becomes doubly hard in such cases, as there are no real numbers to put there, only made-up ones. When algorithms do weight-tuning, they at least get to work with real data. Despite this limitation, the framework may still be useful to get a directional sense¹.
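To make the framing concrete, here is a sketch of that weighted average applied to a job decision. Every weight, offer, and score below is invented purely for illustration:

```python
# Weighted-average framing for a job decision.
# All weights and scores are made up; the point is the structure, not the numbers.
weights = {"role": 0.30, "boss": 0.25, "culture": 0.20,
           "compensation": 0.15, "geography": 0.10}

# Gut-feel scores out of 10 for two hypothetical offers.
offers = {
    "startup":  {"role": 9, "boss": 7, "culture": 8, "compensation": 6, "geography": 5},
    "big tech": {"role": 6, "boss": 8, "culture": 7, "compensation": 9, "geography": 8},
}

for name, scores in offers.items():
    total = sum(weights[p] * scores[p] for p in weights)
    print(f"{name}: {total:.2f}")
```

Notice that nudging the weights even slightly can flip the ranking, which is exactly why being honest about them matters.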
I assume the left-brained part of you has geeked out a lot on this, whereas the right-brained part might be dying inside, feeling discarded. It has a point though: one crucial component missing in ‘thinking like an algorithm’ is the feeling part - what if we just like something despite the weighted average of rational parameters pointing the other way? In that case, the weights often don’t capture this emotion, or if they do, they are by definition biased, and you would be okay with that.
Mathematics doesn’t have all the answers in life, but it does sometimes help frame problems better. When to use this framework versus not is, again, a meta exploitation vs exploration problem. :P
¹ If you are honest about the made-up numbers
Interested in reading more? Try out Opportunism and Conviction and Maslow Hierarchy for companies.