Mental models for ML products
Avoid missing the forest for the trees with mental models
April 5, 2022
5-minute read
Machine learning (ML) models have been rapidly gaining space and relevance in our society. They promise to disrupt numerous application areas by transforming decision-making and human-computer interaction.
Consequently, a single glance at the landscape of ML products is enough to leave anyone feeling overwhelmed by the vast variety of projects currently being developed. While striving to keep up with everything out there, it is easy to miss the forest for the trees.
That’s why using ML product mental models can come in handy.
According to Shane Parrish, author of “The Great Mental Models” series of books, “a mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks.”
Each ML product, despite its peculiarities, ends up fitting into a product mental model, an archetype. Furthermore, depending on the model that it fits into, the problems it faces and the important considerations to keep in mind are strikingly similar.
In this post, we expand on the three archetypes of ML products, which can be seen as mental models, presented in the Full Stack Deep Learning course from UC Berkeley. Feel free to check it out; it is a great resource on many aspects of deploying ML models in the real world.
The first ML product archetype is software 2.0, which borrows its name from a famous blog post by Andrej Karpathy, Director of AI at Tesla.
The logic behind traditional software systems is written explicitly via rules. In contrast, ML models learn the logic that dictates their behavior from examples. The idea of software 2.0 products, then, is using ML systems to replace or improve rule-based systems.
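The contrast between explicit rules and learned logic can be made concrete with a toy spam check. This is a hedged sketch with made-up data and function names, not a real implementation: the first version encodes the logic by hand (software 1.0), while the second derives it from labeled examples (software 2.0).

```python
def is_spam_rules(text: str) -> bool:
    # Software 1.0: the logic is written explicitly as rules.
    return "free money" in text.lower() or "winner" in text.lower()


def train_spam_model(examples):
    # Software 2.0: the logic is learned from labeled examples.
    # Toy "model": words seen more often in spam than in ham become spam cues.
    spam_counts, ham_counts = {}, {}
    for text, label in examples:
        counts = spam_counts if label == "spam" else ham_counts
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def predict(text: str) -> bool:
        # Score each word by how much more often it appeared in spam.
        score = sum(
            spam_counts.get(w, 0) - ham_counts.get(w, 0)
            for w in text.lower().split()
        )
        return score > 0

    return predict


examples = [
    ("claim your free money now", "spam"),
    ("you are a winner", "spam"),
    ("meeting notes attached", "ham"),
    ("see you at lunch", "ham"),
]
model = train_spam_model(examples)
```

The key difference: to change the behavior of `is_spam_rules` you edit code, whereas to change the behavior of `model` you change the training data, which is exactly what makes the data flywheel discussed below so valuable.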
An example of a product that fits this archetype is a video game AI that is powered by ML and improves a previous version that was based on heuristics. Another example would be improving a recommendation system using ML instead of a simpler system based on matrix factorization techniques.
Due to the recent astronomical successes of ML, it is common to see teams rushing to implement ML solutions to problems that could be more appropriately solved by rules. ML models are not silver bullets, and the developers of products that fall into the software 2.0 category need to think carefully if using ML indeed has the potential to improve their products.
Now, if you concluded that ML is the way to go and your product fits the software 2.0 archetype, there is a concept that you need to be aware of that can significantly improve your solution. It is the concept of a data flywheel.
Healthy ML products benefit tremendously from the virtuous circle provided by data flywheels. In general, if you have more (high-quality) data, your model improves and, if your model improves, you attract more users. If you manage to close the link between more users and more data, then you have successfully set up a data flywheel.
This virtuous cycle, if set up correctly, has the potential to drastically increase the quality of the models powering your product. However, measuring model improvement can be tricky and if by “better” you mean “with a higher accuracy”, your data flywheel might be broken even if you haven’t realized it yet. In one of our previous blog posts, we discussed important aspects of model evaluation, which you might find useful when trying to get high-quality models.
The second product archetype is human-in-the-loop (HIL) systems. In these products, the output of the ML model is first reviewed by a human before anything happens in the real world.
This is where an e-mail autocompletion system would fit in: the system makes the sentence suggestion and the human is the one who ultimately decides whether to use it. Another good example of a HIL system would be the use of an ML model to detect tumors in medical images. Such a system (at least for the near future) is unlikely to operate autonomously; rather, it can be used to help radiologists with their processes.
For HIL systems, since there is ultimately a human interacting with the outputs of ML models, the design can be leveraged to provide a pleasant user experience. Furthermore, risks stemming from model behavior can sometimes be mitigated by a design that accounts for how humans interact with the system.
It is also important to keep the idea of the data flywheel in mind for HIL systems. The ideal spot to be in is one where your system's design encourages users to help you construct a dataset, tightening the link between more users and more data. For example, the suggested sentences that end up being used by the user are probably good, so this data can be used to augment your training set. Another way to accomplish this is by asking users for feedback. The radiologists who use the ML system to assess tumors might, for a given sample, provide feedback on whether the model was right or wrong.
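The feedback-capture step above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and field names are made up, not from any specific product): each suggestion the user accepts or rejects is logged, and accepted outputs later become new training examples.

```python
from dataclasses import dataclass, field


@dataclass
class FeedbackLog:
    """Logs how users respond to model outputs in a HIL product."""
    examples: list = field(default_factory=list)

    def record(self, model_input, model_output, accepted: bool):
        # An accepted suggestion becomes a (weak) positive training example;
        # a rejected one can still serve as a negative signal.
        self.examples.append({
            "input": model_input,
            "output": model_output,
            "label": "positive" if accepted else "negative",
        })

    def training_batch(self):
        # Here, only accepted outputs are used to augment the training set.
        return [e for e in self.examples if e["label"] == "positive"]


log = FeedbackLog()
log.record("Thanks for your", " email", accepted=True)
log.record("Thanks for your", " cooperation", accepted=False)
```

Feeding `log.training_batch()` back into retraining is what closes the flywheel: more users produce more accepted suggestions, which produce more training data.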
The third archetype is autonomous systems, which, as the name suggests, operate completely autonomously. There is no human in the loop, and the outputs of the model act directly in the real world.
The classic example of an autonomous system would be a fully self-driving car powered by ML. The models are the ones in charge, and their outputs directly control the steering wheel.
Autonomous systems have a high impact but are much more challenging to develop and deploy than the previous two product archetypes. However, there is a way to increase the feasibility of autonomous systems, which many players in the industry follow: adding a human back into the loop. Doing so makes it possible to de-risk some of the model's behaviors. One example is an autopilot that works only when there is a driver behind the wheel, ready to take over if things go sour.
When autonomous systems are temporarily demoted to HIL systems, it is possible to learn more about the problem and construct a more comprehensive dataset, which are both paramount to eventually going fully autonomous.
Currently, there is a lot going on in ML, and understanding all the projects is challenging. Mental models are constructs that help us make sense of it all by bundling complex ideas into simpler, understandable chunks. Hopefully, these ML product mental models are useful to you both when analyzing ML projects and when assessing which archetype your own product fits into.