It's a question we hear often at DeepMirror, and it's not surprising. Across the media, we see companies spending billions to build or incorporate the latest algorithms and models in fields from drug discovery to construction. With so much hype, it's natural for companies to want the "best" AI on the market. But this question reveals a fundamental misunderstanding about how AI creates value. The assumption is that a good AI model automatically gets applied effectively. Our experience suggests otherwise. The product matters as much as the AI, if not more.
Let's take a step back and look at the current state of AI adoption in biopharma. On paper, there has been a significant surge in pharmaceutical companies using or experimenting with AI in their R&D processes. Industry reports and conference presentations are full of exciting case studies and potential applications. But dig a little deeper and you'll find that these AI tools rarely make it into the day-to-day work of medicinal chemists.
Why? Because having technology available doesn't mean it's accessible to the user. Technology needs to be embedded into a product. Having a combustion engine, for example, doesn't mean you have a usable car. And who would have used GPT-3 if it hadn't been for that slick interface? But building a product is not easy: you continually need to balance form and function to deliver an experience that creates value for individuals, companies, or both.
Interestingly, the decision of whether a product creates value is often less rational than we think. Although we all want to believe that we choose which software to use based on real, rational facts, when it comes down to it, we make that decision based on how a piece of software feels to use. Can I see myself using this every day? Is it easy to learn and navigate? Do I feel frustrated or pleased by this software's interface? This is especially true of the chemists we meet.
Take, for example, a conversation we had with a chemist at a recent conference. When discussing different algorithms for a particular task, he admitted, "I use this one because it's faster." Not because it's theoretically better or more accurate, but because he doesn't have to wait as long for results. These seemingly small factors - speed, ease of use, intuitive interfaces - often matter more in practice than having the most advanced algorithm, especially since that algorithm may work well in some cases and worse in others. This resistance isn't unique to chemistry. It's part of a broader trend in technology adoption, where the most sophisticated tools often fail to gain traction because they are cumbersome to use and don't fit naturally into existing workflows.
This realization has shaped our approach to our product at DeepMirror. Instead of focusing solely on developing the most advanced AI models, we have put equal - if not more - emphasis on making these models truly accessible and usable to the end-user.
For instance, when it comes to predicting molecular properties, we could run the most complex and computationally expensive approaches and select the best one. Instead, we intelligently sample and regularly review the possible approaches to keep prediction times short enough for a great user experience, while making sure the algorithm performs well across the majority of scenarios. It's not about having the most sophisticated model, but about having one that chemists can actually use in their day-to-day work.
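To make the trade-off concrete, here is a minimal sketch (not DeepMirror's actual code) of picking a property-prediction approach under a latency budget rather than by accuracy alone. The candidate names, validation scores, and runtimes are all hypothetical.

```python
# Hypothetical candidates: each has a validation score and a typical runtime.
candidates = [
    {"name": "graph_transformer", "val_score": 0.91, "runtime_s": 45.0},
    {"name": "message_passing_nn", "val_score": 0.89, "runtime_s": 12.0},
    {"name": "gradient_boosting", "val_score": 0.87, "runtime_s": 2.5},
    {"name": "ridge_on_fingerprints", "val_score": 0.82, "runtime_s": 0.3},
]

LATENCY_BUDGET_S = 5.0  # the longest wait we consider a good user experience


def pick_model(candidates, budget_s):
    """Return the most accurate candidate that fits the latency budget."""
    fast_enough = [c for c in candidates if c["runtime_s"] <= budget_s]
    if not fast_enough:  # nothing fits: fall back to the fastest option
        return min(candidates, key=lambda c: c["runtime_s"])
    return max(fast_enough, key=lambda c: c["val_score"])


chosen = pick_model(candidates, LATENCY_BUDGET_S)
print(chosen["name"])  # the best model that still feels responsive
```

Under this budget the sketch skips the highest-scoring (but slowest) models and settles on a slightly less accurate one that returns results while the chemist is still looking at the screen - which is the point of the paragraph above.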
Similarly, we've made a conscious decision to limit the set of controls for our predictive and generative capabilities. Predicting molecules doesn't have any controls: you press a single button, and everything is done for you. For generating molecules, the controls are high-level and aligned with the common chemistry language our users are fluent in, hiding unnecessary complexity from them. Just as cars don't require drivers to understand how anti-lock brakes improve handling, scientific software should not require users to understand the intricate details of the loss functions used to generate a prediction. We hope this empowers users to make better decisions and lets them focus on what they care about - finding better molecules - while we take care of the details.
Now, some might argue that this approach oversimplifies things. Aren't we dumbing down the science? Aren't we taking control away from the users? These are valid concerns, but we'd argue that they miss the point. We're not removing complexity; we're hiding it where it would otherwise be a source of distraction. Our users retain full control over the important decisions while we remove the unimportant and distracting ones. Too often, software becomes just another time sink for researchers instead of freeing up their time. We make it easier for them to make those decisions by presenting information in a way that aligns with their existing mental models.
Our product development process reflects this philosophy. We start with user voices and evidence to identify problems and needs, then brainstorm technical solutions as a team. For each potential solution, we ask: Is it feasible? Is it viable given our resources? Do we have evidence that users really need this? Is it usable in an intuitive manner? This process means that technological innovation is just one factor out of many that we consider. It's a stark contrast to companies that chase the latest AI breakthroughs without considering how they'll be used in practice.
The results speak for themselves. We have had customers who were initially skeptical about our claims of ease of use, only to become enthusiastic advocates after trying the software. One customer didn't believe us when we said we could onboard them within an hour, and then fell in love with the product once they saw how easy it was to use.
Of course, we are not perfect. There are still areas where we are not adhering to our product principles as well as we would like. For instance, we currently ask users to convert their data to a log scale when it spans several orders of magnitude, instead of doing this automatically for them. It's on our list of things to improve, because it goes against our philosophy of simplifying users' lives wherever possible.
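For readers unfamiliar with the step we currently ask users to do by hand, here is an illustrative sketch of the kind of transformation involved: converting assay values that span several orders of magnitude (IC50 in nanomolar, in this example) onto a log scale so a model sees a more even spread. The column choice and units are assumptions for illustration.

```python
import math

# Raw IC50 values in nanomolar, spanning roughly four orders of magnitude.
ic50_nm = [3.2, 48.0, 510.0, 12000.0]


def to_pic50(values_nm):
    """Convert IC50 in nanomolar to pIC50 = -log10(IC50 in molar)."""
    return [-math.log10(v * 1e-9) for v in values_nm]


print(to_pic50(ic50_nm))  # higher pIC50 means a more potent compound
```

The transformation itself is a one-liner, which is exactly why asking users to do it by hand feels like a gap: it is the kind of detail a product should quietly handle.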
So, what makes our AI "better"? It's not only about having the most advanced algorithms or the largest training datasets. It's equally about creating AI tools that amplify the skills and intuitions of experienced scientists, rather than asking scientists to adapt their work to the demands of an AI system. In this way, a user gets access to usable state-of-the-art technology.
The best AI isn't the one with the most impressive specs on paper. It's the one that scientists can use – and use effectively – in their day-to-day work. That's the kind of AI engine we're building at DeepMirror, and it's the kind of AI that we believe will make drug discovery faster.